GitHub Trends
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#rust #ai #change_data_capture #context_engineering #data #data_engineering #data_indexing #data_infrastructure #data_processing #etl #hacktoberfest #help_wanted #indexing #knowledge_graph #llm #pipeline #python #rag #real_time #semantic_search

**CocoIndex** is a fast, open-source Python tool (with a Rust core) for transforming data into AI-ready formats like vector indexes or knowledge graphs. You define simple data flows in roughly 100 lines of code from plug-and-play blocks for sources, embeddings, and targets: install via `pip install cocoindex`, add Postgres, and run. It keeps indexes in sync with fresh data, recomputing only what changed and tracking lineage. **You get scalable RAG and semantic-search pipelines without complex ETL or stale-data issues, ready for production AI apps.**
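
To give a feel for the flow-definition style, here is a minimal sketch paraphrased from my reading of the project's quickstart (chunk local markdown files, embed the chunks, export to a Postgres vector index); exact class and function names such as `SplitRecursively`, `SentenceTransformerEmbed`, and the Postgres target may differ between versions:

```python
import cocoindex

# Sketch of a CocoIndex-style flow: local markdown files -> chunks -> embeddings -> Postgres.
# Names follow the quickstart as I recall it and may not match the current API exactly.
@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Plug-and-play source block
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    doc_embeddings = data_scope.add_collector()
    with data_scope["documents"].row() as doc:
        # Chunk each document, then embed each chunk
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)
        with doc["chunks"].row() as chunk:
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            doc_embeddings.collect(filename=doc["filename"], text=chunk["text"],
                                   embedding=chunk["embedding"])

    # Export to a Postgres-backed vector index (target block)
    doc_embeddings.export("doc_embeddings", cocoindex.storages.Postgres(),
                          primary_key_fields=["filename"])
```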

https://github.com/cocoindex-io/cocoindex
#python #gemini #gemini_ai #gemini_api #gemini_flash #gemini_pro #information_extraction #large_language_models #llm #nlp #structured_data

**LangExtract** is a free Python library that uses AI models such as Gemini to pull structured data (names, emotions, medications, and so on) from messy text like reports or books. It links every extracted fact to its exact location in the source, creates interactive visualizations for easy review, handles huge files quickly with chunking and parallel runs, and works with cloud or local models without fine-tuning. You benefit by quickly turning unstructured documents into reliable, organized data for analysis, saving time and improving accuracy in fields like healthcare and research.
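
A minimal sketch of an extraction call, paraphrased from my reading of the project's README (prompt plus a few-shot example, run against a Gemini model); parameter names and the model id are assumptions that may differ slightly from the current API:

```python
import langextract as lx

prompt = "Extract characters and emotions in order of appearance."

# One few-shot example showing the desired schema.
examples = [
    lx.data.ExampleData(
        text="ROMEO. But soft! What light through yonder window breaks?",
        extractions=[
            lx.data.Extraction(
                extraction_class="character",
                extraction_text="ROMEO",
                attributes={"emotional_state": "wonder"},
            ),
        ],
    )
]

result = lx.extract(
    text_or_documents="Lady Juliet gazed longingly at the stars...",
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",  # illustrative; cloud or local models can be used
)
```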

https://github.com/google/langextract
#python #ai_tool #darkweb #darkweb_osint #investigation_tool #llm_powered #osint #osint_tool

Robin is an AI tool for dark-web OSINT: it refines your query with large language models, runs multi-engine dark-web searches, scrapes Onion sites via Tor, filters out the noise, and produces a concise investigation summary you can save or export. It supports multiple LLMs (OpenAI, Anthropic, Gemini, local models) and ships with Docker and CLI options, saving you hours of manual searching and giving you ready-to-use reports for faster, more focused investigations.

https://github.com/apurvsinghgautam/robin
#python #agent #agentic_ai #agentic_framework #agentic_workflow #ai #ai_agents #ai_companion #ai_roleplay #benchmark #framework #llm #mcp #memory #open_source #sandbox

MemU lets AI systems take in conversations, documents, and media, turn them into structured memories, and store them in a clear three-layer file system. It offers both fast embedding search and deeper LLM-based retrieval, works with many data types, and supports cloud or self-hosted setups with simple APIs. This helps you build AI agents that truly remember past interactions, retrieve the right context when needed, and improve over time, making your applications more accurate, personal, and efficient.
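
For flavor, here is a minimal sketch of the memorize step described above, based on my possibly outdated reading of the project's cloud-client README; treat the class and method names (`MemuClient`, `memorize_conversation`) and their parameters as assumptions that may not match the current API:

```python
import os
from memu import MemuClient  # assumption: client class exposed by the memU SDK

client = MemuClient(
    base_url="https://api.memu.so",        # hosted service; self-hosting is also supported
    api_key=os.getenv("MEMU_API_KEY"),
)

# Ingest a conversation so it can be distilled into structured, retrievable memories.
client.memorize_conversation(
    conversation="user: I prefer morning meetings.\nassistant: Noted, I'll schedule accordingly.",
    user_id="user001", user_name="Alice",
    agent_id="assistant001", agent_name="Helper",
)
```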

https://github.com/NevaMind-AI/memU
#javascript #agent #agentic #agentic_ai #ai #ai_agents #automation #cursor #design #figma #generative_ai #llm #llms #mcp #model_context_protocol

Cursor Talk to Figma MCP lets Cursor AI read and edit your Figma designs directly, using tools like `get_selection` for info, `set_text_content` for bulk text changes, `create_rectangle` for shapes, and `set_instance_overrides` for components. Setup is quick: install Bun, run `bun setup` and `bun socket`, add the Figma plugin. This saves you hours by skipping context switches, automating repetitive tasks like text replacement or override propagation, speeding up design-to-code workflows, and keeping everything in sync for faster, precise builds.

https://github.com/grab/cursor-talk-to-figma-mcp
#typescript #acp #ai #ai_agent #banana #chat #chatbot #claude_code #codex #cowork #excel #gemini #gemini_cli #gemini_pro #llm #multi_agent #nano_banana #office #qwen_code #skills #webui

AionUi is a free, open-source app that gives your CLI AI tools like Gemini CLI, Claude Code, and Qwen Code a simple graphical interface on macOS, Windows, or Linux. It auto-detects installed tools for easy chatting, saves conversations locally with multi-session support, organizes files smartly, instantly previews 9+ formats such as PDF and code, generates and edits images, and offers web access. You benefit by ditching complex commands for quick, secure AI help in office tasks, coding, or data work, saving time and boosting productivity without data leaving your device.

https://github.com/iOfficeAI/AionUi
#jupyter_notebook #chinese_llm #chinese_nlp #finetune #generative_ai #instruct_gpt #instruction_set #llama #llm #lora #open_models #open_source #open_source_models #qlora

AirLLM is a tool that lets you run very large AI models on computers with limited memory by using a smart layer-by-layer loading technique instead of traditional compression methods. You can run a 70-billion-parameter model on just 4GB of GPU memory, or even a 405-billion-parameter model on 8GB, without losing model quality. The benefit is that you can use powerful AI models on affordable hardware without expensive upgrades, and the tool also offers optional compression features that can speed up performance by up to 3 times while maintaining accuracy.
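
The usage pattern is close to a standard Transformers workflow; the sketch below follows my reading of the project's README, with the model id and exact keyword arguments as illustrative assumptions:

```python
from airllm import AutoModel

# Layers stream from disk one at a time during inference, so a 70B model can run
# with only a few GB of GPU memory (trading speed for memory).
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")

input_tokens = model.tokenizer(
    ["What is the capital of the United States?"],
    return_tensors="pt", truncation=True, max_length=128,
)

output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20, use_cache=True, return_dict_in_generate=True,
)
print(model.tokenizer.decode(output.sequences[0]))
```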

https://github.com/lyogavin/airllm
#python #deepseek #demo #easy #embedding #flask #gpt #huggingface_transformers #llm #mcp #multimodal #openai #qwen #rag #sentence_transformers #ui #vllm #vlm

UltraRAG is a lightweight framework that makes building retrieval-augmented generation (RAG) systems simple and fast. It uses a low-code approach where you write just dozens of lines of YAML configuration instead of complex code to create sophisticated AI workflows with conditional logic and loops. The framework includes a visual development environment where you can drag-and-drop to build pipelines, adjust parameters in real-time, and instantly convert your logic into interactive chat applications. This means you can deploy powerful AI systems that ground answers in your own data—reducing hallucinations and improving accuracy—without needing extensive coding expertise or lengthy development cycles.

https://github.com/OpenBMB/UltraRAG
#python #abliteration #llm #transformer

Heretic is an automated tool that removes safety restrictions from AI language models while preserving their intelligence and capabilities. It uses a technique called directional ablation to identify and disable the "refusal mechanisms" that prevent models from answering certain questions. The key benefit is that anyone can use it with a simple command; no technical expertise is needed. Unlike manual methods that often damage model quality, Heretic achieves the same level of censorship removal while preserving far more of the original model's reasoning ability, as measured by lower KL divergence scores. This means you get an uncensored model that still thinks clearly and produces high-quality responses.
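
Heretic automates this end to end, but the underlying idea of directional ablation can be sketched in a few lines: estimate a "refusal direction" from the difference between mean activations on refused versus answered prompts, then project that direction out of the weight matrices that write to the residual stream. This is an illustrative sketch of the general technique, not Heretic's actual code:

```python
import torch

def refusal_direction(refused_acts: torch.Tensor, answered_acts: torch.Tensor) -> torch.Tensor:
    # Difference-of-means estimate of the "refusal direction".
    # Both inputs are (num_prompts, d_model) activations collected at some layer.
    d = refused_acts.mean(dim=0) - answered_acts.mean(dim=0)
    return d / d.norm()

def ablate_direction(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Project the refusal direction out of a weight matrix that writes to the
    # residual stream: W' = (I - d d^T) W, where W has shape (d_model, d_in) and
    # d is a unit vector, so the layer can no longer write along d.
    return W - torch.outer(d, d) @ W
```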

https://github.com/p-e-w/heretic
#python #ai #claude #gemini #llama #llm #openai

You can access powerful AI language models for free or with trial credits through multiple legitimate platforms. Services like OpenRouter, Google AI Studio, Groq, and Mistral offer free tiers with varying request limits, while others like Fireworks, Baseten, and Inference.net provide trial credits ranging from $1 to $30. These platforms support diverse models including Llama, Gemma, Qwen, and DeepSeek, enabling you to build and test AI applications without upfront costs. The benefit is clear: you can prototype, develop, and deploy AI-powered features while managing your budget effectively, with options to scale up as your needs grow.
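
As an example of how little code it takes to try one of these free tiers, here is a minimal sketch using OpenRouter's OpenAI-compatible endpoint with the official `openai` Python client; the model id is illustrative (free-tier models are usually tagged with a `:free` suffix) and availability changes over time:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct:free",  # illustrative free-tier model id
    messages=[{"role": "user",
               "content": "Explain retrieval-augmented generation in one sentence."}],
)
print(response.choices[0].message.content)
```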

https://github.com/cheahjs/free-llm-api-resources