#typescript #agent #agentic #agentic_ai #agents #agents_sdk #ai #ai_agents #aiagentframework #genai #genai_chatbot #llm #llms #multi_agent #multi_agent_systems #multi_agents #multi_agents_collaboration
Agent Development Kit (ADK) for TypeScript is an open-source toolkit to build, test, and deploy advanced AI agents with full control in code. Key features include rich tools like Google Search, custom functions, and multi-agent hierarchies for scalable apps, plus a dev UI for easy debugging. Install via `npm install @google/adk`. You benefit by creating flexible, versioned AI agents that integrate tightly with Google Cloud, run anywhere from laptop to cloud, and can be developed and iterated on like regular software.
https://github.com/google/adk-js
GitHub
GitHub - google/adk-js: An open-source, code-first Typescript toolkit for building, evaluating, and deploying sophisticated AI…
An open-source, code-first Typescript toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control. - google/adk-js
#go #gemma3 #gpt_oss #granite4 #llama #llama3 #llm #on_device_ai #phi3 #qwen3 #qwen3vl #sdk #stable_diffusion #vlm
NexaSDK runs AI models locally on CPUs, GPUs, and NPUs with a single command, supports GGUF/MLX/.nexa formats, and offers NPU-first Android and macOS support for fast, multimodal (text, image, audio) inference, plus an OpenAI‑compatible API for easy integration. This gives you low-latency, private on-device AI across laptops, phones, and embedded systems, reduces cloud costs and data exposure, and lets you deploy and test new models immediately on target hardware for faster development and better user experience.
https://github.com/NexaAI/nexa-sdk
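Because the local server speaks the OpenAI wire format, any OpenAI-style client can talk to a NexaSDK model just by pointing at its base URL. A minimal sketch of the request body such a client would POST (the endpoint URL and model name below are placeholder assumptions, not verified NexaSDK defaults):

```python
import json

# Hypothetical local endpoint and model name; check the NexaSDK docs
# for the actual host, port, and model identifiers on your machine.
BASE_URL = "http://localhost:8080/v1"

def chat_request(model: str, user_message: str, temperature: float = 0.7) -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions.

    Any OpenAI-compatible server accepts this shape, so the same client
    code works against a local NexaSDK server or a hosted API.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

body = chat_request("qwen3", "Summarize this log file in one sentence.")
payload = json.dumps(body)  # what an HTTP client would send over the wire
```

Swapping between local and cloud inference then becomes a base-URL change rather than a client rewrite.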
GitHub
GitHub - NexaAI/nexa-sdk: Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime…
Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 &...
#python #ai #bug_detection #code_audit #code_quality #code_review #developer_tools #devsecops #google_gemini #llm #react #sast #security_scanner #supabase #typescript #vite #vulnerability_scanner #xai
**DeepAudit** is an AI-powered code audit tool that uses multi-agent collaboration to deeply scan projects for vulnerabilities like SQL injection, XSS, and path traversal. Import code from GitHub/GitLab or paste snippets; agents plan, analyze with RAG knowledge, and verify issues through proof-of-concept exploits in a secure Docker sandbox, then generate PDF reports with fix suggestions. It deploys with one Docker command, supports local Ollama models for privacy, and cuts the high false-positive rates of traditional scanners. **You benefit** by automating security audits at the level of a skilled pentester: saving time, reducing errors, confirming that flagged issues are genuinely exploitable, and speeding safe releases without manual hassle.
https://github.com/lintsinghua/DeepAudit
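To make the vulnerability classes concrete, here is a toy single-rule scanner that flags the classic injection pattern (SQL built by string formatting or concatenation). This is an illustrative sketch only; DeepAudit's real analysis is LLM-driven and agentic, not a regex pass:

```python
import re

# Toy rule: flag Python lines that build SQL via %-formatting, f-strings,
# or string concatenation inside execute() -- the textbook injection shape.
SQLI_PATTERN = re.compile(
    r"""execute\(\s*["'].*%s.*["']\s*%|execute\(\s*f["']|execute\(.*\+"""
)

def scan(source: str) -> list[int]:
    """Return 1-based line numbers matching the injection pattern."""
    return [
        i for i, line in enumerate(source.splitlines(), start=1)
        if SQLI_PATTERN.search(line)
    ]

code = '''\
cur.execute("SELECT * FROM users WHERE id = %s" % user_id)
cur.execute("SELECT * FROM users WHERE id = ?", (user_id,))
'''
findings = scan(code)  # only the first line is unsafe; the second is parameterized
```

The gap between this and a real audit, namely confirming exploitability, is exactly what DeepAudit's sandboxed PoC step addresses.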
GitHub
GitHub - lintsinghua/DeepAudit: DeepAudit: an AI hacker squad for everyone, making vulnerability hunting accessible. China's first open-source multi-agent system for code vulnerability discovery. One-click deployment even for beginners, autonomous collaborative auditing + automated sandbox PoC verification. Supports Ollama…
DeepAudit: an AI hacker squad for everyone, making vulnerability hunting accessible. China's first open-source multi-agent system for code vulnerability discovery. One-click deployment even for beginners, autonomous collaborative auditing + automated sandbox PoC verification. Supports private Ollama deployment and one-click report generation. Supports API relay endpoints. Making security affordable and auditing simple. - lintsinghua/DeepAudit
#rust #ai #change_data_capture #context_engineering #data #data_engineering #data_indexing #data_infrastructure #data_processing #etl #hacktoberfest #help_wanted #indexing #knowledge_graph #llm #pipeline #python #rag #real_time #semantic_search
**CocoIndex** is a fast, open-source Python tool (Rust core) for transforming data into AI formats like vector indexes or knowledge graphs. Define simple data flows in ~100 lines of code using plug-and-play blocks for sources, embeddings, and targets—install via `pip install cocoindex`, add Postgres, and run. It auto-syncs fresh data with minimal recompute on changes, tracking lineage. **You save time building scalable RAG/semantic search pipelines effortlessly, avoiding complex ETL and stale data issues for production-ready AI apps.**
https://github.com/cocoindex-io/cocoindex
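The "minimal recompute on changes" claim rests on incremental processing: track a content fingerprint per document and skip the expensive transform when nothing changed. A minimal sketch of that idea (this is the concept, not the cocoindex API; the uppercase step stands in for a real embedding call):

```python
import hashlib

# doc_id -> (content_hash, cached_result); which docs were re-processed
_state: dict[str, tuple[str, str]] = {}
recomputed: list[str] = []

def sync(doc_id: str, text: str) -> str:
    digest = hashlib.sha256(text.encode()).hexdigest()
    cached = _state.get(doc_id)
    if cached and cached[0] == digest:
        return cached[1]                  # content unchanged: reuse result
    result = text.upper()                 # stand-in for a real embed/transform
    _state[doc_id] = (digest, result)
    recomputed.append(doc_id)
    return result

sync("a", "hello"); sync("b", "world")    # first run: both computed
sync("a", "hello"); sync("b", "world!")   # second run: only "b" changed
```

Scaled up to real pipelines, this is what keeps a vector index fresh without re-embedding an entire corpus on every run.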
GitHub
GitHub - cocoindex-io/cocoindex: Data transformation framework for AI. Ultra performant, with incremental processing. 🌟 Star if…
Data transformation framework for AI. Ultra performant, with incremental processing. 🌟 Star if you like it! - cocoindex-io/cocoindex
#python #gemini #gemini_ai #gemini_api #gemini_flash #gemini_pro #information_extration #large_language_models #llm #nlp #structured_data
**LangExtract** is a free Python library that uses AI models like Gemini to pull structured data—like names, emotions, or meds—from messy text such as reports or books. It links every fact to its exact spot in the original, creates interactive visuals for easy checks, handles huge files fast with chunking and parallel runs, and works with cloud or local models without fine-tuning. You benefit by quickly turning unstructured docs into reliable, organized data for analysis, saving time and boosting accuracy in fields like healthcare or research.
https://github.com/google/langextract
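The "links every fact to its exact spot" feature is called source grounding: each extraction carries the character span it came from, so claims can be verified against the original text. A toy illustration of the idea using plain substring search (LangExtract's real pipeline is LLM-driven; this only shows the span bookkeeping):

```python
# Attach character offsets to each extracted fact so it can be traced
# back to, and highlighted in, the source document.
def ground(text: str, facts: list[str]) -> list[dict]:
    grounded = []
    for fact in facts:
        start = text.find(fact)
        if start != -1:
            grounded.append({"text": fact, "start": start, "end": start + len(fact)})
    return grounded

report = "Patient reports headache. Prescribed ibuprofen 400mg twice daily."
spans = ground(report, ["headache", "ibuprofen 400mg"])
# slicing the source at a span reproduces the fact exactly
```

Those offsets are what make the interactive highlighting and auditability possible in domains like healthcare, where an unverifiable extraction is worthless.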
GitHub
GitHub - google/langextract: A Python library for extracting structured information from unstructured text using LLMs with precise…
A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization. - google/langextract
#python #ai_tool #darkweb #darkweb_osint #investigation_tool #llm_powered #osint #osint_tool
Robin is an AI tool that searches and scrapes the dark web, refines queries with large language models, filters results, and produces a concise investigation summary you can save or export. It offers Docker and CLI options and supports multiple LLMs (OpenAI, Anthropic, Gemini, local models) to fit your workflow. This saves hours of manual searching by automating multi-engine dark-web searches, scraping .onion sites via Tor, filtering noise with AI, and producing ready-to-use reports for faster, more focused OSINT investigations.
https://github.com/apurvsinghgautam/robin
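To give a feel for the "filter noise" step, here is a toy relevance filter that scores each search hit against the investigation query by keyword overlap and drops low scorers. Robin's actual filtering uses an LLM; the scoring function and threshold below are made-up stand-ins:

```python
# Fraction of query terms that appear in a snippet (0.0 to 1.0).
def relevance(query: str, snippet: str) -> float:
    q = set(query.lower().split())
    s = set(snippet.lower().split())
    return len(q & s) / len(q) if q else 0.0

def filter_hits(query: str, snippets: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only snippets that clear the relevance threshold."""
    return [s for s in snippets if relevance(query, s) >= threshold]

hits = [
    "leaked credentials database forum post",
    "unrelated marketplace listing for electronics",
]
kept = filter_hits("leaked credentials database", hits)
```

An LLM-based filter plays the same role but can judge semantic relevance, not just shared keywords.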
GitHub
GitHub - apurvsinghgautam/robin: AI-Powered Dark Web OSINT Tool
AI-Powered Dark Web OSINT Tool. Contribute to apurvsinghgautam/robin development by creating an account on GitHub.
#python #agent #agentic_ai #agentic_framework #agentic_workflow #ai #ai_agents #ai_companion #ai_roleplay #benchmark #framework #llm #mcp #memory #open_source #sandbox
MemU lets AI systems take in conversations, documents, and media, turn them into structured memories, and store them in a clear three-layer file system. It offers both fast embedding search and deeper LLM-based retrieval, works with many data types, and supports cloud or self-hosted setups with simple APIs. This helps you build AI agents that truly remember past interactions, retrieve the right context when needed, and improve over time, making your applications more accurate, personal, and efficient.
https://github.com/NevaMind-AI/memU
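The "fast embedding search" retrieval path boils down to storing memories as vectors and recalling the nearest one by cosine similarity. A self-contained sketch of that mechanism (the tiny hand-made vectors stand in for real embedding-model output; this is the concept, not MemU's API):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# memory text -> its (pretend) embedding vector
memories = {
    "user prefers dark mode": [0.9, 0.1, 0.0],
    "user's cat is named Miso": [0.1, 0.9, 0.2],
}

def recall(query_vec: list[float]) -> str:
    """Return the stored memory whose vector is closest to the query."""
    return max(memories, key=lambda m: cosine(query_vec, memories[m]))

best = recall([0.2, 0.95, 0.1])  # a query vector "about the cat"
```

The deeper LLM-based retrieval MemU also offers trades this speed for reasoning over candidates when similarity alone is not enough.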
GitHub
GitHub - NevaMind-AI/memU: Memory infrastructure for LLMs and AI agents
Memory infrastructure for LLMs and AI agents. Contribute to NevaMind-AI/memU development by creating an account on GitHub.
#javascript #agent #agentic #agentic_ai #ai #ai_agents #automation #cursor #design #figma #generative_ai #llm #llms #mcp #model_context_protocol
Cursor Talk to Figma MCP lets Cursor AI read and edit your Figma designs directly, using tools like `get_selection` for info, `set_text_content` for bulk text changes, `create_rectangle` for shapes, and `set_instance_overrides` for components. Setup is quick: install Bun, run `bun setup` and `bun socket`, add the Figma plugin. This saves you hours by skipping context switches, automating repetitive tasks like text replacement or override propagation, speeding up design-to-code workflows, and keeping everything in sync for faster, precise builds.
https://github.com/grab/cursor-talk-to-figma-mcp
GitHub
GitHub - grab/cursor-talk-to-figma-mcp: TalkToFigma: MCP integration between Cursor and Figma, allowing Cursor Agentic AI to communicate…
TalkToFigma: MCP integration between Cursor and Figma, allowing Cursor Agentic AI to communicate with Figma for reading designs and modifying them programmatically. - grab/cursor-talk-to-figma-mcp
#typescript #acp #ai #ai_agent #banana #chat #chatbot #claude_code #codex #cowork #excel #gemini #gemini_cli #gemini_pro #llm #multi_agent #nano_banana #office #qwen_code #skills #webui
AionUi is a free, open-source app that gives your CLI AI tools like Gemini CLI, Claude Code, and Qwen Code a simple graphical interface on macOS, Windows, or Linux. It auto-detects them for easy chatting, saves conversations locally with multi-session support, organizes files smartly, previews 9+ formats like PDF or code instantly, generates and edits images, and offers web access. You benefit by ditching complex commands for quick, secure AI help in office tasks, coding, or data work, saving time and boosting productivity without data leaving your device.
https://github.com/iOfficeAI/AionUi
GitHub
GitHub - iOfficeAI/AionUi: Free, local, open-source Cowork for Gemini CLI, Claude Code, Codex, Opencode, Qwen Code, Goose Cli,…
Free, local, open-source Cowork for Gemini CLI, Claude Code, Codex, Opencode, Qwen Code, Goose Cli, Auggie, and more | 🌟 Star if you like it! - iOfficeAI/AionUi
#jupyter_notebook #chinese_llm #chinese_nlp #finetune #generative_ai #instruct_gpt #instruction_set #llama #llm #lora #open_models #open_source #open_source_models #qlora
AirLLM is a tool that lets you run very large AI models on computers with limited memory by using a smart layer-by-layer loading technique instead of traditional compression methods. You can run a 70-billion-parameter model on just 4GB of GPU memory, or even a 405-billion-parameter model on 8GB, without losing model quality. The benefit is that you can use powerful AI models on affordable hardware without expensive upgrades, and the tool also offers optional compression features that can speed up performance by up to 3 times while maintaining accuracy.
https://github.com/lyogavin/airllm
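The layer-by-layer technique works because a transformer's layers run sequentially: only the current layer's weights need to be resident while activations flow through. A toy simulation of that schedule, with scalar multipliers standing in for real weight tensors (a conceptual sketch, not AirLLM's implementation):

```python
# Pretend on-disk weights: layer i multiplies its input by (i + 1).
layer_store = {i: float(i + 1) for i in range(4)}
peak_resident = 0  # max number of layers held "in memory" at once

def forward(x: float) -> float:
    """Run all layers while keeping only one layer's weights loaded."""
    global peak_resident
    for i in range(len(layer_store)):
        weights = {i: layer_store[i]}               # load ONLY this layer
        peak_resident = max(peak_resident, len(weights))
        x = x * weights[i]                          # apply the layer
        del weights                                 # free before the next load
    return x

out = forward(1.0)  # 1 * 1 * 2 * 3 * 4
```

Memory use stays at one layer regardless of model depth, which is why a 70B model can squeeze into 4GB; the cost is the repeated load per forward pass, which AirLLM's optional compression helps amortize.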
GitHub
GitHub - lyogavin/airllm: AirLLM 70B inference with single 4GB GPU
AirLLM 70B inference with single 4GB GPU. Contribute to lyogavin/airllm development by creating an account on GitHub.
#python #deepseek #demo #easy #embedding #flask #gpt #huggingface_transformers #llm #mcp #multimodal #openai #qwen #rag #sentence_transformers #ui #vllm #vlm
UltraRAG is a lightweight framework that makes building retrieval-augmented generation (RAG) systems simple and fast. It uses a low-code approach where you write just dozens of lines of YAML configuration instead of complex code to create sophisticated AI workflows with conditional logic and loops. The framework includes a visual development environment where you can drag-and-drop to build pipelines, adjust parameters in real-time, and instantly convert your logic into interactive chat applications. This means you can deploy powerful AI systems that ground answers in your own data—reducing hallucinations and improving accuracy—without needing extensive coding expertise or lengthy development cycles.
https://github.com/OpenBMB/UltraRAG
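The low-code idea is that a declarative spec, not imperative code, drives the workflow, including loops with exit conditions. A toy interpreter over a dict-shaped spec shows the mechanism; UltraRAG's real format is YAML with its own schema, and every name below (`retrieve`, `refine`, `until_score`) is a hypothetical stand-in:

```python
# Declarative pipeline: a loop step repeats "refine" until a quality
# score clears a bar or the iteration cap is hit. Scores are integers
# (percent) to keep the arithmetic exact.
pipeline = {
    "steps": [
        {"op": "retrieve"},
        {"loop": {"op": "refine", "until_score": 90, "max_iters": 5}},
        {"op": "generate"},
    ]
}

def run(spec: dict) -> list[str]:
    trace, score = [], 50
    for step in spec["steps"]:
        if "loop" in step:
            cfg = step["loop"]
            for _ in range(cfg["max_iters"]):
                if score >= cfg["until_score"]:
                    break
                score += 20                # pretend each pass improves retrieval
                trace.append(cfg["op"])
        else:
            trace.append(step["op"])
    return trace

trace = run(pipeline)  # retrieve, refine twice, then generate
```

Editing the spec rather than the interpreter is what lets a few dozen lines of YAML express conditionals and loops without custom code.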
GitHub
GitHub - OpenBMB/UltraRAG: UltraRAG v2: A Low-Code MCP Framework for Building Complex and Innovative RAG Pipelines
UltraRAG v2: A Low-Code MCP Framework for Building Complex and Innovative RAG Pipelines - OpenBMB/UltraRAG