New Chinese AI model DeepSeek-R1-Lite-Preview is now live:
🔍 o1-preview-level performance on AIME & MATH benchmarks.
💡 Transparent thought process in real-time.
🛠️ Open-source models & API coming soon.
Google DeepMind introduced AlphaQubit, an AI-based system that identifies errors inside quantum computers with greater accuracy.
Google
AlphaQubit tackles one of quantum computing’s biggest challenges
AlphaQubit is an AI-based decoder that identifies quantum computing errors with state-of-the-art accuracy.
ElevenLabs Introduced Conversational AI Bot Builder
ElevenLabs, known for AI voice cloning and text-to-speech tools, now lets developers create customizable conversational bots. Users can fine-tune voice tone, choose LLMs like GPT or Claude, and integrate their own knowledge bases or models.
The platform leverages ElevenLabs’ text-to-speech tech, with speech-to-text capabilities in development. It supports multiple programming environments and offers advanced customization through a WebSocket API.
Targeting a $3 billion valuation, ElevenLabs aims to compete with OpenAI and other voice AI startups, focusing on flexibility and personalization.
TechCrunch
ElevenLabs now offers ability to build conversational AI agents | TechCrunch
ElevenLabs, a startup that provides AI voice cloning and a text-to-speech API, launched the ability to build conversational AI bots on Monday. The company
Stanford researchers discover how to clone human personalities and inject them into AI agents
This builds on last year's paper, which put thousands of fully automated agents in a simulated town.
Last year, Stanford introduced generative agents in a simulated environment, where they formed relationships, created memories, and developed unique personalities.
Now, this new paper takes this concept even further.
The latest paper shows how real human personalities can be integrated into AI agents, enabling them to live their lives in simulations with traits that mirror actual humans.
Two-hour interviews were conducted with 1,000 participants to extract their personalities, and the researchers created generative agents that replicate these individuals' attitudes and behaviors.
The agents replicated participants' General Social Survey responses with 85% accuracy, normalized against how consistently the participants replicated their own answers two weeks later, demonstrating their ability to mimic real human behavior.
Using dynamic follow-up questions in interviews allowed researchers to capture the essence of a person's thoughts and behaviors more accurately than traditional surveys.
These simulations could help predict societal reactions to policies without real-world implementation, such as testing new tax plans on AI societies before rollout.
This research opens up fascinating possibilities for understanding human behavior and decision-making in complex systems. The future of AI could involve highly realistic virtual societies.
A Glimpse Into Future Applications:
1. Video games: NPCs with rich backstories and evolving personalities.
2. Research: Testing new policies in AI societies instead of on people.
3. Training: Personalized agents for education or simulations.
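Per the paper, the 85% figure is a normalized score: raw agent-to-human agreement divided by how consistently participants replicate their own answers two weeks apart. A toy calculation of that normalization, with made-up numbers:

```python
def normalized_accuracy(agent_matches, human_retest_matches, n_questions):
    """Agent accuracy relative to humans' own two-week test-retest consistency."""
    agent_acc = agent_matches / n_questions   # agent vs. participant's original answers
    human_acc = human_retest_matches / n_questions  # participant vs. their own earlier answers
    return agent_acc / human_acc

# Hypothetical numbers for one participant: the agent matched 68 of 100 GSS
# answers, while the participant matched 80 of 100 of their own earlier answers.
print(round(normalized_accuracy(68, 80, 100), 2))  # 0.85
```

This is why a "low" raw agreement can still be a strong result: humans themselves are not perfectly consistent across two weeks.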
arXiv.org
Generative Agent Simulations of 1,000 People
The promise of human behavioral simulation--general-purpose computational agents that replicate human behavior across domains--could enable broad applications in policymaking and social science....
Microsoft said its custom AI chip, Maia, is already in use for Azure OpenAI, and it plans to launch two more custom AI chips. TSMC will likely manufacture them, while contract chip designer Global Unichip will likely win some of the engineering work.
Microsoft already has its Maia AI chip and Cobalt CPU in mass production on TSMC’s 5nm process, and the second generation of each chip is said to be in development on 3nm.
At Microsoft Ignite, the cloud giant announced 2 more chips, Azure Integrated HSM and Azure Boost DPU.
Economic Daily News
Microsoft's in-house chip leaps ahead: "Maia" deployed in AI services, with TSMC and Global Unichip set to benefit | Tech Industry | Economic Daily News
Microsoft, the world's second-largest cloud service provider (CSP), saw its in-house AI chips come into their own, announcing yesterday (the 20th) that its self-developed AI chip "Maia" has begun powering its...
H Company introduced an agent that can execute any task from a prompt
Their "Runner H" can basically turn instructions into action with human-like precision.
Features:
▸ Navigates web interfaces with pixel-level precision.
▸ Interprets pixels and text to understand screens and elements.
▸ Automates workflows for web testing, onboarding, and e-commerce.
▸ Adapts automatically to UI changes.
▸ Achieves a 67% success rate on WebVoyager, outperforming competitors.
Architecture:
▸ Powered by a 2B-parameter LLM for function calling and coding.
▸ Includes a 3B-parameter VLM for understanding graphical and text elements.
www.hcompany.ai
H Company
H is working on frontier action models, to boost the productivity of workers. Building AI capabilities for task automation & decision-making.
Salesforce introduced MOIRAI-MoE: The AI model that outperforms OpenAI, Google, and Amazon in predicting business trends.
Key Features:
• 17% more accurate than previous models
• Faster & more cost-effective than GPT-4
• Autonomous learning & adaptation
💼 Business Applications:
• Sales forecasting
• Inventory optimization
• Delivery route planning
• Staff scheduling
• Customer demand prediction
📊 Benchmark Results:
• Beat GPT-3.5 (OpenAI)
• Outperformed TimesFM (Google)
• Surpassed Chronos (Amazon)
• Set new industry standards
🎯 Perfect for:
• Retail chains
• E-commerce
• Logistics companies
• Manufacturing
• Service industries
Code
Models.
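The "MoE" in the name refers to sparse mixture-of-experts routing. As a rough illustration of the general idea only (not Salesforce's actual implementation), a gating network scores each input token and only the top-k experts are run, keeping compute low while capacity stays high:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_moe(x, expert_weights, gate_weights, top_k=2):
    """Route each time-series token to its top-k experts (illustrative sketch)."""
    logits = x @ gate_weights                      # (tokens, n_experts) gate scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        probs = np.exp(sel - sel.max())            # softmax over selected experts only
        probs /= probs.sum()
        for p, e in zip(probs, top[t]):
            out[t] += p * (x[t] @ expert_weights[e])  # weighted sum of expert outputs
    return out

n_tokens, d_model, n_experts = 4, 8, 6
x = rng.normal(size=(n_tokens, d_model))
experts = rng.normal(size=(n_experts, d_model, d_model))
gates = rng.normal(size=(d_model, n_experts))
y = sparse_moe(x, experts, gates)
print(y.shape)  # (4, 8)
```

Each token activates only 2 of the 6 experts here, which is the mechanism behind "faster and more cost-effective" claims for MoE models generally.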
arXiv.org
Moirai-MoE: Empowering Time Series Foundation Models with Sparse...
Time series foundation models have demonstrated impressive performance as zero-shot forecasters. However, achieving effectively unified training on time series remains an open challenge. Existing...
Vector Institute introduced The Matrix, a foundation world model for generating infinite-length, hyper-realistic videos with real-time, frame-level control:
- Infinite-length video generation
- 720p high-quality rendering
- Real-time, frame-level control at 16 FPS
- Generalization to real-world video control.
Key Innovation: A brand new technique called the shift-window denoise process model, enabling auto-regressive generation for diffusion and consistency models in real-time.
Special thanks to project leader Ruili Feng and the entire Matrix team for their dedication and hard work over the year-long project.
Paper.
Code & Playable Demo: Coming soon.
OpenAI has discussed powering AI in Samsung devices, similar to a deal the AI developer has with Apple
The discussions are part of a multipronged attack by OpenAI on Google, which is Samsung’s most important partner in using the Android mobile operating system and distributing Google’s mobile apps.
OpenAI also has discussed or reached deals with a slew of website publishers and apps—including Condé Nast and Priceline—to use the company’s conversational AI so that readers or customers can interact with the sites and apps in the same conversational tone they use with ChatGPT.
The Information
OpenAI Talks to Samsung About AI Features, Strikes Search Deals with Apps
OpenAI has discussed powering artificial intelligence in Samsung devices, similar to a deal the AI developer has with Apple, The Information reported . The discussions are part of a multipronged attack by OpenAI on Google, which is Samsung’s most important…
Nice paper from Alibaba on building open reasoning models.
They propose Marco-o1 which is a reasoning model built for open-ended solutions.
"Marco-o1 is powered by Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and innovative reasoning strategies—optimized for complex real-world problem-solving tasks."
arXiv.org
Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions
Currently OpenAI o1 sparks a surge of interest in the study of large reasoning models (LRM). Building on this momentum, Marco-o1 not only focuses on disciplines with standard answers, such as...
⚡️❗️ Breaking Ground in BCI: Science (Neuralink's Competitor) Unveils Revolutionary Biohybrid Neural Technology
Science, a neurotechnology company founded by former Neuralink President Max Hodak, has revealed a revolutionary approach to brain-computer interfaces (BCIs) that could fundamentally transform how we interact with the human brain.
Unlike traditional BCIs, including those developed by Neuralink, Science's innovative biohybrid approach utilizes living neurons instead of conventional electrodes.
The company has developed a unique technology where specially engineered neurons, derived from stem cells, are integrated with electronics before being implanted into the brain. The key innovation lies in keeping the neuron cell bodies within the device while allowing their axons and dendrites to naturally grow into the brain tissue, forming new connections with existing neurons.
This breakthrough approach offers several revolutionary advantages:
1. Natural Integration:
- A single implant of one million neurons can create over a billion synaptic connections
- The device occupies less than a cubic millimeter
- Forms genuine chemical synapses with brain cells
2. Versatility:
- Capability to use various neuron types (dopaminergic, cholinergic, glutamatergic)
- Ability to stimulate the brain using natural neurotransmitters
- Superior signal quality with lower power consumption
3. Scalability Potential:
- Technology can be scaled to millions of neurons
- Theoretical bandwidth comparable to the corpus callosum (the structure connecting brain hemispheres)
The development team is addressing several technical challenges:
1. Immunological Compatibility:
- Need to create immune-invisible cells
- Current personalized cell creation process is costly ($1M+) and time-consuming (months)
2. Cell Viability:
- Neurons must survive glycemic shock
- Protection from hypoxia is essential
- Proper glial support required
- Cells must mature within an active electronic device
Science has already published their first paper demonstrating this technology's capabilities.
While their biohybrid approach is still in early development, its potential is immense. It could solve the fundamental limitations of traditional BCIs - brain tissue damage during electrode implantation and limited long-term stability.
This development represents a significant departure from conventional BCI approaches, including those of Neuralink and other competitors. While Neuralink has focused on developing advanced electrode arrays, Science's biohybrid approach could potentially offer a more natural and sustainable solution for brain-computer integration.
The implications of this breakthrough extend beyond just technological advancement. It opens new possibilities for treating neurological conditions, restoring lost brain functions, and creating more natural brain-computer interfaces. If the technical challenges can be overcome, this technology could form the foundation for the next generation of neuroprosthetics and therapeutic devices.
This innovation underscores the rapid advancement in neurotechnology, with companies like Science and Neuralink pushing the boundaries of what's possible in brain-computer interfacing. The competition between these companies, led by visionary entrepreneurs like Max Hodak, continues to drive innovation in this crucial field, potentially bringing us closer to a future where seamless brain-computer integration becomes a reality.
Science's approach represents not just an incremental improvement but a paradigm shift in how we think about brain-computer interfaces, potentially offering a more biocompatible and sustainable solution for long-term neural interfacing.
Science Corporation
Biohybrid neural interfaces: an old idea enabling a completely new space of possibilities | Science Corporation
Science Corporation is a clinical-stage medical technology company.
Perceptron: AI That Lives in the Physical World
Imagine a modern factory floor. Dozens of cameras, microphones, and sensors monitor processes. Robots assemble products. But each system operates in isolation: one monitors quality through video, another listens to equipment sounds, a third tracks sensor data, and a fourth controls robots.
Perceptron aims to unite this into a single "brain" that:
• Simultaneously analyzes all data streams. If something goes wrong, it will notice it through video, sound, and sensor readings
• Works in real-time — instantly responding to any deviations
• Learns from experience and transfers knowledge between different tasks
• Speaks human language: can explain to operators what went wrong and why
In essence, instead of a set of disparate systems, we get a unified "smart assistant" that simultaneously sees, hears, and feels everything happening around it.
It's like moving from separate programs for text, spreadsheets, and presentations to a unified office suite. Except here, we're talking about managing the physical world.
The company has already attracted investments from major funds and experts. It will be interesting to see if they can create such a universal "physical" AI.
www.perceptron.inc
A layer of intelligence for the physical world.
We are a research company building the future of Physical AGI.
Ai2 introduced Tülu 3 a set of SOTA instruct models with fully open data, eval code, and training algorithms.
8B model
70B model
Try it out
allenai.org
Tulu | Ai2
Ai2, a non-profit research institute founded by Paul Allen, is committed to breakthrough AI to solve the world’s biggest problems.
Hugging Face ecosystem (Nov 2024).pdf
11.4 MB
A lot of gems and insights in this fresh unreleased slide-deck by HuggingFace.
Production uses for open and closed source AI
Anthropic has just announced the Model Context Protocol (MCP), a universal bridge between AI systems and various data sources.
This open-source standard promises to revolutionize how AI assistants interact with enterprise data systems, development environments, and content repositories.
The Model Context Protocol represents a universal bridge between AI systems and various data sources. Instead of requiring custom implementations for each new data connection, MCP provides a standardized way for AI assistants to access and utilize organizational data securely.
Key Components
The initial release includes three major elements:
1. The core Model Context Protocol specification and SDKs
2. Local MCP server support integrated into Claude Desktop applications
3. An open-source repository featuring pre-built MCP servers
The protocol launches with pre-built MCP servers for popular enterprise systems including:
- Google Drive
- Slack
- GitHub
- Git
- Postgres
- Puppeteer
Major players are already embracing the technology:
- Block and Apollo have integrated MCP into their systems
- Development tools companies including Zed, Replit, Codeium, and Sourcegraph are actively working on MCP implementation
- Claude 3.5 Sonnet has demonstrated exceptional capabilities in building MCP server implementations.
Anthropic
Introducing the Model Context Protocol
The Model Context Protocol (MCP) is an open standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant…
❗️Neuralink announced the approval and launch of a new feasibility trial to extend BCI control using the N1 Implant to an investigational assistive robotic arm.
This builds on their ongoing PRIME Study, which helps people with quadriplegia control digital devices through thought alone.
Now, they're taking a crucial step forward - from digital to physical control. Participants who are already enrolled in PRIME will have the opportunity to join CONVOY and potentially control an assistive robotic arm using just their thoughts.
This feasibility trial marks a pivotal moment in brain-computer interface technology. While PRIME focuses on restoring "digital freedom" - letting users control computers and smartphones mentally - CONVOY aims to restore physical independence through robotic assistance.
Neuralink
Clinical Trials | Neuralink
Connect with us and learn more about Neuralink clinical trials.
Andrew Ng announced new open-source Python package: aisuite
This makes it easy for developers to use large language models from multiple providers.
Open-source code with instructions.
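aisuite's unified interface keys on "provider:model" identifier strings (e.g. "openai:gpt-4o"). As a rough sketch of that dispatch idea — with hypothetical stand-in provider classes, not aisuite's actual internals:

```python
# Illustrative sketch of a provider-prefixed model router, in the spirit of
# aisuite's "provider:model" convention. EchoProvider is a made-up stand-in.

class EchoProvider:
    """Stand-in for a real backend (OpenAI, Anthropic, ...)."""
    def __init__(self, name):
        self.name = name

    def chat(self, model, messages):
        # A real provider would call its API; we just echo the last message.
        return f"[{self.name}/{model}] {messages[-1]['content']}"

class UnifiedClient:
    def __init__(self):
        # One registry entry per backend; real clients would hold API keys.
        self.providers = {p: EchoProvider(p) for p in ("openai", "anthropic")}

    def create(self, model, messages):
        provider_name, _, model_name = model.partition(":")
        if not model_name:
            raise ValueError("expected '<provider>:<model>'")
        return self.providers[provider_name].chat(model_name, messages)

client = UnifiedClient()
reply = client.create("openai:gpt-4o",
                      [{"role": "user", "content": "hello"}])
print(reply)  # [openai/gpt-4o] hello
```

The point of the convention is that swapping providers means changing one string, not rewriting call sites.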
GitHub
GitHub - andrewyng/aisuite: Simple, unified interface to multiple Generative AI providers
Simple, unified interface to multiple Generative AI providers - GitHub - andrewyng/aisuite: Simple, unified interface to multiple Generative AI providers
You can connect Claude to an internet search engine using MCP
Here's how you can do it too in under 5 minutes:
1. You will need to download the latest version of the Claude desktop app here. To use Brave Web Search specifically, you will need to sign up for a free API key here.
2. Open up your Claude Desktop configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
3. Then, just add this to that file and save it:
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "ADD_YOUR_API_KEY_HERE"
      }
    }
  }
}
4. Restart your Claude desktop app for the changes to load.
You can check that the server has been configured properly if you navigate to "Claude" > "Settings" in the top bar and check the Developer tab in this window that pops up.
5. After that, just ask Claude to make a web search for you!
The server tools automatically get loaded into the system prompt so that Claude knows it has access to them.
If you want to improve on this server, explore more servers, or make your own integrations, check out the GitHub repository.
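Step 3 above can also be scripted. This sketch merges the brave-search entry into an existing config file without clobbering other servers you may already have (macOS path from step 2; adjust for Windows):

```python
import json
import os
import pathlib

# Config path from step 2 (macOS shown);
# Windows would be %APPDATA%\Claude\claude_desktop_config.json.
config_path = (pathlib.Path.home()
               / "Library/Application Support/Claude/claude_desktop_config.json")

entry = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-brave-search"],
    # Pull the key from the environment if set, else leave the placeholder.
    "env": {"BRAVE_API_KEY": os.environ.get("BRAVE_API_KEY",
                                            "ADD_YOUR_API_KEY_HERE")},
}

# Read the existing config (if any), add our server, and write it back.
config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["brave-search"] = entry
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print("wrote", config_path)
```

Restart the desktop app afterwards, exactly as in step 4.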
OpenAI’s long-awaited Sora video model may have leaked on Hugging Face.
huggingface.co
PR Puppet Sora - a Hugging Face Space by PR-Puppets
Enter a prompt to generate a video. You can specify the resolution and duration. The video will be saved and added to your generation history.
Google introduced Health AI Developer Foundations, a game-changer for healthcare AI innovation
It provides open-weight models and resources to help developers build healthcare AI tools faster and more efficiently.
The initial focus is on imaging applications in radiology, dermatology, and pathology, with models pre-trained on large, diverse datasets for powerful performance.
Whether it's chest X-rays, skin conditions, or pathology slides, these models offer a robust starting point for creating AI solutions with minimal additional data and compute.
This suite builds on community feedback and is available for download via platforms like Vertex AI Model Garden and Hugging Face.
This initiative could democratize AI development, enabling creators to innovate for real-world healthcare challenges—improving diagnostics, broadening access, and easing clinician workloads.
research.google
Helping everyone build AI for healthcare applications with open foundation models