Meta AI researchers propose a new learning paradigm for language agents called “early experience”, a reward-free method where agents learn by interacting with environments using their own suboptimal actions.
Instead of relying solely on human demonstrations or reinforcement signals, the agent learns from future outcomes it observes after taking alternative actions.
Two key strategies power this method:
1. Implicit World Modeling – grounding behavior in environment dynamics
2. Self-Reflection – learning from mistakes by generating natural language rationales
Tested across 8 diverse environments, the approach outperforms imitation learning alone and significantly boosts generalization, even improving downstream reinforcement learning.
It positions early experience as a scalable bridge between static supervised fine-tuning and full-on autonomous agents.
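To make the two strategies concrete, here is a minimal sketch of how such reward-free training data could be generated, assuming hypothetical env and policy_llm interfaces (not the paper's actual code):

```python
# Minimal sketch of "early experience" data generation. The agent branches off
# expert states with its own (possibly suboptimal) actions and turns the observed
# outcomes into reward-free supervision. env/policy_llm are hypothetical stand-ins.

def collect_early_experience(env, policy_llm, expert_states, k_alternatives=3):
    world_model_data, reflection_data = [], []
    for state in expert_states:
        for _ in range(k_alternatives):
            alt_action = policy_llm.propose_action(state)   # the agent's own action
            next_state = env.step(state, alt_action)        # observed outcome, no reward needed
            # 1) Implicit world modeling: predict the next state from (state, action)
            world_model_data.append({
                "prompt": f"State: {state}\nAction: {alt_action}\nPredict the next state:",
                "target": str(next_state),
            })
            # 2) Self-reflection: a natural-language rationale grounded in the outcome
            rationale = policy_llm.generate_rationale(state, alt_action, next_state)
            reflection_data.append({
                "prompt": f"State: {state}\nChosen action: {alt_action}\nOutcome: {next_state}\nReflect:",
                "target": rationale,
            })
    return world_model_data, reflection_data  # used as extra SFT data alongside demonstrations
```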
OpenAI, Anthropic, and Google DeepMind jointly released a paper showing that current LLM safety defenses are extremely fragile.
The paper systematically evaluates the robustness of current LLM safety defenses and finds that almost all existing methods can be bypassed by adaptive attacks.
It looks like all the big LLM names now emphasize that reliable robustness evaluation of LLMs must incorporate adaptive attacks.
If a defense fails under a single adaptive loop, it cannot be considered robust.
1. The study tests 12 types of LLM defense mechanisms, covering jailbreak prevention and prompt-injection defenses. It shows that most current evaluation protocols rely on static or fixed attack samples, which fail to simulate a realistic adaptive attacker.
Once the attacker can adjust strategy, success rates of bypassing reach more than 90% for most models.
2. The authors propose a General Adaptive Attack Framework. It assumes attackers can systematically modify attack prompts based on defense feedback, using optimization methods such as gradient descent, reinforcement learning, random search, and human-in-the-loop exploration.
This framework successfully bypassed all 12 recently published defense methods (a minimal sketch of such an adaptive loop follows at the end of this list).
3. Prompt-based defenses can resist fixed attacks but are ineffective against adaptive ones: Spotlighting / Prompt Sandwiching reach an ASR (attack success rate) above 95%, and RPO reaches an ASR of roughly 96–98%.
This shows such methods lack generalization and are easily defeated once new automated or human attack variants appear.
4. Training-based defenses fine-tune models with adversarial data.
However, adaptive attacks raised success rates from below 5% to 96–100%.
This confirms that static adversarial training cannot cover unseen adaptive attacks; dynamic retraining is required.
5. Filter-model defenses place an external classifier before or after the main model.
These are typically fine-tuned BERT detectors.
6. Secret-knowledge defenses rely on hidden triggers or unknown “canary” information to detect injection.
All four categories (prompt optimization, adversarial training, filtering, and secret-based detection) exhibit severe weaknesses.
Static or single-shot defenses cannot resist adaptive attack loops. Only dynamically optimized and continuously co-trained systems may achieve meaningful robustness.
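To illustrate what a single adaptive loop means in practice, here is a rough sketch with placeholder defended_model, mutate, and judge functions; random search stands in for the gradient, RL, and human-in-the-loop optimizers the paper uses:

```python
# Rough sketch of an adaptive attack loop: the attacker keeps mutating its prompt
# based on how the defended model responds. Everything here is illustrative,
# not the paper's implementation.

def harmfulness_score(response: str) -> float:
    # Placeholder judge: real evaluations use an LLM judge or rule-based grader.
    return 0.0 if "I can't help with that" in response else 1.0

def adaptive_attack(defended_model, seed_prompt, mutate, n_iters=500):
    best_prompt, best_score = seed_prompt, 0.0
    for _ in range(n_iters):
        candidate = mutate(best_prompt)           # rewrite based on previous feedback
        response = defended_model(candidate)      # query the model plus its defense stack
        score = harmfulness_score(response)
        if score > best_score:                    # keep whichever variant got furthest
            best_prompt, best_score = candidate, score
        if best_score >= 1.0:                     # defense bypassed
            break
    return best_prompt, best_score
```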
Ubyx_Corporate_Treasury_in_a_World_of_Wallets_1760531818.pdf
Wallets are the new cash rail for enterprise. And they’re changing the way liquidity is managed.
A new report from Ubyx and Finmo highlights that while a bank account anchors value to a single institution, a wallet can hold tokenized deposits, regulated #stablecoins, tokenized #MMFs, and other instruments across multiple blockchains.
This enables treasurers to consolidate hundreds of accounts into programmable, multi-asset wallets with 24/7 settlement, automated liquidity optimization, and transparent auditability.
Wallet-based architectures promise radical simplification, continuous #yield optimization, and reduced counterparty dependence.
Legally and from an accounting perspective, tokenized deposits and regulated stablecoins are now being recognized as cash equivalents under IAS 7, removing a key barrier to adoption.
Technically, wallets bring programmability and instant settlement, but also new operational risks (key management, custody resilience).
They also bring regulatory challenges, requiring phased pilots, hybrid coexistence with traditional rails, and rigorous counterparty due diligence.
What we see is a competitive landscape that’s slowly coalescing:
1. Banks that tokenize deposits can defend client relationships
2. Tech providers are building ERP-integrated wallet rails
3. Digital-native firms supply programmability and scale.
US approves Erebor bank, the crypto bank backed by Palmer Luckey and Peter Thiel
FT
US approves new bank backed by billionaires with ties to Donald Trump
Target market for Erebor will be businesses that are part of America’s ‘innovation economy’
Google is rolling out Veo 3.1, an updated video generation model, alongside improved creative controls for filmmakers, storytellers, and developers - many of them with audio.
It brings a deeper understanding of the narrative you want to tell, capturing textures that look and feel even more real, and improved image-to-video capabilities.
Give multiple reference images with different people and objects, and watch how Veo integrates these into a fully-formed scene - complete with sound.
Create longer clips, even lasting for a minute or more, that continue the action from your original shot.
Each video generated is based on the final second of the previous clip to help continue the story, and keeps the background and people consistent.
Give the first and last frames and Veo will bring the entire scene to life, helping you create a seamless video with epic transitions.
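The extension mechanism is easy to picture as a loop: each new segment is conditioned on the end of the previous one. A rough sketch with a hypothetical client object (not Google's actual Veo/Flow SDK):

```python
# Hypothetical sketch of clip chaining: condition each new clip on the final frame
# of the previous one so people and background stay consistent. `client` and its
# methods are stand-ins, not Google's actual API.

def extend_scene(client, first_prompt: str, continuation_prompts: list[str]):
    clips = [client.generate_video(prompt=first_prompt)]        # initial short shot
    for prompt in continuation_prompts:
        last_frame = clips[-1].final_frame()                    # anchor on the previous ending
        clips.append(client.generate_video(prompt=prompt, start_image=last_frame))
    return clips                                                 # stitch downstream into one long video
```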
Google
Introducing Veo 3.1 and advanced capabilities in Flow
Today, we’re introducing new and enhanced creative capabilities to edit your clips.
Anthropic introduced a new model, Haiku 4.5, a workhorse that makes the coding experience in Claude Code feel really fast.
While Sonnet 4.5 remains the default, Haiku 4.5 now powers the Explore subagent, which can rapidly gather context on your codebase to build apps even faster.
You can select Haiku 4.5 to be your default model in /model. When selected, you’ll automatically use Sonnet 4.5 in Plan mode and Haiku 4.5 for execution for smarter plans and faster results.
To enter Plan mode, hit Shift + Tab + Tab.
Haiku 4.5 is $1 per million input tokens and $5 per million output tokens, which means it is priced 3x lower than Sonnet 4.5 and slightly higher than Haiku 3.5.
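For a quick sense of the pricing quoted above, a small per-request cost comparison; the $3 / $15 figure for Sonnet 4.5 is inferred from the "3x lower" claim, so treat it as an assumption:

```python
# Per-request cost in dollars, using the Haiku 4.5 prices quoted above and an
# assumed $3 / $15 per million tokens for Sonnet 4.5.
PRICES = {                      # $ per 1M tokens: (input, output)
    "haiku-4.5":  (1.0, 5.0),
    "sonnet-4.5": (3.0, 15.0),  # assumption, inferred from the "3x lower" comparison
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# e.g. a 20k-token prompt with a 2k-token reply:
print(request_cost("haiku-4.5", 20_000, 2_000))   # 0.03
print(request_cost("sonnet-4.5", 20_000, 2_000))  # 0.09
```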
Anthropic
Introducing Claude Haiku 4.5
Claude Haiku 4.5, our latest small model, is available today to all users.
Google introduced Coral NPU: an open Edge AI platform
Optimized to run small transformer models and LLMs on wearables, with support for TensorFlow, JAX, and PyTorch via the IREE and TFLM compilers.
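As an illustration of the TFLM side of that toolchain, here is the standard TensorFlow-to-TFLite export for a tiny model; this is the generic TFLite flow, not Coral-NPU-specific tooling:

```python
# Export a tiny Keras model to a quantized .tflite flatbuffer, the format the
# TensorFlow Lite Micro (TFLM) runtime consumes on microcontroller-class hardware.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(8),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # shrink the model via quantization
tflite_model = converter.convert()

with open("tiny_model.tflite", "wb") as f:
    f.write(tflite_model)
```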
research.google
Coral NPU: A full-stack platform for Edge AI
Chinese company Qiyunfang (a SiCarrier subsidiary) unveiled 2 fully domestic EDA software platforms: one for schematic design and one for PCB design.
HW & Empyrean already had advanced IC EDAs last yr, so this provides more domestic option for PCB design.
Product performance is 30% higher than industry benchmark & shortened h/w dev cycle by 40%.
HW & Empyrean already had advanced IC EDAs last yr, so this provides more domestic option for PCB design.
Product performance is 30% higher than industry benchmark & shortened h/w dev cycle by 40%.
ITHome
Qiyunfang, a SiCarrier subsidiary, says its new domestic EDA products are already used by more than 20,000 engineers, with positive downstream market feedback - ITHome
According to STAR Market Daily (科创板日报), Yuan Yi, president of Qiyunfang's electronic engineering EDA BU, said in an interview: "The products released today are already on the market and are used by more than 20,000 engineers, and downstream market feedback has been positive so far. Product performance is 30% higher than the industry benchmark, and the hardware development cycle can be shortened by 40%."
Anthropic launched Claude Agent Skills, a filesystem-based approach to extending Claude's capabilities.
Progressive disclosure means agents load only relevant context. Bundle instructions, scripts, and resources in a folder. Claude discovers and executes what it needs.
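A minimal sketch of the progressive-disclosure idea, assuming a simple folder layout with one SKILL.md per skill; the layout and the relevance check are illustrative, not Anthropic's exact format:

```python
# Index skill folders cheaply, then load a skill's full instructions into context
# only when it looks relevant to the task. Layout and matching are simplified.
from pathlib import Path

def index_skills(skills_dir: str) -> dict[str, Path]:
    """Skill name -> folder, without reading full instructions up front."""
    return {m.parent.name: m.parent for m in Path(skills_dir).glob("*/SKILL.md")}

def load_skill_if_relevant(index: dict[str, Path], task: str) -> str | None:
    for name, folder in index.items():
        if name.replace("-", " ") in task.lower():        # naive relevance check
            return (folder / "SKILL.md").read_text()       # full instructions enter context only now
    return None                                            # nothing relevant -> context stays small
```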
Claude
Introducing Agent Skills | Claude
Claude can now use Skills to improve how it performs specific tasks. Skills are folders that include instructions, scripts, and resources that Claude can load when needed. Claude will only access a skill when it's relevant to the task at hand.
2510.13786v1.pdf
Meta dropped this paper that spills the secret sauce of RL on LLMs.
It lays out an RL recipe, uses 400,000 GPU hours, and posits a scaling law for performance with more compute in RL, like the classic pretraining scaling laws.
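To picture what such a scaling law looks like, here is a sketch of fitting a saturating compute-performance curve; the sigmoidal form and the data points are assumptions for illustration, not the paper's exact parameterization or results:

```python
# Fit a saturating curve of eval performance vs RL compute (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def rl_scaling_curve(compute, r_max, c_mid, b):
    """Performance approaches r_max as compute grows; c_mid is the half-saturation point."""
    return r_max / (1.0 + (c_mid / compute) ** b)

# Hypothetical (GPU-hours, pass rate) observations:
compute = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 4e5])
reward  = np.array([0.22, 0.31, 0.42, 0.51, 0.58, 0.63])

params, _ = curve_fit(rl_scaling_curve, compute, reward,
                      p0=[0.7, 1e4, 0.5], bounds=(0, [1.0, 1e7, 5.0]))
r_max, c_mid, b = params
print(f"asymptote ~ {r_max:.2f}, half-saturation ~ {c_mid:.0f} GPU-hours")
```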
Stablecoins & lending protocols will be the foundation of global credit.
Visa's new whitepaper on stablecoin lending highlights several Morpho-powered use cases that have put billions of stablecoins to work.
Alibaba introduced a new GPU pooling system Aegaeon that makes AI model serving much more efficient.
Claims an 82% cut in Nvidia GPU use for serving LLMs by pooling compute across models.
In a 3+ month beta on Alibaba Cloud’s marketplace, H20 GPUs dropped from 1,192 to 213 while serving dozens of models up to 72B parameters.
Typical cloud model hubs skew toward a few hot models, so many GPUs sit idle serving cold models, and Alibaba measured 17.7% of GPUs handling only 1.35% of requests.
Aegaeon addresses this with token-level auto-scaling, which lets a GPU switch between models during generation instead of waiting for a full response to finish.
By slicing work at token boundaries and scheduling small bursts quickly, the system keeps memory warm and compute busy with little waste.
With Aegaeon, a single GPU supports up to 7 models versus 2 to 3 in other pooling systems, and switching latency drops by 97%.
Cold models load weights just in time when a request lands, then borrow a brief slice of compute without locking an entire GPU.
Hot models keep priority, so heavy traffic stays smooth while sporadic models borrow capacity in short bursts.
The wins apply to inference, not training, because generation happens token by token and fits fine-grained scheduling.
The timing suits China’s chip limits, where H20 targets inference workloads and domestic GPUs are ramping, so fewer chips can cover more traffic.
If Aegaeon generalizes, operators can lower cost per token, raise fleet utilization, and delay new GPU purchases without hurting latency for popular models.
Tradeoffs still exist, like uneven memory needs across models, long sequences that reduce preemption points, and scheduler overhead during traffic spikes.
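A toy sketch of the token-level auto-scaling idea: one GPU worker interleaves several models at token boundaries instead of holding the GPU for a full response. Priorities, model handles, and the scheduling policy are illustrative, not Aegaeon's code:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: int                                   # hot models get smaller numbers
    model_id: str = field(compare=False)
    prompt: list = field(compare=False, default_factory=list)
    generated: list = field(compare=False, default_factory=list)
    remaining: int = field(compare=False, default=64)

def serve_on_one_gpu(requests: list[Request], models: dict, burst: int = 8):
    """Pop the highest-priority request, generate a short burst of tokens, requeue."""
    heapq.heapify(requests)
    while requests:
        req = heapq.heappop(requests)
        model = models[req.model_id]                # cold models load weights just in time here
        for _ in range(min(burst, req.remaining)):
            req.generated.append(model.next_token(req.prompt + req.generated))
        req.remaining -= min(burst, req.remaining)
        if req.remaining > 0:
            heapq.heappush(requests, req)           # preempt at a token boundary, not at end of response
```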
South China Morning Post
Alibaba Cloud claims to slash Nvidia GPU use by 82% with new pooling system
The new Aegaeon system can serve dozens of large language models using a fraction of the GPUs previously required, potentially reshaping AI workloads.
DeepSeek released an OCR model
Their motivation is really interesting: they want to use visual modality as an efficient compression medium for textual information, and use this to solve long-context challenges in LLMs.
Of course, they are using it to get more training data for their models as well.
"DeepSeek-OCR can generate training data for LLMs/VLMs at a scale of 200k+ pages per day (a single A100-40G)."
HuggingFace.
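A toy calculation of the compression framing and the quoted data-generation rate; the per-page token counts are assumptions for illustration, not DeepSeek-OCR's reported figures:

```python
# How much "optical compression" buys you if a page of N text tokens is represented
# by M vision tokens after the encoder (numbers below are illustrative assumptions).
def optical_compression_ratio(text_tokens_per_page: int, vision_tokens_per_page: int) -> float:
    return text_tokens_per_page / vision_tokens_per_page

print(optical_compression_ratio(1500, 150))   # 10.0x fewer tokens in the visual form

# Throughput of the quoted data-generation setup:
pages_per_day = 200_000
print(pages_per_day / 86_400)                 # ~2.3 pages per second on one A100-40G
```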
GitHub
DeepSeek-OCR/DeepSeek_OCR_paper.pdf at main · deepseek-ai/DeepSeek-OCR
Contexts Optical Compression. Contribute to deepseek-ai/DeepSeek-OCR development by creating an account on GitHub.
Anthropic launched Claude for Life Sciences to support the entire life sciences process, from early discovery through translation and commercialization. Claude Sonnet 4.5 shows improved performance on protocol understanding and bioinformatics tasks, and there are new connectors to Benchling, BioRender, PubMed, Scholar Gateway, Synapse, and 10x Genomics.
Anthropic is also developing life sciences-specific Agent Skills, beginning with single-cell-rna-qc, which performs quality control and filtering on single-cell RNA sequencing data using scverse best practices.
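For a sense of what such a QC skill does, here is the standard scanpy (scverse) quality-control flow; the thresholds and input file are common defaults and assumptions, not Anthropic's skill code:

```python
# Basic single-cell RNA-seq QC with scanpy: compute QC metrics, then filter
# low-quality cells and rarely seen genes. Thresholds are typical defaults.
import scanpy as sc

adata = sc.read_h5ad("sample.h5ad")                        # hypothetical input file
adata.var["mt"] = adata.var_names.str.startswith("MT-")    # flag mitochondrial genes
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

sc.pp.filter_cells(adata, min_genes=200)                   # drop near-empty droplets
sc.pp.filter_genes(adata, min_cells=3)                     # drop genes seen in <3 cells
adata = adata[adata.obs["pct_counts_mt"] < 10].copy()      # drop likely stressed/dying cells
adata.write("sample_qc.h5ad")
```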
Anthropic
Claude for Life Sciences
Discover how Claude accelerates life sciences research with new scientific connectors, skills, and improved performance for drug discovery and clinical work.
Princeton introduced Skill-Targeted Adaptive Training (STAT)
STAT uses a supervisor model and a skill catalog to construct a Missing-Skill-Profile for each student model, and then modifies training to squeeze out >=7% more performance.
The intervention can be as simple as reweighting existing training sets.
You can also think of this as a more effective distillation method.
STAT shows that leveraging skills during training can greatly help too; e.g., Qwen can continue to learn new tricks from Hendrycks MATH, which it had been over-trained on.
GitHub.
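A minimal sketch of the reweighting flavor of this: score each training example against the student's missing-skill profile and upweight matching examples when sampling batches. The skill tags and weights are illustrative, not the paper's recipe:

```python
import random
from collections import Counter

def missing_skill_profile(diagnosed_errors: list[dict]) -> Counter:
    """Count which catalog skills the supervisor model flags in the student's failures."""
    return Counter(s for err in diagnosed_errors for s in err["missing_skills"])

def example_weights(dataset: list[dict], profile: Counter, boost: float = 3.0) -> list[float]:
    weights = []
    for ex in dataset:
        hits = sum(profile[s] for s in ex["skills"])   # overlap with missing skills
        weights.append(1.0 + boost * min(hits, 1))     # upweight examples that exercise them
    return weights

def sample_batch(dataset, weights, k=32):
    return random.choices(dataset, weights=weights, k=k)   # skill-targeted minibatch
```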
Anthropic launched a sandbox within Claude Code that allows you to define exactly which directories and network hosts your agent can access.
Anthropic also open-sourced this sandbox tool so you can use it to sandbox other parts of your agent workflows.
In particular, it sandboxes the bash tool with file and network isolation to ensure that Claude only accesses files and networks you approve of.
When enabled, this should significantly improve Claude's resistance to prompt injection, both in the CLI and SDK.
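A toy sketch of the allowlist idea behind such a sandbox: a bash-tool wrapper that refuses working directories and hosts outside what the user approved. This illustrates the concept only, not Anthropic's implementation:

```python
import subprocess
from pathlib import Path
from urllib.parse import urlparse

ALLOWED_DIRS = [Path("/workspace/myproject").resolve()]     # directories the agent may touch
ALLOWED_HOSTS = {"api.github.com", "pypi.org"}               # hosts the agent may reach

def path_allowed(p: str) -> bool:
    resolved = Path(p).resolve()
    return any(resolved.is_relative_to(d) for d in ALLOWED_DIRS)

def host_allowed(url: str) -> bool:
    return urlparse(url).hostname in ALLOWED_HOSTS

def run_bash(cmd: list[str], cwd: str) -> str:
    if not path_allowed(cwd):
        raise PermissionError(f"{cwd} is outside the approved directories")
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True, check=True).stdout
```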
Anthropic
Making Claude Code more secure and autonomous with sandboxing
Learn how Claude Code's new sandboxing feature protects developers with filesystem and network isolation, reducing permission prompts and increasing user safety.
Nexar introduced a new AI model designed to predict and prevent car crashes — BADAS 1.0.
It beat SOTA models by learning from 10B+ real miles and 60M+ real events, not simulations
Based on Meta FAIR's V-JEPA 2.
Nexar-Ai
Nexar l The Edge-to-Edge Operating System for Autonomous AI
We operate the world’s largest open video driving dataset, enabling us to source, structure and build the models for next-generation AV training & real-time road intelligence.
OpenAI just dropped a browser: ChatGPT Atlas.
Agent mode in Atlas completes tasks faster as you browse the web.
Available in preview for Plus, Pro, and Business users.
Available today on macOS. ChatGPT can see the page you’re on and answer your questions right there via the Ask ChatGPT sidebar.
ChatGPT can offer suggestions wherever you’re typing on the web. Ask ChatGPT to open, close, reopen, bookmark or revisit any of your tabs.
ChatGPT
ChatGPT Atlas
ChatGPT Atlas, the browser with ChatGPT built in. Get instant answers, summaries, and smart web help—right from any page. With privacy settings you can control. Available now for MacOS.
Meta showed how sparsely finetuning memory layers enables targeted updates for continual learning, with minimal interference with existing knowledge.
While full finetuning and LoRA see drastic drops in held-out task performance, memory layers learn the same amount with far less forgetting (-11%).
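A minimal sketch of the setup in PyTorch: freeze the backbone and train only memory-layer parameters. The module-name filter is an assumption, and the paper's finer slot-level sparsity (updating only retrieved memory slots) is omitted here:

```python
import torch

def freeze_except_memory(model: torch.nn.Module, memory_tag: str = "memory"):
    for name, param in model.named_parameters():
        param.requires_grad = memory_tag in name          # train memory layers only
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=1e-4)          # optimizer sees only memory params
```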
arXiv.org
Continual Learning via Sparse Memory Finetuning
Modern language models are powerful, but typically static after deployment. A major obstacle to building models that continually learn over time is catastrophic forgetting, where updating on new...
Airbnb CEO Brian Chesky: “We’re relying a lot on Alibaba’s Qwen model.
It’s very good. It’s also fast and cheap... We use OpenAI’s latest models, but we typically don’t use them that much in production because there are faster and cheaper models.”
Bloomberg.com
Chesky Says OpenAI Tools Not Ready for ChatGPT Tie-Up With Airbnb App
Airbnb Inc. Chief Executive Officer Brian Chesky said he didn’t integrate his company’s online travel app with OpenAI’s ChatGPT because the startup’s connective tools aren’t “quite ready” yet.
Morgan_Stanley_BCI_Primer_Next_Big_MedTech_Opportunity_1728489687.pdf
Morgan_Stanley_BCI_report_blockchainrf.pdf
A new Morgan Stanley report on BCI reveals a future that's closer than we think; a report from 2024 is attached here as well.
Here are the key takeaways:
1. The core thesis isn't just medical. It's existential. As AI accelerates exponentially, BCI is seen as humanity's "chance to keep up." The ultimate goal is a seamless symbiosis, merging human consciousness with machine intelligence.
2. The path to mass adoption runs through medicine. With a US healthcare TAM of ~$400 Billion, BCIs will first restore sight to the blind, movement to the paralyzed, and speech to the voiceless. This addresses a dire need, creates a willing patient base, and accelerates regulatory approval.
3. Neuralink isn't just a player; it's the pacesetter. With 12 human patients already using its "Telepathy" device to control computers with their minds, the company is demonstrating a viable product.
Roadmap: From "Telepathy" (mind-control of devices) to "Blindsight" (restoring vision) by 2030.
Vertical Integration: Their secret sauce is controlling the entire stack—the chip, the surgical robot, and the software.
Funding & Hype: Recently raised $650M at a $9BN valuation, backed by top-tier VCs.
4. Key competitors are taking different, less invasive approaches:
Synchron: Uses blood vessels to place its Stentrode implant (no open-brain surgery).
Precision Neuroscience: Places a thin film on the brain's surface.
Merge Labs (by Sam Altman): Exploring non-invasive sonogenetics (using ultrasound).
5. The Inevitable Challenges & Risks
The "Neuro-Elite": Will this create a new class divide between enhanced and non-enhanced humans?
Data Security: How do we protect the most personal data imaginable—our neural signals from hacking?
Ethical Quagmire: The transition from therapy to human enhancement will be the defining ethical debate of the coming decades.