OpenAI will unveil its first AI earbuds, dubbed “Sweetpea”, in September this year, and shipments are expected to reach 40-50 million units in 2027.
Taiwan’s Foxconn will handle assembly for the buds.
經濟日報 (Economic Daily News)
OpenAI hardware device to be assembled by Foxconn; first-year shipments could reach 50 million units
OpenAI's first hardware product is on the way: Chief Global Affairs Officer Chris Lehane revealed on the 19th that the company plans to unveil an AI device in the second half of this year. According to reports, OpenAI is targeting a September launch for AI audio earbuds, with first-year shipments expected to reach 40 to 50 million units, assembled by Foxconn.
Anthropic published a new constitution for Claude.
The new constitution discusses Claude in terms previously reserved for humans—incorporating concepts like virtue, psychological security, and ethical maturity.
Anthropic
Claude's Constitution
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Amazon is rolling out Health AI for One Medical members: an AI assistant, built on Amazon Bedrock, that draws on your medical records, labs, and medications.
It can answer health questions, manage prescriptions, and book appointments, pushing Amazon deeper into the healthcare space.
China has launched its first open-source, vertical LLM dedicated to the general agricultural sector, marking a significant breakthrough in foundational AI model research and its applications for agriculture in the country.
The model, Sinong, which is named after the ancient Chinese officials overseeing agriculture and finance, integrates content from nearly 9,000 books, over 240,000 academic papers, approximately 20,000 policy documents and standards, and extensive web-based knowledge.
Sinong is now fully open-sourced on platforms like ModelScope and GitHub.
GitHub
GitHub - njauzzx/Sinong
This paper from Google DeepMind, Meta, Amazon, and Yale University quietly explains why most AI agents feel smart in demos and dumb in real work.
The authors formalize agentic reasoning as a loop, not a prompt:
observe → plan → act → reflect → update state → repeat.
Instead of one long chain-of-thought, the model maintains an internal task state. It decides what to think about next, not just how to finish the sentence.
This is why classic tricks like longer CoT plateau. You get more words, not better decisions.
One of the most important insights: reasoning quality collapses when control and reasoning are mixed. When the same prompt tries to plan, execute, critique, and finalize, errors compound silently. Agentic setups separate these roles.
Planning is explicit. Execution is scoped. Reflection is delayed and structured.
The paper shows that even strong frontier models improve dramatically when given:
• explicit intermediate goals
• checkpoints for self-evaluation
• the ability to abandon bad paths
• memory of past attempts
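The loop and the role separation described above can be sketched in a few lines. The `llm` function here is a stub standing in for real model calls, and all names are illustrative rather than taken from the paper:

```python
# Minimal sketch of an agentic reasoning loop with separated roles.
# `llm` is a placeholder for a real model call; it is stubbed here
# so the control flow is runnable on its own.

def llm(role, prompt):
    # Stand-in for a model call; a real system would query an LLM.
    if role == "plan":
        return ["draft solution", "check solution"]
    if role == "act":
        return f"result of: {prompt}"
    if role == "reflect":
        return "ok"  # or "revise" to trigger replanning
    raise ValueError(role)

def agent_loop(task, max_steps=10):
    state = {"task": task, "history": [], "done": False}
    goals = llm("plan", task)              # explicit intermediate goals
    for goal in goals:
        if len(state["history"]) >= max_steps:
            break
        result = llm("act", goal)          # scoped execution
        verdict = llm("reflect", result)   # delayed, structured reflection
        state["history"].append((goal, result, verdict))
        if verdict == "revise":
            goals = llm("plan", task)      # abandon a bad path, replan
    state["done"] = True
    return state

final = agent_loop("summarize the paper")
```

Planning, acting, and reflecting are separate calls with separate scopes, so errors surface at the reflection step instead of compounding silently inside one long generation.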
The takeaway is brutal for the industry: scaling tokens and parameters won’t give us reliable agents. Architecture will. Agentic reasoning isn’t a feature; it’s the missing operating system for LLMs.
Google DeepMind is looking to hire a Senior Economist to lead a small team investigating post-AGI economics.
job-boards.greenhouse.io
Chief AGI Economist
London, UK
How to get AI to make discoveries on open scientific problems?
Most methods just improve the prompt with more attempts. But the AI itself doesn't improve.
With test-time training, AI can continue to learn on the problem it’s trying to solve.
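As a toy illustration of the idea (not the TTT-Discover implementation), the loop below treats the "model" as a single parameter and "training" as moving toward its own best-scoring attempt on the specific problem; in the real method, the LLM's weights are updated on its own best generations:

```python
import random

# Toy test-time-training loop: generate attempts, score them against
# the one problem being solved, update the "model" toward the best
# attempt, repeat. A real system would fine-tune LLM weights instead.

def score(candidate, target=10.0):
    return -abs(candidate - target)  # problem-specific reward

def test_time_train(model=0.0, rounds=50, samples=8, lr=0.5, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        attempts = [model + rng.gauss(0, 1) for _ in range(samples)]
        best = max(attempts, key=score)   # keep the best attempt
        model += lr * (best - model)      # "train" on it
    return model

solution = test_time_train()
```

The key contrast with prompt-only methods is that the update step changes the generator itself, so later attempts start from a better place instead of resampling from the same model.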
Meet TTT-Discover, which enables open models to beat the prior art from both humans and AI based on closed frontier models:
1. Mathematics: new bounds on Erdős' minimum overlap problem and an autocorrelation inequality
2. Kernel Engineering: 2× faster than top humans in GPUMode
3. Algorithms: top scores on past AtCoder contests
4. Biology: SOTA for single-cell RNA-seq denoising.
All the code is public and the results are reproducible.
Anyone can now chase new SOTA results in science for a few hundred dollars.
Test-Time Training + open model > prompt engineering + closed frontier model (Gemini, GPT-5), for discovery problems in Mathematics, Kernel Engineering, Algorithms and Biology.
LLM in sandbox elicits general agentic intelligence
Giving LLMs access to a code sandbox unlocks emergent capabilities for non-code tasks.
Contributions:
1. LLMs spontaneously exploit sandbox capabilities (external access, file I/O, code execution) without training
2. RL with non-agentic data enables agentic generalization
3. Efficient deployment: up to 8× token savings
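The control flow can be pictured as a minimal tool loop: the model writes code for a non-code task, the sandbox runs it, and the output is fed back. A hedged sketch (the model is stubbed, and `exec` on a fresh namespace stands in for a real isolated VM):

```python
import io, contextlib

# Sketch of an LLM-with-sandbox loop. The "model" is stubbed with a
# fixed program; the point is the execute-and-feed-back control flow.
# A production sandbox would be an isolated VM or container, not exec().

def run_in_sandbox(code):
    buf = io.StringIO()
    namespace = {}  # fresh namespace, no shared globals
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()

def model_step(task, feedback=None):
    # Stand-in for a model call that writes code for a non-code task
    # (here: counting words) instead of answering directly.
    return f"print(len({task!r}.split()))"

task = "how many words are in this sentence"
code = model_step(task)
observation = run_in_sandbox(code)   # fed back to the model next turn
```

Even this trivial case shows the pattern: the model offloads the actual work (file I/O, computation, external access) to executed code rather than generating the answer token by token.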
HuggingFace
GitHub
arXiv.org
LLM-in-Sandbox Elicits General Agentic Intelligence
We introduce LLM-in-Sandbox, enabling LLMs to explore within a code sandbox (i.e., a virtual computer), to elicit general intelligence in non-code domains. We first demonstrate that strong LLMs,...
A new work from Yoshua Bengio’s lab: Recursive Self-Aggregation > Gemini DeepThink.
It really is the best test-time scaling algorithm. Just crushed the ARC-AGI 2 public evals with Gemini 3 Flash and RSA.
Recursive Self-Aggregation Research
Recursive Self-Aggregation (RSA) for LLM Reasoning
Hybrid test-time scaling for LLMs: recursive aggregation of chains-of-thought, plus aggregation-aware RL.
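At a high level, RSA keeps a population of candidate solutions and repeatedly asks the model to aggregate small subsets into improved candidates. The toy sketch below uses numbers and a stubbed aggregator; the real method aggregates chains-of-thought with an LLM, and the keep-the-incumbent step is an illustrative simplification:

```python
import random

# Toy sketch of recursive self-aggregation: repeatedly aggregate
# random subsets of a candidate population into new candidates.
# Candidates are numbers here; the real method aggregates LLM
# reasoning traces, with aggregation done by the model itself.

def quality(candidate, target=42):
    return -abs(candidate - target)

def aggregate(subset):
    # Stand-in for an LLM aggregation call: combine several
    # candidates into one at least as good as the best of them.
    return max(subset, key=quality)

def rsa(population, rounds=5, subset_size=3, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        incumbent = max(population, key=quality)
        population = [
            aggregate(rng.sample(population, subset_size))
            for _ in range(len(population) - 1)
        ] + [incumbent]                 # keep the incumbent best
    return max(population, key=quality)

best = rsa([1, 10, 25, 40, 43, 90])
```

The recursion is what distinguishes RSA from plain best-of-N sampling: aggregated candidates are themselves aggregated in later rounds, so good partial reasoning can propagate through the population.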
Nvidia will acquire assets and key talent from chipmaking startup Groq for $20B. Groq co-founder and CEO Jonathan Ross was the lead designer and architect for the first generation of Google’s TPU chips. He’ll join Nvidia along with president Sunny Madra and…
Nvidia is investing an additional $2 billion into CoreWeave to accelerate capacity buildout.
Nvidia will also make its Vera CPU available as a standalone offering, with CoreWeave to deploy it first. “Many” design wins to come.
Bloomberg.com
Nvidia Invests $2 Billion More in CoreWeave, Offers New Chip
Nvidia Corp., the dominant maker of artificial intelligence chips, invested an additional $2 billion in the cloud computing firm and key customer CoreWeave Inc., marking the latest example of the circular financing deals that have lifted valuations of AI…
Nvidia introduced three new open-source models in the NV Earth-2 family, enabling weather forecasting with tools for data assimilation, forecasting, nowcasting, and downscaling.
Developers can also build climate simulations using PhysicsNeMo and create inference pipelines with the open source Earth2Studio framework.
DeepSeek just released #DeepSeek-OCR 2
Now AI can "see" an image in the same logical order a human does!
Its new method, the DeepEncoder V2, teaches the AI to dynamically reorder the pieces of an image based on its meaning, instead of just scanning it rigidly from left to right. This mimics how humans follow the logical flow of a scene.
The result is a model that outperforms conventional vision-language models, especially on images with complex layouts like documents or diagrams, by enabling more intelligent, causally-informed visual understanding.
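Conceptually (an illustration only, not DeepSeek's implementation, where the ordering is learned end to end), the difference is between emitting patches in raster order and emitting them in a predicted reading order:

```python
# Illustrative only: reorder image "patches" by a predicted
# reading-order score instead of raster (left-to-right,
# top-to-bottom) order. The reading_rank field stands in for
# a learned prediction.

def raster_order(patches):
    return sorted(patches, key=lambda p: (p["row"], p["col"]))

def semantic_order(patches):
    # A learned model would predict reading_rank; here it is given.
    return sorted(patches, key=lambda p: p["reading_rank"])

patches = [
    {"row": 0, "col": 1, "reading_rank": 2, "text": "sidebar"},
    {"row": 0, "col": 0, "reading_rank": 0, "text": "title"},
    {"row": 1, "col": 0, "reading_rank": 1, "text": "body"},
]

rigid = [p["text"] for p in raster_order(patches)]    # layout order
flow = [p["text"] for p in semantic_order(patches)]   # logical order
```

For a page with a sidebar, raster order interleaves unrelated regions, while reading order keeps the main text contiguous, which is what matters for documents and diagrams with complex layouts.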
GitHub
GitHub - deepseek-ai/DeepSeek-OCR-2: Visual Causal Flow
The “One Person Company” (OPC) model is booming, especially in innovation hubs like Shenzhen, where AI-powered entrepreneurship is reshaping the business landscape.
These OPCs, often led by a single founder supported by AI and minimal staff, offer fast decision-making, low costs, and high flexibility. Shenzhen is building dedicated OPC hubs, attracting creators nationwide.
Moonshot AI released Kimi K2.5, Open-Source Visual Agentic Intelligence
Global SOTA on Agentic Benchmarks: HLE full set (50.2%), BrowseComp (74.9%)
Open-source SOTA on Vision and Coding: MMMU Pro (78.5%), VideoMMMU (86.6%), SWE-bench Verified (76.8%)
Code with Taste: turn chats, images & videos into aesthetic websites with expressive motion.
Agent Swarm (Beta): self-directed agents working in parallel, at scale. Up to 100 sub-agents, 1,500 tool calls, and 4.5× faster than a single-agent setup.
K2.5 is now live on kimi.com in chat mode and agent mode.
K2.5 Agent Swarm in beta for high-tier users.
For production-grade coding, you can pair K2.5 with Kimi Code.
Weights & code.
Qwen released Qwen3-Max-Thinking, its flagship reasoning model, along with DeepPlanning.
Qwen says it demonstrates performance comparable to models such as GPT-5.2 Thinking and Opus 4.5.
Key innovations:
1. Adaptive tool-use: intelligently leverages Search, Memory & Code Interpreter without manual selection
2. Test-time scaling: multi-round self-reflection beats Gemini 3 Pro on reasoning
3. From complex math (98.0 on HMMT Feb) to agentic search (49.8 on HLE)—it just thinks better.
DeepPlanning is a new benchmark for long-horizon agent planning in real-world scenarios.
HF
ModelScope.
OpenAI introduced Prism, a free, AI-native workspace for scientists to write and collaborate on research, powered by GPT-5.2.
Accelerating science requires progress on two fronts:
1. Frontier AI models that use scientific tools and can tackle the hardest problems
2. Integrating that AI into the products scientists use every day
Prism is free to anyone with a ChatGPT account, with unlimited projects and collaborators.
OpenAI
Prism | A free, LaTeX-native workspace for scientists
Write, edit, and collaborate on scientific documents in LaTeX with Prism—a free workspace integrating GPT-5.2 into research and writing.
Google introduced ATLAS: new scaling laws for massively multilingual language models.
Practical, data-driven guidance to balance data mix and model size, helping global developers better serve billions of non-English speakers.
research.google
ATLAS: Practical scaling laws for multilingual models
Big news in clinical AI: Aidoc secured FDA clearance for healthcare’s first comprehensive AI triage solution for body CT, powered by their CARE foundation model.
Healthcare AI | Aidoc Always-on AI
Aidoc Secures New FDA Clearance
Aidoc announced 11 newly cleared indications, combined with three existing ones, to introduce an AI safety net for crowded Emergency Departments and imaging backlogs.
Fidelity to launch dollar-backed stablecoin FIDD on Ethereum in coming weeks
The firm first said it was testing a stablecoin in early 2025, but had not committed to a launch at the time.
The token will be issued by Fidelity Digital Assets’ national trust bank and is expected to roll out to both retail and institutional customers.
Fidelity said it will oversee issuance and management of reserves for the stablecoin, leaning on its asset management arm, Fidelity Management & Research Company LLC, to handle reserve assets.
Customers will be able to purchase or redeem FIDD for $1 through Fidelity Digital Assets, Fidelity Crypto and Fidelity Crypto for Wealth Managers, with the stablecoin also transferable to any Ethereum mainnet address and available on major crypto exchanges where it is listed.
The Block
Fidelity to launch dollar-backed stablecoin FIDD on Ethereum in coming weeks
Fidelity Investments plans to launch its own Ethereum-based stablecoin, FIDD, as U.S. stablecoin regulation comes into focus.
In the last month, 1X, Skild, and Physical Intelligence all signaled a shift to human data.
Robotics is caught in a tug-of-war between quality and scale, where reality is the referee.
This essay explains why the robot models that best navigate the “Data Pareto Frontier” will win in 2026.
vincentliu.org
The Robotics Data Pareto Frontier ― Vincent Liu
The defining narrative of robotics in 2025 was not a new model architecture, but an enthusiasm for data. Despite a consensus around teleoperation as the gold...