Trump has just signed an Executive Order establishing the Strategic Bitcoin Reserve and U.S. Digital Asset Stockpile
The Reserve will be capitalized with Bitcoin owned by the federal government that was forfeited as part of criminal or civil asset forfeiture proceedings. This means it will not cost taxpayers a dime.
It is estimated that the U.S. government owns about 200,000 bitcoin; however, there has never been a complete audit. The E.O. directs a full accounting of the federal government’s digital asset holdings.
The U.S. will not sell any bitcoin deposited into the Reserve. It will be kept as a store of value. The Reserve is like a digital Fort Knox for the cryptocurrency often called “digital gold.”
Premature sales of bitcoin have already cost U.S. taxpayers over $17 billion in lost value. Now the federal government will have a strategy to maximize the value of its holdings.
The Secretaries of Treasury and Commerce are authorized to develop budget-neutral strategies for acquiring additional bitcoin, provided that those strategies impose no incremental costs on American taxpayers.
IN ADDITION, the Executive Order establishes a U.S. Digital Asset Stockpile, consisting of digital assets other than bitcoin forfeited in criminal or civil proceedings.
The government will not acquire additional assets for the Stockpile beyond those obtained through forfeiture proceedings.
The purpose of the Stockpile is responsible stewardship of the government’s digital assets under the Treasury Department.
X (formerly Twitter)
David Sacks (@davidsacks47) on X
Just a few minutes ago, President Trump signed an Executive Order to establish a Strategic Bitcoin Reserve.
The Reserve will be capitalized with Bitcoin owned by the federal government that was forfeited as part of criminal or civil asset forfeiture proceedings.…
Google co-founder Larry Page is building a new company called Dynatomics that’s focused on applying AI to product manufacturing
Page is reportedly working with a small group of engineers on AI that can create “highly optimized” designs for objects and then have a factory build them.
Chris Anderson, previously the CTO of Page-backed electric airplane startup Kittyhawk, is running the stealth effort.
The Information
Larry Page Has a New AI Startup
Google co-founder Larry Page has formed a new company, Dynatomics, to upend manufacturing with artificial intelligence. Page and a small group of engineers are working on ways to use large language models to design flying cars and other types of planes—and…
a16z introduced the Top 100 Gen AI Consumer Apps
In just 6 months, the consumer AI landscape has shifted—some AI apps surged, others stalled, and a few unexpected players vaulted over the competition.
A few key insights:
• DeepSeek is outpacing competing general assistant LLMs — in growth and engagement
• The AI apps people are willing to pay for diverge from the most popular
• After a year-long plateau, ChatGPT's growth has come roaring back.
• AI video has finally broken through, with some high-quality new players
• So-called “vibecoding” tools are reshaping who can create with AI, not just who can use it
Cortical Labs announced the world's first biocomputer
CL1 merges real living neurons with a chip to solve complex problems and redefine research.
Tom's Hardware
World's first 'body in a box' biological computer uses human brain cells with silicon-based computing
Cortical Labs said the CL1 will be available from June, priced at around $35,000.
FoundationStereo: A Revolutionary Breakthrough in Stereo Vision
NVIDIA researchers have just introduced FoundationStereo, a groundbreaking foundation model for stereo depth estimation that works right out of the box without any fine-tuning.
Stereo vision enables computer systems to perceive depth and create 3D representations of the world using images from two or more cameras - similar to how humans perceive depth through two eyes.
Code.
This capability is crucial for:
- Robotics and automation
- Autonomous vehicles
- AR/VR applications
- Industrial inspection
- Medical imaging
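The two-camera geometry described above reduces to one relation under the standard pinhole stereo model: depth equals focal length times baseline divided by disparity. A minimal pure-Python sketch with illustrative numbers (this is textbook stereo geometry, not FoundationStereo's pipeline):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Convert a pixel disparity to metric depth via depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero would mean infinite depth)")
    return focal_px * baseline_m / disparity_px

# A point that shifts 64 px between the two views, with an 800 px focal
# length and a 12 cm baseline, sits about 1.5 m from the cameras.
print(depth_from_disparity(64, focal_px=800, baseline_m=0.12))  # ≈ 1.5
```

Since disparity sits in the denominator, small disparity errors on far objects blow up into large depth errors, which is part of why robust stereo matching is hard.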
The Breakthrough
Until now, high-quality depth estimation from low-cost cameras has been a persistent challenge in industrial applications, often requiring investments of $10,000+ in specialized structured-light scanners.
FoundationStereo changes the game by:
Zero-shot generalization: The model works across diverse scenarios without domain-specific fine-tuning
Handling challenging cases: Exceptional performance on transparent objects, reflective surfaces, and complex lighting conditions
Leveraging monocular priors: Adapting rich geometric priors from vision foundation models
Massive training dataset: Built on a 1-million stereo pair synthetic dataset with unprecedented diversity and photorealism
Technical Innovation
The researchers developed several novel components:
Side-Tuning Adapter that leverages internet-scale knowledge from monocular depth estimation models
Attentive Hybrid Cost Filtering with 3D Axial-Planar Convolution
Disparity Transformer for long-range context reasoning
arXiv.org
FoundationStereo: Zero-Shot Stereo Matching
Tremendous progress has been made in deep stereo matching to excel on benchmark datasets through per-domain fine-tuning. However, achieving strong zero-shot generalization - a hallmark of...
Ex-DeepMind researchers Misha Laskin and Ioannis Antonoglou launched Reflection AI with $130M funding
The startup plans to build superintelligent AI, starting with autonomous coding systems, with equal emphasis on research and product.
Previously, their team helped build AI systems like AlphaGo and GPT-4.
Reflection AI
Building frontier open intelligence.
OpenAI has evolved their thinking about AGI development.
Rather than viewing AGI as a sudden leap, they now see it as a continuous series of increasingly capable systems.
Key Risks They're Addressing
1. Human Misuse: People using AI in ways that violate laws and democratic values.
2. Misaligned AI: AI systems acting in ways that don't align with human values and intentions.
3. Societal Disruption: Rapid AI-driven changes causing unpredictable effects on society and inequality.
Their Core Safety Principles
Embracing Uncertainty
They treat safety as a science, learning through real-world deployment rather than just theory. This includes rigorous measurement of risks and proactive mitigation strategies.
Defense in Depth
OpenAI applies multiple layers of safeguards, similar to other safety-critical fields. They teach models to understand safety values, follow instructions, and remain reliable even under uncertainty.
Methods that Scale
They develop safety approaches that become more effective as AI capabilities increase, even using current AI systems to help align more advanced ones.
Human Control
OpenAI places humans at the center of their alignment approach, creating transparent systems that people can meaningfully supervise. They incorporate public feedback into policy formation and work on interfaces that help humans guide AI effectively.
Community Effort
They recognize that ensuring safe AGI requires collaboration across industry, academia, government, and the public. OpenAI shares research, provides resources to the field, funds external research, and engages with policymakers.
While OpenAI has a clear vision for safety, they remain open to being wrong about how AI progress will unfold and welcome diverse perspectives on AI risk management.
OpenAI
How we think about safety and alignment
The mission of OpenAI is to ensure artificial general intelligence (AGI) benefits all of humanity. Safety—the practice of enabling AI’s positive impacts by mitigating the negative ones—is thus core to our mission.
Former Google CEO Eric Schmidt is taking a controlling stake in Relativity Space and taking over as CEO.
The company makes reusable rockets like SpaceX and has raised $2.4B to date. It's a Y Combinator company from the W16 batch.
NY Times
Eric Schmidt Joins Relativity Space, a Rocket Start-Up, as C.E.O.
The former Google chief executive is taking a controlling interest in Relativity Space, which aims to build low-cost, reusable rockets to compete against Elon Musk’s SpaceX and to reach Mars.
The OWASP Top 10 Agentic AI Threats and Mitigations report has put together a threat-model-based framework to secure autonomous AI systems powered by GenAI agents.
LeanAgent: the first lifelong learning agent for formal theorem proving in Lean.
LLMs have been integrated with interactive proof assistants like Lean for theorem proving, where any proof the assistant accepts is formally verified to be correct.
So far, however, these LLMs cannot continuously generalize to new knowledge and struggle with advanced mathematics.
LeanAgent continuously learns and improves on ever-expanding mathematical knowledge without forgetting what it learned before. It has a curriculum learning strategy that optimizes the learning trajectory in terms of mathematical difficulty, a dynamic database for efficient management of evolving mathematical knowledge, and progressive training to balance stability and plasticity.
LeanAgent successfully proves 155 theorems across 23 diverse Lean repositories where formal proofs were previously missing, many from advanced mathematics. It proves challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics.
LeanAgent achieves exceptional scores in stability and backward transfer, where learning new tasks improves performance on previously learned tasks.
Code.
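The curriculum strategy can be illustrated with a toy sketch: order repositories from easy to hard, then train on them in that order so skills learned early scaffold the harder material. The difficulty scores and repository names below are hypothetical, not LeanAgent's actual complexity metric.

```python
# Hypothetical repositories with made-up difficulty scores.
repos = [
    {"name": "algebraic-topology", "difficulty": 8.7},
    {"name": "basic-arithmetic",   "difficulty": 1.2},
    {"name": "abstract-algebra",   "difficulty": 6.4},
]

# Curriculum learning: sort by difficulty, easiest first.
curriculum = sorted(repos, key=lambda r: r["difficulty"])

for repo in curriculum:
    # Placeholder for the real work: update the premise database,
    # progressively fine-tune the retriever, attempt missing proofs.
    print(f"training on {repo['name']} (difficulty {repo['difficulty']})")
```

The stability/plasticity balance mentioned above is about making each step of this loop improve on new repositories without degrading performance on the earlier ones.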
arXiv.org
LeanAgent: Lifelong Learning for Formal Theorem Proving
Large Language Models (LLMs) have been successful in mathematical reasoning tasks such as formal theorem proving when integrated with interactive proof assistants like Lean. Existing approaches...
OpenAI launches new tools to help businesses build AI agents
New tools: file search (RAG), web search, and computer use (Operator), all packaged together in the Responses API.
Plus, they've upgraded Swarm into the open-source Agents SDK to make it easy to build, orchestrate, and monitor systems of agents.
Web search in the OpenAI API—get fast, up-to-date answers with links to relevant web sources. Powered by the same model used for ChatGPT search.
Now with the Responses API, OpenAI wants to sell access to the components that power AI agents, allowing developers to build their own Operator- and deep research-style agentic applications.
OpenAI hopes that developers can create some applications with its agent technology that feel more autonomous than what’s available today.
OpenAI
New tools for building agents
We’re evolving our platform to help developers and enterprises build useful and reliable agents.
Google introduced Gemma 3 and ShieldGemma 2
Google DeepMind released the Gemma 3 family of open models, available in four sizes: 1B, 4B, 12B, and 27B.
Key Capabilities:
1. World's best single-accelerator model: outperforms Llama-405B, DeepSeek-V3 and o3-mini in preliminary human preference evaluations on LMArena's leaderboard
2. Global language support: out-of-the-box support for over 35 languages and pretrained support for over 140 languages
3. Multimodal reasoning: analyze images, text, and short videos
4. Expanded context: 128k-token context window
5. Function calling: supports function calling and structured output for AI-driven workflows
6. Quantized models: official quantized versions to reduce model size and computational requirements
Alongside Gemma 3, Google is launching ShieldGemma 2 — a powerful 4B image safety checker built on the Gemma 3 foundation that provides:
- A ready-made solution for image safety
- Safety labels across three categories: dangerous content, sexually explicit content, and violence
- Customization options for specific safety needs
Paper.
HuggingFace.
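To see why the official quantized versions matter, here is a back-of-the-envelope sketch of weight memory at different precisions. These are rough lower bounds for the weights alone; activations and the KV cache add more on top.

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory (GB) needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Gemma 3 sizes at full half-precision vs 4-bit quantization.
for params in (1, 4, 12, 27):
    fp16 = weight_gb(params, 16)
    int4 = weight_gb(params, 4)
    print(f"{params}B params: ~{fp16:.1f} GB at fp16, ~{int4:.1f} GB at int4")
```

The arithmetic explains the "single GPU or TPU" pitch: 27B parameters need roughly 54 GB at fp16 but only about 13.5 GB at 4-bit, which fits on one high-end accelerator.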
Google
Introducing Gemma 3: The most capable model you can run on a single GPU or TPU
Today, we're introducing Gemma 3, our most capable, portable and responsible open model yet.
Claude just bought the Solana team a 5th bday gift on Amazon.
Paid in USDC on Solana, shipped to their NY office.
Agents can now buy anything on Amazon using the goat MCP adapter + Crossmint Checkout.
Crossmint’s Headless Checkout plugin gives Claude access to Amazon’s entire product catalog.
Hugging Face trained a new open model, Open-R1, that outperforms DeepSeek-R1 on the International Olympiad in Informatics.
The 7B variant beats Claude 3.7 Sonnet. The dataset, training recipe and benchmark are all public.
Summary of insights from OpenAI AMA on X following the launch of Agent Tools and APIs
Responses API and Tools
1. Operator functionality (CUA model) is available starting today through the Responses API
2. Responses API is stateful by default, supports retrieving past responses, chaining them, and will soon reintroduce threads
3. Code Interpreter tool is planned as the next built-in tool in the Responses API
4. Web search can be used together with structured outputs by defining a JSON schema explicitly
5. Assistants API won't be deprecated until migration to Responses API is possible without data loss
6. "Assistants" and "agents" terms are interchangeable, both describing systems independently accomplishing user tasks
7. Curl documentation is provided in API references, with more examples coming soon
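Point 4 above can be sketched as a request body: the web-search tool and a JSON schema ride in the same request. Field names ("web_search_preview", a schema under text.format) follow OpenAI's launch-time documentation but should be treated as assumptions and checked against the current API reference before use.

```python
import json

# A Responses API request body combining web search with structured output.
# This dict is what you would POST to /v1/responses with an API key.
payload = {
    "model": "gpt-4o",
    "input": "What was the most recent Gemma release? Answer as JSON.",
    "tools": [{"type": "web_search_preview"}],
    "text": {
        "format": {
            "type": "json_schema",
            "name": "release_info",
            "schema": {
                "type": "object",
                "properties": {
                    "model_name": {"type": "string"},
                    "release_date": {"type": "string"},
                },
                "required": ["model_name", "release_date"],
                "additionalProperties": False,
            },
        }
    },
}

print(json.dumps(payload, indent=2))
```

The model searches the web for fresh facts, then the schema constrains the final answer to the declared JSON shape.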
Agents SDK and Compatibility
- Agents SDK supports external API calls through custom-defined function tools
- SDK compatible with external open-source models that expose a Chat Completions-compatible API endpoint (no Responses API compatibility required)
- JavaScript and TypeScript SDKs coming soon; Java SDK may be prioritized based on demand
- Tracing functionality covers external Chat Completions-compatible models as a "generation span"
- Agents SDK supports MCP connections through custom-defined function tools
- Asynchronous operations aren't natively supported yet; interim solution is immediately returning "success" followed by updating via user message later
- Agent tools can include preset variables either hardcoded or through a context object
- Privacy can be managed using guardrails and an input_filter to constrain context during agent handoffs
- Agents SDK workflows can combine external Chat Completions-compatible models and OpenAI models, including built-in tools like CUA
- Agentic "deep research" functionality can be built using Responses API or Agents SDK
File and Vector Store Features
- File search returns citation texts via the "annotations" parameter
- Vector stores already support custom chunking and hybrid search, with further improvements planned
- Images are not yet supported in vector stores, but entire PDFs can be directly uploaded into the Responses API for small-document use cases
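Custom chunking, in spirit, looks like the sketch below: split a document into overlapping windows before indexing, so each embedded chunk carries some shared context with its neighbors. The parameter names here are illustrative; the vector-store API exposes its own chunking options (max chunk size, overlap).

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size character chunks, with `overlap` characters shared by neighbors."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Stereo vision enables depth perception from two cameras."
for chunk in chunk_text(doc, chunk_size=30, overlap=10):
    print(repr(chunk))
```

Overlap trades index size for recall: a fact straddling a chunk boundary still appears whole in at least one chunk.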
Computer Use Model (CUA) and Environment
- Docker environments for computer use must be managed by developers; recommended third-party cloud services are Browserbase and Scrapybara, with provided sample apps
- Predefined Ubuntu environments and company-specific setups can be created using OpenAI's CUA starter app
- Integrated VMs or fully managed cloud environments for CUA are not planned yet; developers are encouraged to use sample apps with third-party hosting providers
- CUA model primarily trained on web tasks but shows promising performance in desktop applications; still early stage
Realtime Usage Tracking
- OpenAI currently doesn't provide a built-in solution for tracking realtime usage via WebRTC ephemeral tokens; using a relay/proxy is recommended as an interim solution
OpenAI Models and Roadmap
- o1-pro will be available soon in Responses API
- o3 model development continues with API release planned; details forthcoming
Strategy and Positioning
OpenAI identifies as both a product and model company, noting ChatGPT's 400M weekly users help improve model quality, and acknowledges they won't build all needed AI products themselves
Unexpected Use Cases
Early Responses API tests revealed use cases such as art generation, live event summarization, apartment finding, and belief simulations
Google introduced Gemini Robotics, the most advanced vision-language-action (VLA) model in the world
Gemini Robotics builds on Gemini 2.0, introducing advanced vision-language-action capabilities that let it physically control robots.
The technology enables robots to understand and react to the physical world, performing tasks like desk cleanup through voice commands, as part of a broader push toward embodied AI.
Gemini Robotics-ER, a related model, enhances spatial understanding, allowing robots to adapt to dynamic environments and interact seamlessly with humans.
Tech report.
Hugging Face (LeRobot) and Yaak released the world's largest open-source self-driving dataset
To search the data, Yaak is launching Nutron, a tool for natural-language search of robotics data. Check out the video to see how it works.
Natural language search of multi-modal data.
Open sourcing L2D dataset - 5,000 hours of multi-modal self-driving data.
Try Nutron.
huggingface.co
LeRobot goes to driving school: World’s largest open-source self-driving dataset
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
The first quantum supremacy claim for a useful application
D-Wave's quantum computer performed a complex simulation in minutes, with a level of accuracy that would take nearly a million years on the DOE's GPU-based supercomputer.
In addition, solving this problem on the classical supercomputer would require more than the world's annual electricity consumption.
D-Wave
Beyond Classical: D-Wave First to Demonstrate Quantum Supremacy on Useful, Real-World Problem
IOSCO_AI_1741862094.pdf
1.5 MB
IOSCO Report: AI in Capital Markets - Uses, Risks, and Regulatory Responses
This report delves into the current and potential applications of AI within financial markets, outlines the associated risks and challenges, and examines how regulators and market participants are adapting to these changes.
The report cites regulatory approaches from Hong Kong, the EU, Canada, the US, Singapore, the Netherlands, the UK, Greece, Japan, Brazil, and Australia.
Key AI Applications in Financial Markets:
Decision-Making Support:
- Robo-advising (automated investment advice)
- Algorithmic trading
- Investment research and market sentiment analysis
Specific AI Use Cases:
Nasdaq:
- Developed the Dynamic M-ELO AI-driven trading order, optimizing order holding time for improved execution efficiency.
Broker-Dealers:
- Customer interaction via chatbots
- Algorithmic trading enhancements
- Fraud and anomaly detection
Asset Managers:
- Automated investment advice
- Investment research
- Portfolio construction and optimization