All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
State of AI: China - Key Insights from Q1 2025

While the US maintains an overall lead in the intelligence frontier, China is no longer far behind. Few other countries have demonstrated frontier-class training.

1. Closing the AI Gap

- Chinese AI models have rapidly caught up to US capabilities
- DeepSeek R1 scored 89 on the Intelligence Index, approaching OpenAI's o3 (94)
- Multiple Chinese models now match frontier US model performance

2. Leading Players & Performance
Top Chinese Models:
- DeepSeek R1: 89
- DeepSeek V3: 80
- Qwen 2.5 Max (Alibaba): 79
Market Leaders:
- Big Tech: Alibaba, Baidu, ByteDance, Tencent
- Rising Stars: DeepSeek, MoonShot AI, Zhipu

3. Export Control Impact

- US restrictions on high-end AI chips continue
- NVIDIA H100 (989 TFLOPs) banned for export
- China-approved NVIDIA H20 limited to 148 TFLOPs
- New restrictions expected in May 2025

4. Investment & Scale
Major Funding:
- MoonShot AI: $1.67B
- Zhipu: $1.12B
- Baichuan: $1.04B

5. Emerging Trends
- Focus on reasoning capabilities
- Open-source model development
- Multiple companies releasing frontier-level models
- Strong emphasis on domestic technology development
The Trump family's company, Trump Media & Technology Group (TMTG), announced plans to launch a Made in America ETF, a U.S. Energy Independence ETF, and a Bitcoin Plus ETF (SMA).

The plan aims to give investors exposure to the U.S. energy, manufacturing, and Bitcoin sectors.
Agentic Object Detection

Given a text prompt like “unripe strawberries” or “Kellogg’s branded cereal” and an image, use an agentic workflow to reason at length and detect the specified objects.

No need to label any training data.
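The post doesn't detail the workflow, but the general agentic pattern is easy to sketch: propose candidate regions, let a vision-language model reason about each one against the prompt, and refine the rejects for another pass. A minimal sketch in Python; the propose/judge/refine callables are hypothetical stand-ins, not any product's actual API.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def agentic_detect(image, prompt: str,
                   propose: Callable[..., List[Box]],
                   judge: Callable[..., bool],
                   refine: Callable[..., List[Box]],
                   max_rounds: int = 3) -> List[Box]:
    """Propose candidate regions, have a VLM judge each one against the
    text prompt, and refine rejected boxes for another pass."""
    accepted: List[Box] = []
    candidates = propose(image)
    for _ in range(max_rounds):
        rejected: List[Box] = []
        for box in candidates:
            # The "reasoning at length" step: a VLM inspects the crop and
            # decides whether it matches, e.g., "unripe strawberries".
            (accepted if judge(image, box, prompt) else rejected).append(box)
        if not rejected:
            break
        candidates = refine(image, rejected)  # merge/adjust boxes, then retry
    return accepted
```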
GitHub has infused the power of agentic AI into GitHub Copilot

With agent mode in VS Code, Copilot goes beyond your initial request, completing all necessary subtasks and even inferring unspecified tasks.

Agent mode allows Copilot to iterate on its own code, propose and guide terminal commands, and analyze and resolve run-time errors. Available today for VS Code Insiders.

GitHub Copilot Edits, a multi-file editing tool that combines the best of inline edits and chat, is now GA. It lets you make changes across multiple files by prompting in natural language. And choose the model you prefer: OpenAI’s GPT-4o, o1, or o3-mini; Anthropic’s Claude 3.5 Sonnet; and now, Gemini 2.0 Flash.
Google's AlphaGeometry 2 just solved 84% of the International Math Olympiad (IMO) geometry problems from 2000-2024.

These are math problems most professors couldn't solve.
Apple's AI researchers introduced EMOTION, a framework for expressive humanoid gestures.

1. LLMs interpret social context and generate motions from human demos
2. Robots execute motions via inverse kinematics
3. Gestures are refined through human feedback

Hardware: Fourier GR-1
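A minimal sketch of that three-stage loop, assuming injected callables for each stage; the function names below are hypothetical placeholders, not Apple's actual EMOTION interface.

```python
def emotion_style_loop(context, demos, llm_generate, solve_ik, execute,
                       get_feedback, rounds=3):
    """Hypothetical three-stage loop: an LLM proposes a gesture trajectory,
    inverse kinematics maps it to joint commands, a human critiques it."""
    feedback = None
    commands = []
    for _ in range(rounds):
        # 1. LLM reads the social context and human demos (plus any prior
        #    feedback) and proposes a sequence of end-effector poses.
        trajectory = llm_generate(context, demos, feedback)
        # 2. Inverse kinematics converts each pose into joint angles.
        commands = [solve_ik(pose) for pose in trajectory]
        # 3. Execute on the robot and collect human feedback to refine
        #    the next round's generation.
        execute(commands)
        feedback = get_feedback()
    return commands
```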
Prime Intellect introduced SYNTHETIC-1: collaboratively generating the largest synthetic dataset of verified reasoning traces for math, coding, and science using DeepSeek-R1.

SYNTHETIC-1:

- 1.4 million high-quality tasks & verifiers
- Public synthetic data run - allowing anyone to contribute compute
- GENESYS: open, extendable synthetic data generation framework + call for crowdsourcing tasks & verifiers.
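The generate-then-verify pattern behind datasets like this fits in a few lines. A hedged sketch: `model` and `verifier` are injected callables, not GENESYS's real interface.

```python
def collect_verified_traces(tasks, model, verifier, samples_per_task=4):
    """Sample chain-of-thought completions and keep only the ones whose
    answers pass the task's verifier (exact match for math, unit tests
    for code, etc.)."""
    kept = []
    for task in tasks:
        for _ in range(samples_per_task):
            trace = model(task["prompt"])    # full reasoning trace + answer
            if verifier(task, trace):        # symbolic check / test suite
                kept.append({"prompt": task["prompt"], "trace": trace})
    return kept
```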
Meta has unveiled PARTNR, an open-source research framework that aims to revolutionize how humans and robots work together.

This release marks a significant milestone in making social robotics more accessible to the research community.

Key Components of the Release:

1. A comprehensive benchmark for evaluating human-robot interaction

2. A large-scale dataset for training social robots

3. An advanced planning model designed for collaborative scenarios

Building upon Meta's successful Habitat platform, which has been instrumental in training AI agents for navigation and interaction in virtual environments, PARTNR takes the next step in creating more socially adept robots.

The benchmark system provides a consistent way to measure improvements in human-robot interaction, helping researchers objectively assess their progress.

The large-scale dataset will allow researchers to train robots more effectively, potentially leading to more natural and intuitive human-robot interactions.

Implications for the Future:

• More intuitive human-robot collaboration in various settings
• Faster development of social robotics applications
• Better standardization of robotics research methodologies
• Potential breakthroughs in robot social intelligence
New open source reasoning model:
Huginn-3.5B reasons implicitly in latent space

Unlike o1 and R1, latent reasoning doesn’t need special chain-of-thought training data, and doesn’t produce extra CoT tokens at test time.

Huginn was built for reasoning from the ground up, not just fine-tuned on CoT.

Huginn is just a proof of concept.

Still, Huginn-3.5B can beat OLMo-7B-0724 (with CoT) at GSM8K by a wide margin (42% vs 29%).

Huginn has half the parameters, 1/3 the training tokens, no explicit fine-tuning, and the learning rate was never annealed.

Latent reasoning still wins.
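A toy sketch of the idea, loosely modeled on the recurrent-depth design: one shared core block is iterated in hidden space at inference time, so "thinking longer" means more latent iterations rather than more CoT tokens. All sizes and layer choices below are arbitrary assumptions, not Huginn's actual architecture.

```python
import torch.nn as nn

class LatentRecurrentLM(nn.Module):
    """Toy depth-recurrent model: a shared core block is applied
    repeatedly in latent space before decoding to logits."""
    def __init__(self, vocab=32000, d=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.prelude = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.core = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.head = nn.Linear(d, vocab)

    def forward(self, tokens, reasoning_steps=8):
        h = self.prelude(self.embed(tokens))
        for _ in range(reasoning_steps):  # scale compute without extra tokens
            h = self.core(h)
        return self.head(h)
```

The real model is more involved; this only captures the iterate-in-latent-space, weight-sharing idea.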
HuggingFace published the second OpenR1 update with OpenR1-220k-Math, a new large-scale dataset for mathematical reasoning generated by DeepSeek R1.

They generated 800k+ reasoning traces on 512 H100s in 3 days.

Datasets.
A Reddit user metaprompted a simple question through o3-mini, then o1 pro, then Deep Research, and got incredible results.

It built a ~10,000-word software architecture design for building a Python interpreter in Kubernetes.

The answer is better than 99% of tech teams, imo.

Source.
Full step-by-step.
2025.02.05.636287v1.full.pdf
An Important Advance in Brain-Movement Understanding: Beyond the Motor Cortex

Scientists from Maastricht University and their international colleagues have made a revolutionary discovery that changes our understanding of how the brain controls movement.

Their new research, published in 2025, reveals that movement control is far more distributed throughout the brain than previously thought.

Key Discoveries:

1. Whole-Brain Movement Control
- Movement information isn't just in the motor cortex
- Successfully decoded movement from 1,903 recording points across 119 brain areas
- Even deep brain structures like basal ganglia contain movement information

2. Technical Achievement
- First-ever simultaneous decoding of 12 different movement parameters
- Used the breakthrough PSID (preferential subspace identification) algorithm for real-time analysis (see the sketch at the end of this post)
- Recorded from an unprecedented number of brain areas simultaneously

3. Goal-Directed Framework
- Revolutionary finding: the brain encodes movements relative to goals
- Movement planning is fundamentally goal-oriented
- This changes how we think about motor control

4. Clinical Impact
- Opens new possibilities for stroke patients
- Suggests alternative pathways for movement restoration
- Could revolutionize brain-computer interfaces

This research fundamentally changes our approach to treating movement disorders. By showing that movement information exists throughout the brain, it offers hope for patients with damaged motor cortex, suggesting we can tap into alternative brain regions for movement recovery.

The future implications are enormous: from better neural prosthetics to more effective stroke rehabilitation strategies. This work doesn't just advance science - it opens new doors for treating patients who have lost motor function.
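PSID itself is a state-space identification method, which this sketch does not reproduce; plain ridge regression on synthetic data stands in just to illustrate simultaneous multi-parameter decoding from many recording sites. The site and parameter counts mirror the post; the data is randomly generated for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in for the recordings: 1,903 recording points,
# 12 movement parameters, as in the study.
rng = np.random.default_rng(0)
T, sites, params = 5000, 1903, 12
neural = rng.standard_normal((T, sites))           # fake neural activity
mixing = rng.standard_normal((params, params))
movement = neural[:, :params] @ mixing + 0.1 * rng.standard_normal((T, params))

# One linear decoder maps all sites to all 12 parameters at once.
decoder = Ridge(alpha=1.0).fit(neural[:4000], movement[:4000])
print("held-out R^2:", decoder.score(neural[4000:], movement[4000:]))
```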
Statement from Dario Amodei on the Paris AI Action Summit:
'Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage'.
Microsoft and Anduril Industries are partnering to take over development and production of the U.S. Army’s Integrated Visual Augmentation System (IVAS) program.

Palmer Luckey announced this on his blog; it's obviously a big moment for him.
When multiple AI agents work together on a task, even one problematic agent can cause the entire system to fail.

In this paper, researchers propose a monitoring and intervention framework that can detect when an agent is likely to make a mistake and prevent that mistake before it happens by monitoring agents' uncertainty levels during action prediction.
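The paper's exact mechanism isn't reproduced here, but the core idea (gate an agent's action on its own predictive uncertainty) can be sketched in a few lines; the policy and fallback interfaces are illustrative assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def monitored_step(agent_policy, state, threshold=1.0, fallback=None):
    """Gate an agent's action on its own uncertainty: if the predicted
    action distribution is too flat, intervene before the action runs.
    `agent_policy` returns {action: probability}; illustrative only."""
    dist = agent_policy(state)
    if entropy(dist.values()) > threshold:
        # High uncertainty: defer (ask a human, hand off to another
        # agent, or no-op) instead of letting a likely mistake through.
        return fallback(state) if fallback else None
    return max(dist, key=dist.get)  # act greedily when confident
```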
OpenAI dropped Competitive Programming with Large Reasoning Models

Notably, o3 achieves a gold medal at the 2024 IOI and obtains a Codeforces rating on par with elite human competitors.

Overall, these results indicate that scaling general-purpose reinforcement learning, rather than relying on domain-specific techniques, offers a robust path toward state-of-the-art AI in reasoning domains, such as competitive programming.
Microsoft Research presented
NatureLM: Deciphering the Language of Nature for Scientific Discovery

- Presents a sequence-based science foundation model designed for scientific discovery
- Can generate and optimize small molecules, proteins, RNA, and materials using text instructions
- SotA performance in tasks like SMILES-to-IUPAC translation and retrosynthesis on USPTO-50k
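For a sense of what the SMILES-to-IUPAC task looks like as an instruction pair (the record schema below is a generic assumption, not NatureLM's actual prompt format; the molecule is aspirin):

```python
# A generic instruction-style record for SMILES-to-IUPAC translation.
example = {
    "instruction": "Translate this SMILES string into its IUPAC name.",
    "input": "CC(=O)OC1=CC=CC=C1C(=O)O",   # aspirin
    "output": "2-acetyloxybenzoic acid",
}
print(f'{example["input"]} -> {example["output"]}')
```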
Building websites / apps with AI - a new a16z thesis and market map

There's been an explosion of products that help users "vibe code" a web app from text prompts.

Thousands of users - from consumers to experienced developers - are sharing what they've made with these tools.

Most use an LLM to generate code based on the prompt, and then run it through middleware logic for things like tracking files and API calls.

The agents then push the code to a browser execution environment that streams the display to the user.
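As a rough illustration of that pipeline, with a local subprocess standing in for the browser execution environment and no middleware at all:

```python
import subprocess
import sys
import tempfile

def run_generated_app(generate_code, prompt: str, timeout=30):
    """Toy version of the vibe-coding pipeline: ask a model for code,
    persist it, execute it in a separate process, and capture the output.
    `generate_code` is any prompt -> source callable; real products add
    middleware (file tracking, API-call logging) and a sandboxed browser."""
    source = generate_code(prompt)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout, result.stderr
```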

But do they really work?

Yes and no. They excel at simple builds. And if you can't code otherwise, they can feel like magic.

But there's a limit to what they can reliably generate. Integrations are difficult, bugs persist, and code can get "too big" quickly.
HuggingFace released 8 GB of high-quality math reasoning data

They temporarily commandeered the HF cluster to generate 1.2 million reasoning-filled solutions for 500,000 NuminaMath problems using the DeepSeek-R1 model.

This is significant for AI development because:

1. It creates a large dataset of mathematical reasoning examples
2. These solutions can be used to train future AI models
3. It demonstrates the current capabilities of AI in solving complex mathematical problems

Datasets.
OpenAI roadmap update for GPT-4.5 and GPT-5

Sam Altman just dropped major news about the future of their AI development. The company is making a dramatic shift toward unified intelligence with GPT-4.5 and GPT-5.

Key highlights from the announcement:

• GPT-4.5 (codenamed "Orion") will be their final traditional model before a complete system overhaul

• GPT-5 represents a groundbreaking merger of all OpenAI technologies, including o3, marking the end of standalone models

• The most surprising part? Free ChatGPT users will get unlimited access to GPT-5's standard intelligence setting

They're moving toward what they call "magic unified intelligence" - essentially making AI that "just works" without users needing to understand the technical details.

Premium features for Plus and Pro users will include enhanced intelligence levels plus access to voice, canvas, search, and deep research capabilities.
Scientists Make Breakthrough in Understanding How the Brain Learns

Researchers have made an extraordinary discovery about how our brains create mental maps during learning. Using cutting-edge technology, they watched thousands of neurons in the brains of mice as the animals learned to navigate a virtual reality maze over several weeks.

Key Discoveries:

1. For the first time, scientists witnessed the entire process of how the brain forms a cognitive map in real-time. They observed thousands of neurons in the hippocampus (the brain's memory center) as they gradually learned to distinguish between similar-looking but functionally different locations.

2. The research revealed that the brain creates something similar to a "state machine" - a sophisticated mental model that helps navigate complex environments. Think of it as your brain creating a GPS that not only knows where you are but also understands what that location means for what you should do next (a toy illustration follows this list).

3. Most surprisingly, once the brain builds this mental model, it can quickly adapt it to new but similar situations. The mice learned modified versions of the task much faster than the original one, showing how our brains can efficiently transfer knowledge to new situations.
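A toy illustration of the "state machine" analogy: each (location, action) pair determines the next state, which is roughly the structure the cognitive map is said to approximate. The states and actions are invented for illustration.

```python
# Invented maze states/actions, purely to illustrate the analogy.
transitions = {
    ("corridor_A", "run"):  "corridor_B",
    ("corridor_B", "run"):  "corridor_A",
    ("corridor_A", "lick"): "reward_zone",
    ("reward_zone", "run"): "corridor_B",
}

state = "corridor_A"
for action in ["run", "run", "lick"]:
    # The map answers: given where I am and what I do, where do I end up,
    # and what does that location mean for what comes next?
    state = transitions.get((state, action), state)
    print(action, "->", state)
```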

Why This Matters:

- This research could revolutionize our understanding of learning and memory
- It may lead to better treatments for memory-related diseases
- The findings could help develop more efficient artificial intelligence systems that learn more like biological brains

The study, conducted using state-of-the-art brain imaging technology, represents a significant step forward in neuroscience, offering new insights into how our brains create and store memories.