First comprehensive framework for how AI agents actually improve through adaptation.
Researchers from many universities surveyed the rapidly expanding landscape of agentic AI adaptation.
What they found: a fragmented field with no unified understanding of how agents learn to use tools, when to adapt the agent versus the tool, and which strategies work for which scenarios.
These are all important for building production-ready AI agents.
Adaptation in agentic AI follows four distinct paradigms that most practitioners conflate or ignore entirely.
The framework organizes all adaptation strategies along two dimensions (see the toy sketch after this list).
- Agent Adaptation (A1, A2): modifying the agent's parameters, representations, or policies.
- Tool Adaptation (T1, T2): optimizing external components like retrievers, planners, and memory modules while keeping the agent frozen.
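A toy Python sketch of the distinction, with hypothetical names (this is my own illustration, not the survey's code): in agent adaptation the agent's own policy is updated, while in tool adaptation the agent stays frozen and an external component such as a retriever is tuned.

```python
# Toy illustration of the survey's two dimensions (names and structure are my
# own, not the paper's code): agent adaptation changes the agent itself, while
# tool adaptation keeps the agent frozen and tunes external components.

def update(params, feedback):
    # Placeholder optimizer step; in practice SGD, RL, preference tuning, etc.
    return {k: v + 0.01 * feedback for k, v in params.items()}

class Agent:
    def __init__(self):
        self.policy = {"w": 0.0}              # the agent's parameters / policy

class Retriever:                               # one example of an external tool
    def __init__(self):
        self.index = {"temperature": 1.0}

def agent_adaptation(agent, feedback):
    """A1/A2-style: modify the agent's parameters, representations, or policy."""
    agent.policy = update(agent.policy, feedback)

def tool_adaptation(retriever, feedback):
    """T1/T2-style: the agent stays frozen; only the external tool is optimized."""
    retriever.index = update(retriever.index, feedback)

agent, retriever = Agent(), Retriever()
agent_adaptation(agent, feedback=1.0)          # the agent's own weights move
tool_adaptation(retriever, feedback=1.0)       # the agent is untouched
```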
GitHub
Awesome-Adaptation-of-Agentic-AI/paper.pdf at main · pat-jj/Awesome-Adaptation-of-Agentic-AI
Repo for "Adaptation of Agentic AI". Contribute to pat-jj/Awesome-Adaptation-of-Agentic-AI development by creating an account on GitHub.
Diffusion LLMs are the new frontier? InclusionAI has released LLaDA 2.0, the first diffusion model to scale to 100B params, matching frontier LLMs while achieving 2× faster inference.
LLaDA is 2.3× faster on average. We see unique high-TPF advantages in coding via parallel decoding.
The Challenge: AR models had a 3-year head start.
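Not the official implementation, but a rough sketch of how masked-diffusion parallel decoding typically differs from autoregressive decoding: all positions start masked, the model predicts every masked position in one parallel forward pass, and only the most confident tokens are committed each step.

```python
import random

# Toy sketch of masked-diffusion parallel decoding (my simplification, not
# InclusionAI's code): predict all masked positions at once, commit the most
# confident ones, repeat. AR decoding would need one forward pass per token.

MASK = None
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]

def toy_model(tokens):
    """Stub: returns (prediction, confidence) for every masked position."""
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t is MASK}

def diffusion_decode(length=10, steps=4):
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while any(t is MASK for t in tokens):
        preds = toy_model(tokens)                      # one parallel forward pass
        best = sorted(preds.items(), key=lambda kv: -kv[1][1])[:per_step]
        for i, (tok, _conf) in best:
            tokens[i] = tok                            # commit high-confidence tokens
    return tokens

print(diffusion_decode())
```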
GitHub.
GitHub.
GitHub
GitHub - inclusionAI/dFactory: Easy and Efficient dLLM Fine-Tuning
Easy and Efficient dLLM Fine-Tuning. Contribute to inclusionAI/dFactory development by creating an account on GitHub.
NVIDIA launched the open Nemotron 3 model family, starting with Nano (30B-3A), which pushes the frontier of accuracy and inference efficiency with a novel hybrid SSM Mixture of Experts architecture.
Super and Ultra are coming in the next few months.
Nemotron 3 Super (~4X bigger than Nano) and Ultra (~16X bigger than Nano) are pretrained using NVFP4, a new "Latent Mixture of Experts" architecture that allows us to use 4X more experts for the same inference cost, and Multi-Token Prediction.
a16z released 17 crypto predictions for 2026. Most are obvious. A few are not.
The ones worth paying attention to:
1. Privacy becomes the strongest moat
Bridging tokens is easy. Bridging secrets is hard. Users on private chains are less likely to leave.
Winner-take-most dynamics emerge.
2. Know Your Agent (KYA)
Non-human identities outnumber human employees 96-to-1 in financial services.
The agent economy's bottleneck is identity.
3. AI agents are taxing the open web
They extract value from ad-supported sites while bypassing revenue streams.
The web needs real-time, usage-based compensation or content creation collapses.
a16z crypto
17 things we're excited about for crypto in 2026 - a16z crypto
DeepCode: Open Agentic Coding
DeepCode is an open agentic coding framework that treats repository synthesis as a channel-optimization problem, maximizing task-relevant signal under finite context budgets.
How does this work?
Scientific papers are high-entropy specifications with scattered multimodal constraints, equations, pseudocode, and hyperparameters. Naive approaches that concatenate raw documents with growing code history cause channel saturation, where redundant tokens mask critical algorithmic details and signal-to-noise ratio collapses.
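A hedged illustration of the "finite context budget" idea (my own toy code, not DeepCode's algorithm): rank candidate snippets by estimated task relevance and pack only the highest-signal ones into the budget, instead of concatenating everything.

```python
# Toy sketch of budget-constrained context packing (not DeepCode's actual
# method): keep the highest-relevance snippets that fit the token budget.

def pack_context(snippets, budget_tokens):
    """snippets: list of (text, relevance_score, token_cost)."""
    chosen, used = [], 0
    # Greedy by relevance-per-token; a stand-in for real signal estimation.
    for text, score, cost in sorted(snippets, key=lambda s: -s[1] / s[2]):
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return chosen

paper_chunks = [
    ("Algorithm 2 pseudocode", 0.90, 300),
    ("Hyperparameter table", 0.80, 120),
    ("Related-work section", 0.20, 900),
    ("Loss definition, Eq. 4", 0.85, 80),
]
print(pack_context(paper_chunks, budget_tokens=600))
```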
The results on OpenAI's PaperBench benchmark are impressive. DeepCode achieves 73.5% replication score, a 70% relative improvement over the best LLM agent baseline (o1 at 43.3%). It decisively outperforms commercial agents: Cursor at 58.4%, Claude Code at 58.7%, and Codex at 40.0%.
Most notably, DeepCode surpasses human experts. On a 3-paper subset evaluated by ML PhD students from Berkeley, Cambridge, and Carnegie Mellon, humans scored 72.4%. DeepCode scored 75.9%.
Principled information-flow management yields significantly larger performance gains than merely scaling model size or context length. The framework is fully open source.
arXiv.org
DeepCode: Open Agentic Coding
Recent advances in large language models (LLMs) have given rise to powerful coding agents, making it possible for code assistants to evolve into code engineers. However, existing methods still...
NEW Research from Meta Superintelligence Labs and collaborators
This new research introduces Parallel-Distill-Refine (PDR), a framework that treats LLMs as improvement operators rather than single-pass reasoners.
Instead of one long reasoning chain, PDR operates in phases (sketched in code below):
- Generate diverse drafts in parallel.
- Distill them into a bounded textual workspace.
- Refine conditioned on this workspace.
- Repeat.
Context length becomes controllable via degree of parallelism, no longer conflated with total tokens generated. The model accumulates wisdom across rounds through compact summaries rather than replaying full histories.
The researchers also trained an 8B model with operator-consistent RL to make training match the PDR inference interface. Mixing standard and operator RL yields an additional 5% improvement on both AIME benchmarks.
Bounded-memory iteration can substitute for long reasoning traces while holding latency fixed. Strategic parallelism and distillation are shown to beat brute-force sequence extension.
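A minimal sketch of the PDR control flow under assumed helper names (the `llm` calls below are hypothetical stand-ins for real model calls, not the paper's code):

```python
# Sketch of Parallel-Distill-Refine: generate drafts in parallel, distill them
# into a bounded workspace, refine against the workspace, repeat.

def pdr(problem, llm, rounds=3, parallel=8, workspace_limit=2000):
    workspace = ""                                    # bounded textual memory
    for _ in range(rounds):
        # 1) Generate diverse drafts in parallel, conditioned on the workspace.
        drafts = [llm(f"Solve:\n{problem}\nNotes:\n{workspace}\nSeed {i}")
                  for i in range(parallel)]
        # 2) Distill drafts into a compact summary instead of keeping full traces.
        workspace = llm("Summarize the key ideas and errors:\n" + "\n---\n".join(drafts))
        workspace = workspace[:workspace_limit]       # keep the workspace bounded
    # 3) Final refine step conditioned only on the distilled workspace.
    return llm(f"Solve:\n{problem}\nUsing these distilled notes:\n{workspace}")

def fake_llm(prompt):
    # Stub so the sketch runs; swap in a real model call.
    return f"[response to {len(prompt)} chars of prompt]"

print(pdr("Prove 1 + 1 = 2", fake_llm, rounds=2, parallel=3))
```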
arXiv.org
Rethinking Thinking Tokens: LLMs as Improvement Operators
Reasoning training incentivizes LLMs to produce long chains of thought (long CoT), which among other things, allows them to explore solution strategies with self-checking. This results in higher...
Demis Hassabis, CEO of Google DeepMind, laid out the clearest roadmap to AGI.
1/ AGI won’t come from scaling alone. Demis Hassabis says it’s 50% scaling, 50% innovation. Bigger models matter, but new ideas matter just as much.
2/ Today’s AI is powerful but jagged. Gold-medal level at Olympiad math. Yet still fails basic logic and consistency tests. That gap is why we’re not at AGI.
3/ The missing ingredient isn’t intelligence. It’s reliability, reasoning, and self-awareness of uncertainty. AI needs to know what it doesn’t know.
4/ Hallucinations aren’t random. They often happen because models are forced to answer when they should say “I’m not confident.”
5/ AlphaFold showed the playbook. Solve a root problem once, unlock entire industries downstream. Now DeepMind is targeting materials, fusion, and climate.
6/ Fusion is the ultimate root node. Clean, abundant energy would reshape water, food, climate, and even space travel. AI could help crack it.
7/ Language models surprised us. They understand more about the world than expected. But language alone isn’t enough.
8/ That’s why world models matter. To understand physics, space, causality, and action, AI must experience worlds, not just read about them.
9/ Simulation is the next frontier. If an AI can generate a realistic world, it likely understands its mechanics.
10/ Drop agents into those worlds and let curiosity drive learning. Now you have infinite training data, created on the fly.
11/ This could be how AI learns like humans do. Exploration first. Understanding second. Generalization last.
12/ Hassabis believes simulation may also unlock science. Weather. Biology. Materials. Even the origins of life.
13/ Why simulations matter philosophically: If you can simulate something, you’ve understood it.
14/ That leads to the deepest question. Is there anything in the universe that’s non-computable?
15/ So far, we haven’t found one. Protein folding. Go. Complex biology. All computable.
16/ Consciousness might be next. AGI could become a mirror that shows us what, if anything, is unique about the human mind.
17/ If creativity, emotion, or dreaming are computable, machines may have them too. If not, we’ll finally learn where the boundary is.
18/ AGI isn’t just a tech problem. It’s an economic, social, and philosophical one.
19/ The industrial revolution took a century. AGI may unfold in a decade. The disruption will be faster and bigger.
20/ Hassabis’ core belief: The universe runs on information. And intelligence may be the ultimate way to understand it.
YouTube
The future of intelligence | Demis Hassabis (Co-founder and CEO of DeepMind)
In our final episode of the season, Professor Hannah Fry sits down with Google DeepMind Co-founder and CEO Demis Hassabis for their annual check-in. Together, they look beyond the product launches to the scientific and technological questions that will define…
Anthropic will add 5 different starting points to its upcoming Tasks Mode: Research, Analyse, Write, Build, and Do More, with tons of granular controls.
A new sidebar for tracking tasks' progress and working with Claude's context has also been added.
TestingCatalog
Anthropic preparing new Agentic Tasks Mode for Claude
Anthropic testing Claude's Agent mode with a new interface for tasks, to introduce new modes for research, analysis, writing, and building.
Meta introduced SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts.
This is a cool model, because it has always been hard to find good scenarios that combine audio and vision where audio plays a larger role than just being "like language in VLMs, but as a sound wave instead".
Meta
SAM Audio
With SAM Audio, you can use simple text prompts to accurately separate any sound from any audio or audio-visual source.
VC firms are building their own AI tools to compete for the best startup deals. And for founders, that's changing the relationships game.
This summer, venture capitalist Aubrie Pagano snagged the chance to invest in a buzzy funding round with a major assist from AI.
For their crucial pitch meeting with a frontier science lab, Pagano brought a list of 50 high-value prospects – academics, pharma execs and former FDA leaders – and the exact route by which her firm, Alpaca VC, could connect its founders to each.
The startup made room for Alpaca to invest $1 million. It was only afterward that its founders found out that Pagano had used an agent from the firm’s proprietary AI system, known internally as Gordon, to help secure the deal.
Seemingly every VC firm has that partner (or several) who drafts LinkedIn ‘thought leadership’ posts in Claude, runs meeting notes from Granola through NotebookLM, or calculates market projections in a custom GPT.
But Alpaca, investing out of a $78 million fund, and a growing number of boutique and emerging VC firms are looking to compete – and punch above their weight with founders – by making outsized bets on building and investing through their own advanced AI tools.
These firms are fine-tuning their own models, setting up MCP servers, and managing long-running agents that automate entire processes, from back-office reporting to investment memos and content calendars.
At DVC, a $75 million fund that’s an early backer of Perplexity, AI recommendations have helped the firm write preemptive checks into some of the firm’s fastest-growing companies, like Higgsfield AI, just before revenue or valuations soared.
And at Topology Ventures, a frontier tech firm that raised a $75 million fund last year, managing partner Casey Caruso believes her internal AI CRM, called Fiber, is so good at predicting founder movements that it could raise millions in its own right.
Upstartsmedia
Deep Dive: As Smaller VC Firms Build AI Tools To Compete, Founders Should (Mostly) Benefit
Challengers like Alpaca VC, DVC and Topology Ventures are going beyond ChatGPT: “We want to give everybody their own personal AI analyst, so they can look at companies the way a partner at a16z would do.”
Gemini 3 Flash is out. The fast mode from the model picker in the Gemini app is shockingly speedy AND smart.
What an OP model. Also mind-blowing how even just Flash is competitive with the best models.
Science announced Vessel, a project focused on rethinking perfusion from the ground up, extending how long life can be sustained, and expanding what’s possible in transplantation and critical care.
Life-support technologies like ECMO can keep patients alive when the heart or lungs fail, but they aren’t designed for long-term use.
Vessel exists to close the gap between what perfusion technology is fundamentally capable of and how it is deployed in daily practice.
More about Science here.
WIRED
Former Neuralink Exec Launches Organ Preservation Effort
Science Corporation, founded by Neuralink’s first president, Max Hodak, has unveiled a prototype machine to extend the life of organs for longer periods.
Shunyu Yao, a rising star in AI agents and one of the key minds behind OpenAI’s Deep Research and Computer-Using Agent (CUA), has just been appointed Chief AI Scientist at Tencent.
Anthropic introduced a first-party plugins marketplace, making it easier to discover and install popular plugins.
Run /plugins to browse and batch install available plugins from the directory. You can install plugins at user, project, or local scope.
If you maintain a Claude Code plugin that you’d like to see in the marketplace, you can submit it to the team here.
Max users can now share guest passes with friends.
All Max users have 3 guest passes to share, and each can be redeemed for 1 week of free Pro access.
Run /passes to access your guest pass links. All 3 features + guest passes are now available. Run claude update for the latest.
Google Docs
Claude Code Plugin Submission Form
Thank you for your interest in having your plugin considered for Anthropic's Plugin Directory! This form helps us collect information about your Claude Code plugin to evaluate it for potential inclusion in our directory. We review all submissions to ensure…
It turns out that VLAs learn to align human and robot behavior as we scale up pre-training with more robot data.
In a new study at Physical Intelligence, the team explored this "emergent" human-robot alignment and found that they could add human videos without any transfer learning.
www.pi.website
Emergence of Human to Robot Transfer in Vision-Language-Action Models
Exploring how transfer from human videos to robotic tasks emerges in robotic foundation models as they scale.
Meet Gauss, the first autoformalization agent, which just completed Terry Tao & Alex Kontorovich's Strong Prime Number Theorem project in 3 weeks – an effort that had taken human experts 18+ months of partial progress. GitHub. Early access.
AI agent Gauss autoformalized the proof of the Kakeya conjecture for finite fields
The proof Gauss wrote was surprisingly efficient, and it took just 6 hours.
The Kakeya conjecture was originally posed in 1917. It was solved in 2 dimensions almost 100 years ago. But just this year in 2025, Wang & Zahl solved it for 3 dimensions.
The Kakeya conjecture for finite fields was solved in all dimensions simultaneously by Dvir in 2008.
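For reference, the standard finite-field statement (written out here for clarity; the bound is the well-known consequence of Dvir's polynomial-method argument):

```latex
% A Kakeya set over a finite field contains a full line in every direction:
\[
K \subseteq \mathbb{F}_q^n \text{ is Kakeya} \iff
\forall\, d \in \mathbb{F}_q^n \setminus \{0\}\ \ \exists\, x \in \mathbb{F}_q^n:\ 
\{\, x + t\, d : t \in \mathbb{F}_q \,\} \subseteq K.
\]
% Dvir (2008) proved every such set has full-dimensional size:
\[
|K| \;\ge\; \frac{q^n}{n!}.
\]
```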
Dvir’s proof came as a shock to the math world.
Before, both Terence Tao (Fields medal 2006) and Jean Bourgain (Fields medal 1994) had been stuck making minor improvements on the problem for years.
At the time, Dvir was just a PhD student!
GitHub
GitHub - math-inc/KakeyaFiniteFields: A complete Lean 4 formalization of the Kakeya set problem over finite fields
A complete Lean 4 formalization of the Kakeya set problem over finite fields - math-inc/KakeyaFiniteFields
Google introduced Gemma Scope 2
- Largest open release of interpretability tools (over 1 trillion parameters trained!)
- Works as a microscope to analyze all Gemma 3 models' internal activations (see the toy SAE sketch below)
- Advanced tools for analyzing chat behaviors.
Paper.
HuggingFace.
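A toy sparse-autoencoder forward pass (my own minimal sketch, not the Gemma Scope code or weights), just to show what the "microscope" does: a residual-stream activation is encoded into a sparse feature vector and decoded back, and the sparse features are what researchers inspect.

```python
import numpy as np

# Minimal SAE sketch (illustrative only; Gemma Scope 2's actual SAEs and
# transcoders are released separately): encode a model activation into a
# sparse, interpretable feature vector, then reconstruct the activation.

rng = np.random.default_rng(0)
d_model, d_features = 64, 512                 # toy sizes, not Gemma's

W_enc = rng.normal(0, 0.02, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.02, (d_features, d_model))
b_dec = np.zeros(d_model)

def sae(activation):
    features = np.maximum(activation @ W_enc + b_enc, 0.0)   # ReLU -> sparse codes
    reconstruction = features @ W_dec + b_dec
    return features, reconstruction

act = rng.normal(size=d_model)                # stand-in for a residual-stream activation
feats, recon = sae(act)
print("active features:", int((feats > 0).sum()),
      "recon error:", float(np.mean((act - recon) ** 2)))
```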
www.neuronpedia.org
Gemma Scope 2: Comprehensive Suite of SAEs and Transcoders for Gemma 3
Language Model Interpretability Team, Google DeepMind
Anthropic released Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models.
Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios.
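Bloom's real interface lives in Anthropic's repo; below is only a hypothetical sketch of the loop the post describes (specify a behavior, generate scenarios, score frequency and severity), with invented function names.

```python
# Hypothetical behavioral-eval loop (NOT Bloom's actual API; every name here is
# made up) illustrating "specify a behavior, then quantify frequency/severity".

def run_behavior_eval(behavior, generate_scenarios, run_model, grade, n=50):
    scenarios = generate_scenarios(behavior, n)          # auto-generated test prompts
    hits, severities = 0, []
    for scenario in scenarios:
        transcript = run_model(scenario)                 # model under evaluation
        verdict = grade(behavior, scenario, transcript)  # e.g., a judge-model call
        if verdict["exhibited"]:
            hits += 1
            severities.append(verdict["severity"])       # say, on a 1-10 scale
    frequency = hits / len(scenarios)
    mean_severity = sum(severities) / len(severities) if severities else 0.0
    return {"frequency": frequency, "mean_severity": mean_severity}

# Toy stubs so the sketch runs end to end:
demo = run_behavior_eval(
    behavior="sycophancy",
    generate_scenarios=lambda b, n: [f"scenario {i} probing {b}" for i in range(n)],
    run_model=lambda s: f"reply to: {s}",
    grade=lambda b, s, t: {"exhibited": len(t) % 3 == 0, "severity": 4},
    n=10,
)
print(demo)
```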
Anthropic
Introducing Bloom: an open source tool for automated behavioral evaluations
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Google introduced A2UI: Agent-to-User Interface
- Protocol for agent-driven interfaces
- Enables agents to generate interactive user interfaces
- Open source
GitHub
GitHub - google/A2UI
Contribute to google/A2UI development by creating an account on GitHub.
Researchers from U. Michigan, NYU, Princeton & U. Virginia presented Next-Embedding Prediction (NEPA).
Instead of reconstructing pixels, the model learns by predicting the next "embedding" in a visual sequence.
It outperforms complex methods, hitting 85.3% accuracy on ImageNet and excelling at segmentation, all with a simple, scalable approach.
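A hedged sketch of the core objective under assumed shapes (my simplification, not the paper's code): patches become a sequence of embeddings, and the model is trained to predict embedding t+1 from embeddings up to t, rather than reconstructing pixels.

```python
import torch
import torch.nn as nn

# Toy next-embedding prediction objective (not the official NEPA implementation):
# regress the next patch embedding from the causal prefix of embeddings.

B, T, D = 8, 16, 256                       # batch, patch-sequence length, embed dim
embeddings = torch.randn(B, T, D)          # stand-in for encoder outputs over patches

predictor = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
# Causal mask so position t only attends to positions <= t.
causal_mask = torch.triu(torch.full((T - 1, T - 1), float("-inf")), diagonal=1)

pred = predictor(embeddings[:, :-1], src_mask=causal_mask)   # predict t+1 from <= t
target = embeddings[:, 1:].detach()        # next embeddings as regression targets
loss = nn.functional.mse_loss(pred, target)
loss.backward()
print(float(loss))
```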
GitHub.
GitHub
GitHub - SihanXU/nepa: PyTorch implementation of NEPA
PyTorch implementation of NEPA. Contribute to SihanXU/nepa development by creating an account on GitHub.
Stripe Atlas 2025 Startups: Year in Review – Key Insights
A dive into Stripe's latest report on startups via their Atlas platform – it's packed with exciting trends for 2025. Here's a breakdown of the essential highlights:
1. Explosive Growth in Registrations: Startup formations surged 36% YoY. Europe led the charge with a whopping 48% increase, likely due to easier US incorporation amid local red tape. Time for EU reforms?
2. Global Teams on the Rise: Multi-national founder teams are up 79% since 2017, thanks to remote work magic. Borders are blurring – talent knows no limits.
3. Faster Monetization Than Ever: New startups hit revenue milestones quicker: Median revenue in the first 6 months jumped 39% YoY. 20% snagged their first customer within 30 days, and top performers reached $100K revenue 11% faster (around 108 days). AI and stablecoins are supercharging this.
4. Polarization in Success: While averages are up, the top 1% grew revenue 67% faster. Shoutout to rockstars like Cursor AI and Lovable for insane traction. Winners are winning bigger.
This report shows 2025 as a year of acceleration in the startup world – more companies, quicker cash, and global vibes. If you're building something, Stripe Atlas is making it easier for founders worldwide.
Stripe
Stripe Atlas startups in 2025: Year in review
2025 was a breakout year for early-stage startups, as founders generated revenue faster than ever. Three shifts stand out: customer bases are becoming more global, time-to-revenue has compressed, and founders are turning their focus to AI agents.