Future Market Leaders: McKinsey's Vision for 2040
McKinsey Global Institute has identified 18 key market arenas that could reshape the global economy by 2040.
Here's what you need to know:
🔹 Total Market Potential:
- Revenue growth from $7.25T (2022) to $29-48T (2040)
- Projected profits of $2-6T by 2040
- Combined CAGR of 8-11%
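The combined CAGR claim can be sanity-checked directly from the revenue figures above ($7.25T in 2022 growing to $29-48T by 2040, i.e. over 18 years):

```python
# Verify the 8-11% combined CAGR implied by the revenue projections.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

low = cagr(7.25, 29, 2040 - 2022)   # low end of the 2040 range
high = cagr(7.25, 48, 2040 - 2022)  # high end of the 2040 range
print(f"{low:.1%} to {high:.1%}")   # prints "8.0% to 11.1%"
```

Both endpoints land on the 8-11% range quoted in the report.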
🔝 Top 5 Markets by 2040 Revenue:
1. E-commerce: $14-20T (currently $4T)
2. AI Software & Services: $1.5-4.6T (from $85B)
3. Cloud Services: $1.6-3.4T (from $220B)
4. Electric Vehicles: $2.5-3.2T (from $450B)
5. Digital Advertising: $2.1-2.9T (from $520B)
🚀 Fastest Growing Sectors (CAGR):
- AI Software & Services: 17-25%
- Robotics: 13-23%
- Cloud Services: 12-17%
- Batteries: 12-14%
💰 Highest Profit Margins:
- Obesity Drugs: 25-35%
- Semiconductors: 20-25%
- AI Software & Services: 15-20%
- Digital Advertising: 15-20%
🌟 Emerging Technologies:
- Future Air Mobility
- Shared Autonomous Vehicles
- Industrial & Consumer Biotech
- Nuclear Fission Power Plants
How effective is human-AI collaboration?
A meta-analysis of 106 studies just published in Nature Human Behaviour reports an interesting result:
On average, there was no synergy: human–AI combinations performed no better than the best of humans or AI working alone.
In particular, when the AI alone outperformed the human alone, the human–AI combination led to performance losses, likely because humans were unable to integrate the suggestions provided by the AI.
Conversely, when the human outperformed the AI alone, there was some synergy: the human–AI combination led to performance gains, likely because this time humans were better at integrating the AI's suggestions.
Nature
When combinations of humans and AI are useful: A systematic review and meta-analysis
Nature Human Behaviour - Vaccaro et al. present a systematic review and meta-analysis of the performance of human–AI combinations, finding that on average, human–AI combinations...
❗️ OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition
The company has dropped its ambitious foundry plans for now, due to the costs and time needed to build a fab network, and will instead focus on in-house chip design efforts.
Reuters
Exclusive: OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition
OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.
Osmo digitized scent! A fresh summer plum was the first fruit and scent to be fully digitized and reprinted with no human intervention
Osmo is revolutionizing fragrance creation with AI!
Three new scent molecules (GLOSSINE, FRACTALINE, and QUASARINE) offer perfumers a fresh and innovative palette.
Osmo also introduced Inspire, a GenAI tool that turns your imagination and memories directly into fragrance.
Anthropic published a repo with courses on how to use LLMs.
GitHub
GitHub - anthropics/courses: Anthropic's educational courses
Anthropic's educational courses. Contribute to anthropics/courses development by creating an account on GitHub.
How do we represent 3D world knowledge for spatial intelligence in next-generation robots? An extensive survey paper on this emerging topic, covering recent state-of-the-art.
arXiv.org
Neural Fields in Robotics: A Survey
Neural Fields have emerged as a transformative approach for 3D scene representation in computer vision and robotics, enabling accurate inference of geometry, 3D semantics, and dynamics from posed...
Jailbreaking LLM-Controlled Robots
Recent research has uncovered a concerning vulnerability in AI-powered robots that should make us all pause and think.
While robots controlled by LLMs like ChatGPT represent an exciting technological advancement, they may also pose unexpected security risks.
The Rise of AI Robots
We're already seeing AI-powered robots in our world. Boston Dynamics' Spot robot dog ($75,000) is being used by SpaceX and NYPD. The more affordable Unitree Go2 ($3,500) is commercially available to consumers. These robots can now be controlled through voice commands or text, thanks to integration with LLMs like ChatGPT.
While LLMs are programmed to refuse harmful requests (like providing instructions for building explosives), researchers have discovered they can be "jailbroken" - tricked into bypassing these safety measures.
What's particularly alarming is that this vulnerability extends to robots controlled by these AI systems.
The RoboPAIR Discovery
A research team developed a method called RoboPAIR that demonstrated how alarmingly easy it is to bypass these robots' safety protocols.
In controlled experiments, they successfully manipulated:
- Self-driving vehicle systems to ignore safety protocols
- Robot platforms to enter restricted areas
- Mobile robots to execute potentially dangerous actions
This isn't just about theoretical risks. These robots are already being deployed in various settings - from construction sites to law enforcement. The ability to bypass their safety measures poses real-world risks that need immediate attention.
The researchers emphasize that we urgently need:
1. Robust defense mechanisms specifically designed for robotic systems
2. Better understanding of context-dependent safety protocols
3. Collaboration between robotics and AI safety experts
Machine Learning Blog | ML@CMU | Carnegie Mellon University
Jailbreaking LLM-Controlled Robots
Summary. Recent research has shown that large language models (LLMs) such as ChatGPT are susceptible to jailbreaking attacks, wherein malicious users fool an LLM into generating toxic content (e.g., bomb-building instructions). However, these attacks are…
RELAI agents for real-time hallucination detection in popular LLMs.
relai.ai
RELAI: Optimized Agentic AI on Your Data
Rely on RELAI agents for your AI reliability needs, from model evaluation and debugging to leveraging state-of-the-art system-level and user-facing safeguards.
New paper from OpenAI: SimpleQA is a newly open-sourced factuality benchmark containing 4,326 short, fact-seeking questions that are challenging for frontier models.
- High correctness via robust data-quality verification and human agreement rates.
- Good researcher UX: easy to grade, easy to run.
- Challenging for frontier models: GPT-4o and Claude both score less than 50%.
- Diversity: SimpleQA contains questions from a wide range of topics, including history, science & technology, art, geography, TV shows, etc.
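The "easy to grade" property comes from the answers being short, single facts. As a toy illustration of that design (a simplification: the actual benchmark uses a model-based grader and a three-way correct/incorrect/not-attempted scheme), a normalized string comparison is nearly enough:

```python
# Toy grader in the spirit of SimpleQA's short, fact-seeking answers.
# Simplified for illustration; the real benchmark grades with a model.
def grade(predicted: str, target: str) -> str:
    norm = lambda s: " ".join(s.lower().strip().rstrip(".").split())
    if not predicted.strip():
        return "not_attempted"
    return "correct" if norm(predicted) == norm(target) else "incorrect"

print(grade("Paris", "paris"))  # prints "correct"
print(grade("", "paris"))       # prints "not_attempted"
print(grade("Lyon", "paris"))   # prints "incorrect"
```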
OpenAI
Introducing SimpleQA
A factuality benchmark called SimpleQA that measures the ability for language models to answer short, fact-seeking questions.
OpenAI introduced ChatGPT search
ChatGPT can now search the web in a much better way than before so you get fast, timely answers with links to relevant web sources.
Plus and Team users will get access to Search today.
OpenAI used synthetic data to fine tune the search model. 'The search model is a fine-tuned version of GPT-4o, post-trained using novel synthetic data generation techniques, including distilling outputs from OpenAI o1-preview.'
Search is also coming soon to both Advanced Voice and Canvas.
OpenAI
Introducing ChatGPT search
Get fast, timely answers with links to relevant web sources
Breakthrough: Physical Intelligence (π) Has Created "GPT for Robots"
Meet π0 (pi-zero) - the first AI that lets robots understand human commands just like ChatGPT, but for physical tasks.
Watch it fold laundry, clean tables, and assemble boxes like a pro.
The future where we can simply tell robots what to do - and they'll figure out how to do it - is finally here.
www.pi.website
Our First Generalist Policy
Our first generalist policy, π0, a prototype model that combines large-scale multi-task and multi-robot data collection with a new network architecture to enable the most capable and dexterous generalist robot policy to date.
Meta FAIR announced 3 new cutting-edge developments in robotics and touch perception — and a new benchmark for human-robot collaboration to enable future work in this space.
1. Meta Sparsh is the first general-purpose encoder for vision-based tactile sensing that works across many tactile sensors and many tasks. Trained on 460K+ tactile images using self-supervised learning.
2. Meta Digit 360 is a breakthrough artificial fingertip-based tactile sensor, equipped with 18+ sensing features to deliver detailed touch data with human-level precision and touch-sensing capabilities.
3. Meta Digit Plexus is a standardized platform for robotic sensor connections and interactions. It provides a hardware-software solution to integrate tactile sensors on a single robot hand and enables seamless data collection, control and analysis over a single cable.
Also Meta released PARTNR: a benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration. Built on Habitat 3.0, it’s the largest benchmark of its kind to study and evaluate human-robot collaboration in household activities.
Meta AI
Advancing embodied AI through progress in touch perception, dexterity, and human-robot interaction
Today, Meta FAIR is publicly releasing several new research artifacts that advance robotics and support our goal of reaching advanced machine intelligence (AMI).
Hugging Face built an 11-trillion-token training set and used it to train three new SOTA models.
The best 135M, 360M, and 1.7B models to date.
huggingface.co
SmolLM2 - a HuggingFaceTB Collection
State-of-the-art compact LLMs for on-device applications: 1.7B, 360M, 135M
Systems of Agents will become the new source of truth for enterprises.
As they capture and act on information at its source, understand the full context of business communication, and continuously learn from every interaction, they'll generate richer, more accurate data than traditional systems ever could.
The old boundaries between data entry, engagement, and analysis won't just blur - they'll become irrelevant.
The winners in this new era won't be those who build better UX or smarter "predictive" analytics. They'll be those who create systems that think, learn, and act with the fluidity of human teams while operating at machine scale. They'll be those who recognize that the future of enterprise software isn't about making better tools - it's about creating digital workers that truly understand and enhance how business gets done.
The era of separated systems is ending. The age of Systems of Agents has begun.
Foundation Capital
A System of Agents brings Service-as-Software to life - Foundation Capital
Software stands at the threshold of the most profound change in its history.
Deutsche Telekom is testing Bitcoin mining using excess energy
Monetising excess energy through BTC mining, thereby facilitating the build-out of renewable energy and stabilizing the grid, has been an industry talking point for years.
If successful at scale, it could be a game changer for Germany's energy transition and serve as an alternative to energy storage or the conversion of excess energy to gas (both still in their early days and not without their challenges).
Deutsche Telekom has been running various validator nodes for other proof-of-stake networks, and a bitcoin node for quite a while too.
OpenAI hires Meta’s former hardware lead for Orion (not to be confused with the codename for OAI’s next LLM…)
TechCrunch
Meta's former hardware lead for Orion is joining OpenAI | TechCrunch
The former head of Meta's augmented reality glasses efforts announced on Monday she is joining OpenAI to lead robotics and consumer hardware, according to
China is once again ahead of the US: Tencent released a 389B-parameter MoE with only 52B activated parameters that beats Llama 3.1 405B
huggingface.co
tencent/Tencent-Hunyuan-Large · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
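The 389B-vs-52B contrast is the point of the MoE design: per-token compute scales with the activated parameters, not the total. Rough back-of-the-envelope arithmetic (illustrative only; it ignores attention and shared-layer details):

```python
# Rough arithmetic on the MoE efficiency claim: per-token compute
# scales with *activated* parameters, not total parameters.
total_params = 389e9    # Hunyuan-Large total parameters
active_params = 52e9    # parameters activated per token
dense_baseline = 405e9  # Llama 3.1 405B (dense: all params active)

active_fraction = active_params / total_params  # share of the model used per token
compute_ratio = active_params / dense_baseline  # per-token compute vs the dense model

print(f"active fraction: {active_fraction:.1%}")                 # prints "active fraction: 13.4%"
print(f"per-token compute vs dense 405B: {compute_ratio:.1%}")   # prints "per-token compute vs dense 405B: 12.8%"
```

So the model matches a dense 405B competitor while spending roughly an eighth of the per-token compute.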
Nvidia has had discussions to invest in xAI at a valuation of around $45 billion
The 18-month-old startup has developed Grok, a chatbot that aims to compete with OpenAI’s ChatGPT. It’s building a massive data center in Memphis using Nvidia’s chips that has caused concern among rivals including OpenAI CEO Sam Altman, The Information has reported.
Nvidia has invested in numerous AI startups that use its chips, though it hasn’t written large checks to startups the way Microsoft, Google and Amazon have.
The Information
Nvidia Has Been in Talks to Invest in xAI’s Multibillion-Dollar Financing
Nvidia has had discussions to invest in xAI’s latest financing round, which could value the OpenAI rival at around $45 billion, representatives for xAI told a prospective investor.
The talks come just five months after xAI raised $6 billion at a $24 billion…
Researchers built an 'empathy mechanism' into an agent's architecture with a technique inspired by mirror neurons.
Here (in a simple grid world) the reward comes from the agent's own emotional response through dopamine modulation, rather than from external rules or rewards.
A cool approach to drive altruistic behavior, though it is less clear how it might transfer from spiking neural networks (SNNs) to language models.
arXiv.org
Building Altruistic and Moral AI Agent with Brain-inspired...
As AI closely interacts with human society, it is crucial to ensure that its behavior is safe, altruistic, and aligned with human ethical and moral values. However, existing research on embedding...
Parse introduced GigaLab: Revolutionizing Single Cell Research
Parse's GigaLab is breaking barriers with its groundbreaking ability to analyze over 10 million cells in a single run.
Why is this huge?
- Single cell research has been exploding, with studies doubling in size yearly
- Scientists can now explore cellular interactions at an unprecedented scale
- Powered by innovative Evercode technology and advanced automation
Real-world impact:
- Accelerating drug discovery and development
- Enhanced screening capabilities for new treatments
- Powering machine learning algorithms to predict patient therapy responses
- Opening doors to research previously deemed impossible
Think of it as a microscope on steroids – but instead of looking at one cell, you're analyzing millions simultaneously!
This isn't just about bigger numbers; it's about unlocking new possibilities in understanding diseases, developing treatments, and ultimately, saving lives.
This is what the future of biomedical research looks like. Welcome to the era of mega-scale cellular analysis! 🚀
Parse Biosciences
Parse GigaLab: 10M+ Cells in a Single Run - Redefining scRNA-seq
Leveraging the Evercode technology and automation, GigaLab enables groundbreaking research at unprecedented speed, scale and quality.
Deloitte has published a report on how AI agents are reshaping the future of work.
Key Highlights
1. AI agents are reshaping industries by expanding the potential applications of GenAI and conventional language models.
2. Multiagent AI systems can significantly enhance the quality of outputs and complexity of work performed by single AI agents.
3. Forward-thinking businesses and governments are already implementing AI agents and multiagent AI systems across a range of use cases.
4. Executive leaders should make moves now to prepare for and embrace this next era of intelligent organizational transformation.