Arc Institute introduced the world's largest single-cell dataset
They've launched the Arc Virtual Cell Atlas, a growing resource for computation-ready single-cell measurements.
As the initial contributions, Vevo Therapeutics has open-sourced Tahoe-100M, the world's largest single-cell dataset, mapping 60,000 drug-cell interactions, and announced scBaseCamp, the first RNA sequencing data repository curated using AI agents. Combined, the release includes data from over 300 million cells.
arcinstitute.org
Virtual Cell Atlas | Arc Institute
Arc Institute is an independent nonprofit research organization headquartered in Palo Alto, California.
#DeepSeek makes two major announcements
1. Starting today, DeepSeek is offering significant discounts on their API Platform during off-peak hours (16:30-00:30 UTC daily):
• DeepSeek-V3: 50% OFF
• DeepSeek-R1: Massive 75% OFF
This means you can access powerful AI models at a fraction of the cost during these hours. For example, DeepSeek-R1 output cost drops from $2.19 to just $0.55 per 1M tokens!
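The discount arithmetic is easy to verify; a minimal sketch, using only the R1 price and discount quoted in the post:

```python
# Off-peak discount arithmetic, using the figures quoted in the post.
r1_output_price = 2.19      # USD per 1M output tokens, standard rate
r1_discount = 0.75          # DeepSeek-R1: 75% off during off-peak hours

r1_off_peak = r1_output_price * (1 - r1_discount)
print(f"R1 off-peak output: ${r1_off_peak:.4f} per 1M tokens")  # ≈ $0.5475
```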
2. DeepSeek has also released DeepGEMM - an impressive FP8 GEMM library that supports both dense and MoE GEMMs, powering their V3/R1 models.
Key features:
- Up to 1350+ FP8 TFLOPS on Hopper GPUs
- Lightweight with no heavy dependencies
- Fully Just-In-Time compiled
- Core logic at just ~300 lines of code
- Outperforms expert-tuned kernels on most matrix sizes
- Supports dense layout and two MoE layouts
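To get a feel for the headline number: a dense M×N×K GEMM costs 2·M·N·K FLOPs, so at the quoted 1350 TFLOPS an 8192³ multiply (the problem size here is illustrative, not from the release) finishes in under a millisecond:

```python
# Time for one dense FP8 GEMM at DeepGEMM's quoted peak Hopper throughput.
M = N = K = 8192                   # illustrative square problem size
flops = 2 * M * N * K              # one multiply-add counted as 2 FLOPs
peak_tflops = 1350                 # figure quoted for Hopper GPUs
seconds = flops / (peak_tflops * 1e12)
print(f"{seconds * 1e3:.2f} ms")   # ~0.81 ms
```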
This is huge: in Oklahoma, a US state, a bill has passed a House of Representatives committee and moved to a full floor vote.
The bill allows the state to invest up to 10% of public funds in BTC or digital assets with a market value of more than $500 billion.
Good news for devs: Anthropic shipped a more token-efficient tool use implementation for 3.7 Sonnet that uses on average 14% fewer tokens under the hood and shows marked improvement in tool use performance.
Use this beta header: "token-efficient-tools-2025-02-19"
Anthropic
Token-efficient tool use (beta) - Anthropic
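Wiring that header into a raw Messages API request looks roughly like this; the model name and the tool schema are illustrative placeholders, not from the announcement:

```python
import json

# Hedged sketch: opting into token-efficient tool use via the beta header.
headers = {
    "x-api-key": "YOUR_API_KEY",                           # placeholder
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "token-efficient-tools-2025-02-19",  # header from the post
    "content-type": "application/json",
}
body = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 1024,
    "tools": [{
        "name": "get_weather",                             # illustrative tool
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
payload = json.dumps(body)
# POST `payload` with these headers to https://api.anthropic.com/v1/messages
print(len(payload) > 0)
```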
New announcements from #DeepSeek: Optimized Parallelism Strategies
1. DualPipe - a bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training.
2. EPLB - an expert-parallel load balancer for V3/R1.
3. Profiling data for analyzing computation-communication overlap in V3/R1.
GitHub
GitHub - deepseek-ai/DualPipe: A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek…
A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. - deepseek-ai/DualPipe
Amazon announced they have developed their own quantum chip: the Ocelot.
This follows Microsoft's reveal last week of their Majorana 1.
'We believe that scaling Ocelot to a full-fledged quantum computer capable of transformative societal impact would require as little as one-tenth as many resources as common approaches, helping bring closer the age of practical quantum computing.'
'We believe that Ocelot's architecture, with its hardware-efficient approach to error correction, positions us well to tackle the next phase of quantum computing: learning how to scale.'
Paper.
OpenAI will soon introduce the new GPT-4.5 model. Here's what is known about it.
"GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM, improving on GPT-4's computational efficiency by more than 10x."
It offers:
— increased world knowledge
— improved writing ability
— refined personality
— a 2-7% lift over GPT-4o on SWE-Bench
GPT-4.5 is out! Knowledge still stuck in October 2023; it's not going to blow your mind, but it might befriend you.
It's more like a personality, communication, and creativity upgrade than a huge intelligence leap. It's like OpenAI is pivoting its base model from "bland assistant" to "AI bestie."
What it does do well:
- OpenAI says it scores 64% on SimpleQA (double GPT-4's score)
- Much better writing with cleaner, better structured, more human-like prose
- Genuinely warmer and more emotionally intelligent (gave me some good advice!)
- Less robotic, more opinionated responses
4.5 is more extroverted, agreeable, and less neurotic than 4o.
It's sometimes worse at following instructions, perhaps because it's less sycophantic and more creative.
The model received approximately 10x more computational resources during pre-training compared to GPT-4. Training occurred simultaneously across multiple data centers.
Pricing: $75 per million input tokens and $150 per million output tokens, 15-30x more expensive than GPT-4o! This pricing reflects the model's scale and resource requirements.
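The "15-30x" multiplier checks out, assuming GPT-4o list prices of $2.50 input / $10.00 output per 1M tokens (the GPT-4o figures are an assumption, not from the post):

```python
# Price ratio of GPT-4.5 to GPT-4o, per 1M tokens.
gpt45 = {"input": 75.00, "output": 150.00}   # USD, from the post
gpt4o = {"input": 2.50, "output": 10.00}     # assumed GPT-4o list prices
ratios = {k: gpt45[k] / gpt4o[k] for k in gpt45}
print(ratios)   # input: 30x, output: 15x
```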
Performance and context: generation is noticeably slower than its predecessors; context length remains at 128K tokens. The knowledge cutoff stays at October 2023, which is disappointing for many users.
Functionality: supports Canvas, search, and file uploads. Currently lacks multimodal features like voice mode or video.
Availability:
Already available to Pro users and developers of all API tiers
Coming to Plus subscribers ($20) next week
OpenAI plans to add "tens of thousands of GPUs" next week to expand access
Independent Benchmark Results:
Aider Polyglot Coding Benchmark: Recent tests show that GPT-4.5 Preview significantly outperforms its predecessor but lags behind specialized models:
Claude 3.7 Sonnet with thinking mode (32k tokens) — 65%
Claude 3.7 Sonnet without thinking mode — 60%
DeepSeek V3 — 48%
GPT-4.5 Preview — 45%
ChatGPT-4o — 27%
GPT-4o — 23%
OpenAI
Introducing GPT-4.5
We’re releasing a research preview of GPT‑4.5—our largest and best model for chat yet. GPT‑4.5 is a step forward in scaling up pre-training and post-training.
#DeepSeek built a new file system to train their AI models more efficiently
Fire-Flyer File System (3FS) - a parallel file system that utilizes the full bandwidth of modern SSDs and RDMA networks.
- 6.6 TiB/s aggregate read throughput in a 180-node cluster
- 3.66 TiB/min throughput on GraySort benchmark in a 25-node cluster
- 40+ GiB/s peak throughput per client node for KVCache lookup
- Disaggregated architecture with strong consistency semantics.
Use cases: training data preprocessing, dataset loading, checkpoint saving/reloading, embedding vector search, and KVCache lookups for inference in V3/R1.
3FS
Smallpond - data processing framework on 3FS.
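A quick back-of-envelope on the aggregate figure: 6.6 TiB/s across 180 nodes works out to roughly 37.5 GiB/s of read throughput per node:

```python
# Per-node read throughput implied by the 180-node aggregate figure.
aggregate_tib_s = 6.6
nodes = 180
per_node_gib_s = aggregate_tib_s * 1024 / nodes   # 1 TiB = 1024 GiB
print(f"{per_node_gib_s:.1f} GiB/s per node")     # 37.5 GiB/s
```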
GitHub
GitHub - deepseek-ai/3FS: A high-performance distributed file system designed to address the challenges of AI training and inference…
A high-performance distributed file system designed to address the challenges of AI training and inference workloads. - GitHub - deepseek-ai/3FS: A high-performance distributed file system design...
Reasoning models lack atomic thought: unlike humans, who reason in independent units, they carry full histories through every step.
Researchers introduced Atom of Thoughts (AOT), which lifts gpt-4o-mini to 80.6% F1 on HotpotQA, surpassing o3-mini and DeepSeek-R1.
Code.
Nvidia presented Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids
Learning humanoid dexterous manipulation using sim-to-real RL, achieving robust generalization and high performance without the need for human demonstrations
#DeepSeek introduced DeepSeek-V3/R1 Inference System Overview
Optimized throughput and latency via:
1. Cross-node EP-powered batch scaling
2. Computation-communication overlap
3. Load balancing
Statistics of DeepSeek's Online Service:
- 73.7k/14.8k input/output tokens per second per H800 node
- Cost profit margin 545%
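Reading those figures: a 545% cost profit margin means revenue of about 6.45x the serving cost, and the node processes roughly five input tokens for every output token. A sketch:

```python
# What "cost profit margin 545%" implies for revenue vs. serving cost.
margin = 5.45                       # 545%
cost = 1.0                          # normalize serving cost to 1
revenue = cost * (1 + margin)
print(f"revenue is {revenue:.2f}x cost")      # 6.45x

# Input-to-output token ratio per H800 node, from the post.
in_tps, out_tps = 73.7e3, 14.8e3
print(f"input:output ratio is about {in_tps / out_tps:.1f}:1")  # ~5:1
```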
GitHub
open-infra-index/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md at main · deepseek-ai/open…
Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation - deepseek-ai/open-infra-index
New BIG-Bench Extra Hard benchmark released by Google DeepMind: average accuracy for general-purpose models is 9.8%, and 44.8% for reasoning models.
arXiv.org
BIG-Bench Extra Hard
Large language models (LLMs) are increasingly deployed in everyday applications, demanding robust general reasoning capabilities and diverse reasoning skillset. However, current LLM reasoning...
Deutsche Telekom and Perplexity announced new ‘AI Phone’ priced at under $1K
Deutsche Telekom said that it is building an “AI Phone,” a low-cost handset created in close collaboration with Perplexity, along with Picsart and others, plus a new AI assistant app it’s calling “Magenta AI.”
TechCrunch
Deutsche Telekom and Perplexity announce new 'AI Phone' priced at under $1K | TechCrunch
It was inevitable that this year at MWC in Barcelona, at least one carrier would announce a major effort at building a smartphone with a top AI company.
ReSearch: Teaching LLMs to Make Better Decisions Through Search
Baichuan AI has unveiled an exciting open-source project called ReSearch.
This innovative system teaches Large Language Models to improve their reasoning capabilities by actively searching for information when needed.
How ReSearch Works:
ReSearch combines Reinforcement Learning (RL) with Retrieval-Augmented Generation (RAG) to empower LLMs with a crucial skill: determining when to search for external information.
Similar to how humans look up facts when uncertain, these enhanced models learn to:
- Identify knowledge gaps requiring external information
- Formulate effective search queries
- Execute multi-step, multi-hop searches for complex problems
- Integrate search results into their reasoning process
What makes this approach particularly impressive is that the model learns these search patterns without direct supervision.
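The reason/search loop can be sketched as follows. This is a minimal illustration of the interleaved retrieve-then-reason pattern, not code from the ReSearch repository; all names here are hypothetical, and the toy policy/retriever stand in for the trained LLM and a real search backend:

```python
from typing import Callable

def research_loop(question: str,
                  policy: Callable[[str], str],
                  search: Callable[[str], str],
                  max_hops: int = 4) -> str:
    """Interleave reasoning and retrieval: the policy decides at each
    step whether to emit a search action or a final answer."""
    context = f"Question: {question}"
    for _ in range(max_hops):
        step = policy(context)                     # model picks next action
        if step.startswith("SEARCH:"):             # knowledge gap -> query
            query = step[len("SEARCH:"):].strip()
            results = search(query)                # external retrieval (RAG)
            context += f"\nSearched '{query}' -> {results}"
        else:
            return step                            # final answer
    return policy(context + "\nAnswer now.")

# Toy stand-ins for the trained policy and the retriever:
def toy_policy(ctx: str) -> str:
    if "Searched" not in ctx:                      # no evidence yet -> search
        return "SEARCH: capital of France"
    return "Paris"                                 # evidence present -> answer

def toy_search(query: str) -> str:
    return "The capital of France is Paris." if "France" in query else ""

print(research_loop("What is the capital of France?", toy_policy, toy_search))
# → Paris
```

In the actual system, RL reward shapes when the policy chooses to search, rather than hand-written rules like the toy policy above.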
GitHub
GitHub - Agent-RL/ReCall: ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning & ReCall: Learning to Reason…
ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning & ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning - Agent-RL/ReCall
Sophgo has introduced the first RISC-V servers that support #DeepSeek R1 models (1.5B to 70B)
They can do 11.8 tokens/s on the 70B model.
The SRA3-40 computing server uses Sophgo's latest SG2044 64-core server CPU.
Sophgo also released the SRB3-40 storage server and SRM3-40 convergence server based on the SG2044.
Ithome
Sophgo launches the SRA3-40: the world's first RISC-V many-core server supporting DeepSeek - IT之家
The SRA3-40 is a computing server based on the SG2044, a new-generation server-grade 64-core RISC-V processor developed by Sophgo's SOPHON team.
A huge VLM release from Cohere For AI just landed
Aya-Vision is a new VLM family based on SigLIP and Aya, and it outperforms many larger models.
> 8B and 32B models covering 23 languages, and two new benchmark datasets
> supported by HF transformers from the get-go
huggingface.co
Cohere Labs Aya Vision - a CohereLabs Collection
Aya Vision is a state-of-the-art family of vision models that brings multimodal capabilities to 23 languages.
Forwarded from kurilo.md (Dmitri)
Looking for exceptionally strong engineers.
iOS (native, swift, obj-c)
Backend (GCP, Node.js, NestJS, Nx, Kubernetes)
Location: Europe. Remote is OK.
DM for more info @masterrr
Products I'm hiring for:
https://bereal.com/
https://carrotcare.health/
carrotcare.health
Organise your blood test data | Carrot Care
Cohere released Aya Vision on Hugging Face
Aya Vision outperforms the leading open-weight models in multilingual text generation and image understanding.
In its parameter class, Aya Vision 8B achieves the best performance in combined multilingual multimodal tasks, outperforming Qwen2.5-VL 7B, Gemini Flash 1.5 8B, Llama-3.2 11B Vision, and Pangea 7B by up to 70% win rates on AyaVisionBench and 79% on m-WildVision.
Aya Vision 32B sets a new frontier in multilingual vision open-weights models, outperforming Llama-3.2 90B Vision, Molmo 72B and Qwen2-VL 72B by up to 64% win rates on AyaVisionBench and 72% win rates on m-WildVision.
huggingface.co
Cohere Labs Aya Vision - a CohereLabs Collection
Aya Vision is a state-of-the-art family of vision models that brings multimodal capabilities to 23 languages.