Can an organism understand the code it is programmed in? Humans are getting close to this with new Generative AI models trained directly on biological data.
Anyone reading this post is programmed by the biological code - DNA, RNA, and Proteins.
With LLMs now being trained directly on biological code, we are rapidly moving towards empowering ourselves, as a species, with the toolset to decipher our own programming language better.
So, how exactly are LLMs trained directly on biological data? Let's take protein data as an example, but the same paradigm applies to DNA or RNA.
This is a sneak peek into work at Converge Bio. Here are the five steps:
1. Assemble a massive protein sequence database: With genome sequencing becoming cheaper every year, we now have billions of publicly available protein sequences for building these databases.
2. Tokenize the protein database: In this step, we build a dictionary of the "words" of the protein language. These words are called tokens, and they are the atomic elements that LLMs learn from. There is a huge amount of research we and others are doing on how to best divide the protein language into tokens.
3. Unsupervised Generative Pre-Training (GPT): Train a transformer-based model by hiding part of the sequence and training the model to fill in the missing tokens (see the sketch after this list). We now have a model that deeply understands the statistical distribution of information in our massive database. At this stage, the trained model is often referred to as a foundation model.
4. Supervised multi-task training: We now train the foundation model to understand not only the statistical distribution of information in the data but also how that information translates into real-world biological traits. This is done by training the model on any trusted labeled dataset you can get your hands on that connects a protein sequence with a biological outcome. In the protein context, a few examples are protein structures, protein annotations, and experimental binding-affinity data. AI research shows that with this paradigm, the model becomes "smarter" when introduced to multiple diverse tasks (similar to a child learning new skills).
5. Fine-tuning: Given a new biological question, you can now fine-tune the model with a relatively small dataset and use it to predict complex biological interactions, explain the model's decision in a protein sequence context, and generate novel and better-performing proteins.
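To make steps 2 and 3 concrete, here is a minimal sketch of per-residue tokenization plus masked-token pre-training on a protein sequence. The vocabulary, masking rate, and model sizes are illustrative assumptions, not Converge Bio's actual pipeline, and real systems add positional encodings and train on billions of sequences.

```python
# Toy sketch: tokenize a protein sequence (step 2) and run one masked-token
# pre-training step (step 3). All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Step 2: a simple per-residue tokenizer over the 20 amino acids plus special
# tokens (production systems may instead learn multi-residue tokens).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
vocab = {tok: i for i, tok in enumerate(["<pad>", "<mask>"] + list(AMINO_ACIDS))}

def tokenize(seq: str) -> torch.Tensor:
    return torch.tensor([vocab[aa] for aa in seq])

# Step 3: a small transformer encoder trained to recover the hidden tokens.
class ProteinMLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 128, layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(tokens)))

model = ProteinMLM(len(vocab))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

seq = tokenize("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").unsqueeze(0)  # toy sequence
mask = torch.rand(seq.shape) < 0.15                     # hide ~15% of positions
corrupted = seq.masked_fill(mask, vocab["<mask>"])

logits = model(corrupted)
loss = F.cross_entropy(logits[mask], seq[mask])         # predict only masked tokens
loss.backward()
optimizer.step()
```

Steps 4 and 5 typically reuse the same backbone, swapping the masked-token head for task-specific heads trained on the labeled datasets described above.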
A neural modeling tour de force.
New paper "Towards enhanced functionality of vagus neuroprostheses through in silico optimized stimulation"
Paper.
Main contributions:
1. Researchers developed a realistic and anatomically accurate #model of the vagus nerve.
2. Developed #computational methods to make simulations efficient (from days to minutes of computation).
3. Devised a method using #physiological experiments to functionalize the models in vivo during animal experiments.
4. Optimized #neurosimulation in silico, reducing side effects related to laryngeal contractions by ~70% when using VNS to reduce heart rate.
5. Shared the entire modeling framework here for public reuse and developed an online platform to showcase the models.
This multidisciplinary work features histological data, computational modeling, and animal experiments.
New paper "Towards enhanced functionality of vagus neuroprostheses through in silico optimized stimulation"
Paper.
A main contributions:
1. Researchers developed a realistic and anatomically accurate #model of the vagus nerve.
2. Developed #computational methods to make simulations efficient (from days to minutes of computation).
3. Devised a method using #physiological experiments to functionalize the models in vivo during animal experiments.
4. Optimized #neurosimulation in silico reducing side effects related to laryngeal contractions by ~70% when using VNS to reduce heart rate.
5. Shared the entire modeling framework here for public reuse and developed an online platform to showcase the models.
This multidisciplinary work, featuring histological data, computational modeling and animal experiments.
Zenodo
VaStim - Towards enhanced functionality of vagus neuroprostheses through in-silico optimized stimulation
The online platform showcasing the computational models can be accessed at https://neuroeng-hen.ethz.ch/online/
Machines have surpassed humans in empathy.
New evidence: AI beat humans in detecting emotions and giving support. As long as people didn't know AI created the messages, they felt more heard.
It's not that AI is so good. It's that we often fail to use our emotional intelligence.
The future of robotics is in your hands. Literally. A new paper: R+X
A person records everyday activities while wearing a camera. A robot passively learns those skills.
No labels, no training.
Robot Learning Lab
R + X
R + X - The Robot Learning Lab at Imperial College London
Starlink is now operating on over 1,000 aircraft, said Elon Musk.
Answer.AI released gpu.cpp
This is a really exciting project which provides a device-independent way to use GPU compute. No more writing separate code for CUDA, AMD, Mac, and Intel GPUs!
GitHub.
Answer.AI
gpu.cpp: portable GPU compute for C++ with WebGPU - Answer.AI
Practical AI R&D
OpenAI to further block access by mainland China, Hong Kong-based developers
The move is set to deal a blow to Chinese companies developing services based on OpenAI's large language models.
A number of AI start-ups in China are building apps based on OpenAI's large models, which also generate revenue for OpenAI, the person said, adding that if OpenAI strengthens its regulations, Chinese developers will have to turn to local alternatives.
Zhipu AI, one of China's top generative AI start-ups, said it would help affected developers transfer to its platform.
For more on startups and open-source LLMs in China, see.
South China Morning Post
OpenAI curbs access by mainland China, Hong Kong-based developers
The move is set to deal a blow to Chinese companies developing services based on OpenAI's large language models.
Google presented the first AI to solve International Mathematical Olympiad problems at a silver-medalist level.
It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of the previous system.
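"Formal reasoning" here means working in a proof assistant such as Lean, where every step is machine-checked. Below is a toy Lean 4 theorem, orders of magnitude simpler than an IMO problem, just to show what a formal statement and proof look like:

```lean
-- A toy Lean 4 statement and machine-checked proof; AlphaProof searches for
-- proofs of far harder statements in this same formal language.
theorem sum_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```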
Google DeepMind
AI achieves silver-medal standard solving International Mathematical Olympiad problems
Breakthrough models AlphaProof and AlphaGeometry 2 solve advanced reasoning problems in mathematics
OpenAI is entering the search market. 10,000 test users will get early access.
They're testing SearchGPT, a temporary prototype of new AI search features.
OpenAI
SearchGPT is a prototype of new AI search features
We're testing SearchGPT, a temporary prototype of new search features that give you fast and timely answers with clear and relevant sources.
Unlock the Future of Mobile AI!
Salesforce introduced MobileAIBench, a new open-source framework for assessing the mobile-readiness of your LLMs and LMMs.
Quickly and easily test your models on a variety of benchmarks spanning NLP, multimodality, and trust & safety. Using the iOS app, test your model's on-device performance, such as memory consumption and latency.
Code.
arXiv.org
MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases
The deployment of Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices has gained significant attention due to the benefits of enhanced privacy, stability, and...
MintNeuro will be making its first function-specific integrated circuits available to selected partners for evaluation in the coming months.
These compact, low-power mixed-signal integrated circuits are designed specifically with implantable closed loop #Neurotech and #Bioelectronics applications in mind.
mintneuro
MintNeuro launches EXPLORE - mintneuro
New paper on end-to-end deep learning for relational databases
RDL learns directly on structured data across multiple tables, eliminates the need for feature engineering, and extends AI use cases to personalization, fraud detection, forecasting, and more (a toy sketch follows the list below):
- Deep representation learning across multiple tables, eliminating the need for single-table feature engineering.
- Launching RelBench, a new benchmark and testing suite to facilitate research.
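As a rough illustration of the idea (not the paper's actual code), here is a toy sketch that turns two relational tables into a heterogeneous graph with PyTorch Geometric, where foreign keys become edges; the table contents and feature encodings are made up.

```python
# Conceptual sketch of relational deep learning: rows become nodes,
# foreign keys become edges, and a GNN can then learn across tables.
import torch
from torch_geometric.data import HeteroData

# Toy tables (illustrative only).
customers = {"customer_id": [0, 1, 2], "age": [34.0, 52.0, 23.0]}
orders = {"order_id": [0, 1, 2, 3], "customer_id": [0, 0, 2, 1], "amount": [12.5, 7.0, 99.9, 3.2]}

data = HeteroData()
data["customer"].x = torch.tensor(customers["age"]).unsqueeze(-1)   # per-row features
data["order"].x = torch.tensor(orders["amount"]).unsqueeze(-1)

# The customer_id foreign key defines edges between order and customer nodes.
src = torch.tensor(orders["order_id"])
dst = torch.tensor(orders["customer_id"])
data["order", "placed_by", "customer"].edge_index = torch.stack([src, dst])
data["customer", "places", "order"].edge_index = torch.stack([dst, src])

print(data)  # a heterogeneous GNN can now be trained end-to-end on this graph
```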
Morgan Stanley: Nvidia anticipates shipping 60,000 to 70,000 B200 server cabinets next year, each priced between $2 million and $3 million.
That translates to an estimated annual revenue in the range of $120 billion to $210 billion from these machines (over $200 billion at the upper end).
Tom's Hardware
Nvidia and partners could charge up to $3 million per Blackwell server cabinet - analysts project over $200 billion in revenue...
The industry is going to need 60,000 to 70,000 B200-based servers.
Mistral dropped Large 123B: dense, multilingual (12 languages), with 128K context.
It comes as an instruct-only model checkpoint, with performance below Llama 3.1 405B but above Llama 3.1 70B. Released under a non-commercial license.
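A minimal sketch of loading the checkpoint with Hugging Face transformers, assuming you have accepted the license and have hardware able to hold 123B parameters (or a quantized variant); the generation settings are illustrative.

```python
# Hedged sketch: load Mistral-Large-Instruct-2407 and generate a reply.
# Requires very large GPU memory or quantization; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Large-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize this model's release in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```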
huggingface.co
mistralai/Mistral-Large-Instruct-2407 ยท Hugging Face
We're on a journey to advance and democratize artificial intelligence through open source and open science.
Stanford Engineering and Toyota Research achieved the world's first autonomous tandem drift.
By leveraging the latest AI tech and robotics, this research opens up new possibilities for future safety systems and makes driving safer for all.
Stanford University School of Engineering
AI-directed, driverless drift: Stanford Engineering and Toyota Research Institute achieve autonomous milestone
If self-driving vehicles can navigate this complex road challenge safely, the learnings could help advance the safety of automated driving in urban scenarios.
Apple Intelligence is here to test... but only as a beta within the iOS 18.1 developer beta, with no ChatGPT or GenMoji/Image Playground yet.
CNET
Apple Intelligence Features Like ChatGPT Hit iPhones in iOS 18.2 Beta
You can try out the beta version of Apple's generative AI with an iPhone 15 Pro or any iPad or Mac with an M1 chip or later. Here's everything to know about Apple Intelligence.
OpenAI released a 64k output version of GPT-4o.
At roughly 0.75 words per token, that's about 48,000 words, or a book of roughly 200 pages.
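A minimal sketch of requesting a long completion through the OpenAI Python SDK; the experimental model name below is an assumption based on the announcement, and access was limited to alpha participants.

```python
# Hedged sketch: request a long output from the experimental GPT-4o variant.
# "gpt-4o-64k-output-alpha" is the assumed model id from the alpha program.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-64k-output-alpha",
    messages=[{"role": "user", "content": "Write a very detailed, chapter-by-chapter book outline."}],
    max_tokens=64_000,   # up to 64K output tokens per request
)
print(response.choices[0].message.content)
```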
OpenAI
GPT-4o Long Output
OpenAI is offering an experimental version of GPT-4o with a maximum of 64K output tokens per request.
Synchron has connected its brain implant to Apple's Vision Pro headset in an industry first.
This means it's now possible for patients with limited physical mobility to control the device using only their thoughts.
CNBC
Neuralink rival Synchron's brain implant now lets people control Apple's Vision Pro with their minds
Neuralink competitor Synchron announces integration with Apple Vision Pro.
Applications for AI Grant's newest batch are open until August 9th!
Apply at aigrant.com - there's never been a better time to build things!
Aigrant
AI Grant
It's time for AI-native products!
So this is big: "In a series of tightly controlled and high-powered experimental studies, researchers find compelling evidence that AI companions can indeed reduce loneliness."
This is quite innovative. DeepMind's new architecture, PEER, uses over a million tiny neural networks instead of large feedforward layers in transformers.
It builds on the Mixture of Experts (MoE) principle but takes it further with the product-key memory technique to select relevant experts efficiently.
PEER outperforms dense transformer layers in efficiency and can continually learn new information.
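As a rough, hedged sketch of the product-key trick (dimensions and the single-neuron "experts" are illustrative and scaled down from roughly a million): the query is split in half, each half runs a small top-k search over its own sub-keys, and the Cartesian product of the two shortlists identifies the selected experts, so the full expert pool is never scored.

```python
# Toy product-key expert retrieval in the spirit of PEER; sizes are illustrative.
import torch

d = 64                       # query dimension, split into two halves
n_sub = 128                  # sub-keys per half (PEER-scale would be ~1024 -> ~1M experts)
n_experts = n_sub * n_sub    # 16,384 toy experts here
k = 8                        # experts retrieved per query

sub_keys_1 = torch.randn(n_sub, d // 2)
sub_keys_2 = torch.randn(n_sub, d // 2)
# Each expert is a single-neuron MLP: h = relu(up . x), output = h * down.
expert_up = torch.randn(n_experts, d)
expert_down = torch.randn(n_experts, d)

def select_experts(query: torch.Tensor):
    """Two small top-k searches instead of scoring all experts."""
    q1, q2 = query[: d // 2], query[d // 2 :]
    s1, i1 = (sub_keys_1 @ q1).topk(k)                   # top-k over first-half sub-keys
    s2, i2 = (sub_keys_2 @ q2).topk(k)                   # top-k over second-half sub-keys
    scores = (s1[:, None] + s2[None, :]).flatten()       # k*k candidate combinations
    pairs = (i1[:, None] * n_sub + i2[None, :]).flatten()
    best = scores.topk(k)
    return pairs[best.indices], best.values.softmax(dim=-1)

query = torch.randn(d)
expert_ids, weights = select_experts(query)
out = sum(w * torch.relu(expert_up[e] @ query) * expert_down[e]
          for e, w in zip(expert_ids, weights))
print(expert_ids.tolist(), out.shape)   # 8 selected experts, output of dimension d
```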