All about AI, Web 3.0, BCI
3.3K subscribers
729 photos
26 videos
161 files
3.13K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
VantAI and NVIDIA introduced PINDER & PLINDER, the Protein-protein and Protein-Ligand INteraction Dataset and Evaluation Resources. They are >500x and 10x larger than previous datasets and provide predefined splits that leverage a novel interface-clustering and splitting procedure, together with strict quality criteria, to ensure accurate and fair performance evaluation.

Researchers find that current performance estimates are not only inflated by up to 2-3x but also, surprisingly and excitingly, that deliberate splits plus clustering allow training models that generalize much better.

Via extensive benchmarking and retraining of DiffDock(-PP), enabled by NVIDIA’s BioNeMo framework, researchers show that clustering & splits can dramatically improve generalization.

Paper.
GitHub and this.
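
To make the clustering-and-splitting idea concrete, here is a minimal Python sketch of cluster-based dataset splitting: group similar interfaces into clusters, then assign whole clusters to train/val/test so near-duplicates never leak across splits. The `interface_similarity` function, threshold, and split logic are illustrative assumptions, not the actual PINDER/PLINDER pipeline (which uses structure-based interface similarity and strict quality filters).

```python
import random
from collections import defaultdict

def greedy_cluster(ids, interface_similarity, threshold=0.5):
    """Assign each interface to the first cluster whose representative it resembles."""
    reps = []
    clusters = defaultdict(list)
    for i in ids:
        for r in reps:
            if interface_similarity(i, r) >= threshold:
                clusters[r].append(i)
                break
        else:  # no representative was similar enough: start a new cluster
            reps.append(i)
            clusters[i].append(i)
    return list(clusters.values())

def cluster_split(clusters, fractions=(0.8, 0.1, 0.1), seed=0):
    """Assign whole clusters (never individual entries) to train/val/test,
    so similar interfaces cannot leak across splits."""
    random.Random(seed).shuffle(clusters)
    total = sum(len(c) for c in clusters)
    names = ("train", "val", "test")
    budget = [f * total for f in fractions]
    out = {name: [] for name in names}
    for c in clusters:
        k = max(range(len(names)), key=lambda j: budget[j])  # split with most remaining room
        out[names[k]].extend(c)
        budget[k] -= len(c)
    return out
```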
If you are crazy ambitious, technical, and ready to start building a multi-billion-dollar company, you should apply for the South Park Commons Founder Fellowship.

Apply here by August 9.
Patronus AI announced the release of 'Lynx', a new open-source hallucination detection model

They claim that it outperforms existing AI models such as GPT-4, Claude-3-Sonnet, and more.

They are also open-sourcing a new hallucination benchmark, HaluBench: a large-scale, 15k-sample dataset that contains challenging hallucination tasks and covers diverse real-world domains like finance and medicine.

Hugging Face: Lynx 8B.
Hugging Face: Lynx 70B.
You can use quantized Lynx-8B locally, deploy Lynx-70B with GPUs, or reach out to Patronus AI for easy API access.
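
A minimal sketch of running Lynx-8B locally with 🤗 transformers, assuming the repo id and a generic PASS/FAIL prompt; check the model card linked above for the exact repo name and the prompt template Patronus recommends.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id - verify against the Hugging Face page linked above.
model_id = "PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct"

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or pass a quantization config for a 4-bit local run
    device_map="auto",
)

# Generic faithfulness-check prompt (illustrative; not the official template).
prompt = (
    "Given the DOCUMENT, QUESTION and ANSWER, decide whether the ANSWER is "
    "faithful to the DOCUMENT. Reply PASS or FAIL with a short explanation.\n"
    "DOCUMENT: The 2023 report lists revenue of $12M.\n"
    "QUESTION: What revenue does the report list?\n"
    "ANSWER: The report lists revenue of $20M.\n"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```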
AI at Work: a report by BCG

Top 10 TAKEAWAYS:

1. 2 out of 3 business leaders using Gen AI.
2. 50% of employees saving 5+ hours a week.
3. 50% of employees believe their job will disappear.
4. High potential: TMT, Finance, Energy, Healthcare, Consumer.
5. Brazil, India, MENA, Nigeria, S Africa >> Mature markets.
6. Top benefits: Save time, Move fast, Improve quality.
7. The more you use, the more you like.
8. Frontline workers beginning to embrace Gen AI.
9. Deploying Gen AI = Management, not Technology, challenge.
10. Train. Train. Train.
Apple released a 7B model that beats Mistral 7B.

They fully open sourced everything, including weights, training code, and dataset.

The secret to its performance: Data curation.

They released the best open LLM training dataset and a full pipeline for evaluating data-curation methods.

Model
GitHub
Dataset
Paper
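
Since the post's main point is data curation, here is a toy Python sketch of the kind of curation pass involved (cheap heuristic quality filters plus near-duplicate removal). It is illustrative only; the actual DCLM pipeline described in the paper uses much more sophisticated, model-based filtering.

```python
import hashlib
import re

def quality_ok(text: str) -> bool:
    """Cheap heuristic filters: minimum length, alphabetic ratio, repetition."""
    if len(text) < 200:
        return False
    alpha = sum(c.isalpha() for c in text) / len(text)
    if alpha < 0.7:
        return False
    words = text.split()
    if len(set(words)) / len(words) < 0.3:  # heavily repetitive page
        return False
    return True

def near_dup_key(text: str, n: int = 8) -> str:
    """Hash of the normalized leading n-gram as a crude duplicate signature."""
    norm = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.md5(" ".join(norm.split()[:n]).encode()).hexdigest()

def curate(docs):
    seen, kept = set(), []
    for d in docs:
        if not quality_ok(d):
            continue
        key = near_dup_key(d)
        if key in seen:
            continue
        seen.add(key)
        kept.append(d)
    return kept
```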
Researchers released the NuminaMath datasets: the largest collection of ~1M math competition problem-solution pairs, ranging in difficulty from junior challenges to Math Olympiad preselection problems.

These datasets were used to win the 1st Progress Prize of the AI Math Olympiad and consist of two subsets:

1. Chain of Thought (CoT): 860k problem-solution pairs templated with CoT to enhance mathematical reasoning in natural language

2. Tool-integrated reasoning (TIR): 73k synthetic solutions derived from GPT-4 with code-execution feedback to decompose hard problems into simpler subproblems that can be solved with Python

Models trained on NuminaMath achieve best-in-class performance among open weight models and approach or surpass proprietary models on math competition benchmarks.

Tech report along with the training and inference code.
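
A minimal loading sketch with the 🤗 `datasets` library; the dataset ids below are assumptions based on the release name, so check the tech report and hub page for the exact repos and field schema.

```python
from datasets import load_dataset

# Assumed hub ids for the two subsets described above.
cot = load_dataset("AI-MO/NuminaMath-CoT")  # ~860k CoT problem-solution pairs
tir = load_dataset("AI-MO/NuminaMath-TIR")  # ~73k tool-integrated reasoning samples

print(cot)
print(tir["train"][0])  # field names depend on the actual schema
```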
OpenAI CEO Sam Altman gave low-income people $1,000/month for three years, no strings attached. Here's what happened.

Now, the results of one of the largest guaranteed-basic-income studies are in.

Back in 2019, 3,000 Texas and Illinois residents were enrolled in this guaranteed-basic-income study.

The experiment was funded by Sam Altman, who raised $60 million for the study ($14 million of which was his own money).

All participants had incomes below $28,000.

One third got $1,000/month for three years, while the remaining control-group members got $50 per month.

For those who received the $1,000 payments, overall spending increased by ~$310/month.
Most of that money went toward food, transportation, and rent.
There was also an increase in offering financial support to others.

However, that doesn't mean those who received the $1,000 payments saw improvements across the board.

There was no "direct evidence of improved access to healthcare or improvements to physical and mental health," researchers found.

While there was an increase in life satisfaction for a short time at the start of the study, it didn't last.

"Cash alone cannot address challenges such as chronic health conditions, lack of childcare, or the high cost of housing."

But there were certainly some benefits that can't be ignored.

For those receiving $1,000/month, their total individual savings spiked by almost 25%.

And incomes rose significantly for all groups.

Recipients of the $1,000/month saw incomes rise from ~$30,000 to $45,710, on average.

Incomes for those in the control group rose even more, to $50,970.

"Cash offers flexibility and may increase agency to make employment decisions that align with recipients' individual circumstances, goals, and values," according to researchers.

As for what the study participants themselves felt, most couldn't believe their luck when selected to participate.

"Looking back, I regret that I didn't save more of it," one said.

"It's almost like a miracle," another said.
Enterprise-AI-focused startup Cohere raised a $500 MM round at a $5.5 Bn valuation

Highlights:
- Revenue of $35 MM ARR, up from $13 MM ARR at the end of 2023

- The $5.5 Bn valuation implies roughly a 150x price-to-sales multiple ($5,500 MM / $35 MM ARR ≈ 157x) and a 2x step-up from last year's valuation

Investors:

The company has raised $500 million in Series D funding, which it plans to announce on Monday.

The round was led by Canadian pension investment manager PSP Investments, alongside a syndicate of additional new backers including Cisco Systems Inc., Japan's Fujitsu, chipmaker Advanced Micro Devices Inc.'s AMD Ventures, and Canada's export credit agency EDC.

Customers: Cohere has customers across a wide range of industries.

They include banks, tech companies and retailers.

One luxury consumer brand is using a virtual shopping tool Cohere built to help workers suggest products to customers. Toronto-Dominion Bank, a new customer, will use Cohere's AI for tasks such as answering questions based on financial documents.

Sourcing:

Cohere’s models can be used across 10 languages, including English, Spanish, Chinese, Arabic and Japanese, and its models can cite sources in answers.
Apple presents SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models

It achieves comparable or better performance than SotA video LLMs that are fine-tuned on video datasets, while itself being training-free.
Meta released a new open source AI model, Llama 3.1, that it claims outperforms OpenAI and other rivals on several benchmarks.

Zuckerberg is also now saying he expects the Meta AI assistant to surpass ChatGPT usage by the end of the year.
Chinese team developed a fabrication method to produce a semiconductor material just 0.7 nanometres thick.

The team was led by Liu Kaihui of Peking University, Liu Can of Renmin University, and Zhang Guangyu of the Institute of Physics at the Chinese Academy of Sciences.

The researchers’ findings, which were published in the peer-reviewed journal Science on July 5, address a key barrier to reducing the size of traditional silicon-based chips – as devices shrink, silicon chips run into physical limits that affect their performance.

The scientists explored two-dimensional (2D) transition-metal dichalcogenides (TMDs) as an alternative to silicon, with a thickness of just 0.7 nanometres compared to silicon’s typical 5-10 nanometres.

TMDs also consume less power and have superior electron transport properties, making them ideal for the ultra-scaled down transistors that will be a feature of next-generation electronic and photonic chips.

However, producing TMDs has been challenging – until now. According to the paper, the technique developed by the scientists allows them to quickly produce high-quality 2D crystals in seven formulations, making mass production feasible.

The traditional fabrication process, which involves layer-by-layer assembly of atoms on a substrate – like building a wall with bricks – often results in crystals with insufficient purity, Liu Kaihui told state news agency Xinhua.

“This is due to uncontrollable atomic arrangements in crystal growth and the accumulation of impurities and defects,” he said.

The team arranged the first layer of atoms on the substrate as if they were following the traditional process. However, subsequent atoms were added between the substrate and the first crystal layer, pushing upwards like bamboo shoots to form new layers.
Can an organism understand the code it is programmed in? Humans are getting close to this with new Generative AI models trained directly on biological data.

Anyone reading this post is programmed by biological code: DNA, RNA, and proteins.

With LLMs now being trained directly on biological code, we are rapidly moving towards empowering ourselves, as a species, with the toolset to decipher our own programming language better.

So, how exactly are LLMs trained directly on biological data? Let's take protein data as an example, but the same paradigm applies to DNA or RNA.

This is a sneak peek into work at Converge Bio. Here are the five steps (a toy code sketch follows the list):

1. 𝗔𝘀𝘀𝗲𝗺𝗯𝗹𝗲 𝗮 𝗺𝗮𝘀𝘀𝗶𝘃𝗲 𝗽𝗿𝗼𝘁𝗲𝗶𝗻 𝘀𝗲𝗾𝘂𝗲𝗻𝗰𝗲 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲: With genome sequencing becoming cheaper every year, we now have billions of publicly available protein sequences for building these databases.

2. 𝗧𝗼𝗸𝗲𝗻𝗶𝘇𝗲 𝘁𝗵𝗲 𝗽𝗿𝗼𝘁𝗲𝗶𝗻 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲: In this step, they build a dictionary of the "words" of the protein language. These words are called tokens and they are the atomic elements that LLMs learn from. There is a huge amount of research we and others are doing on how to best divide the protein language into tokens.

3. 𝗨𝗻𝘀𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗣𝗿𝗲-𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 (𝗚𝗣𝗧): Train a transformer-based model by hiding part of the sequence and training the model to fill in the missing tokens. They now have a model that deeply understands the statistical distribution of information in our massive database. At this stage, the trained model is often referred to as a foundational model.

4. 𝗦𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗲𝗱 𝗺𝘂𝗹𝘁𝗶-𝘁𝗮𝘀𝗸 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴. They now train the foundational model to understand not only the statistical distribution of information in the data but also how that information translates into real-world biological traits. This is done by training the model on any labeled trusted dataset you can get your hands on that connects a protein sequence with a biological outcome. In the protein context, here are a few examples - Protein structures, Protein annotations, and binding affinity experimental data. Research in AI shows that by using this paradigm, the model becomes "smarter" when introduced to multiple diverse tasks (similar to a child learning new skills).

5. 𝗙𝗶𝗻𝗲 𝘁𝘂𝗻𝗶𝗻𝗴: Given a new biological question, you can now fine-tune the model with a relatively small dataset and use it to predict complex biological interactions, explain the model's decision in a protein sequence context, and generate novel and better-performing proteins.
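
To ground steps 2-3, here is a toy PyTorch sketch: character-level tokenization of protein sequences (one token per amino acid) and masked-token pre-training of a tiny transformer encoder. All names, sizes, and the masking rate are illustrative assumptions; real protein foundation models are vastly larger and use more careful tokenization and training objectives.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

# Step 2: tokenize - one token per amino acid, plus PAD and MASK.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
vocab = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
PAD, MASK = len(vocab), len(vocab) + 1
VOCAB_SIZE = len(vocab) + 2

def encode(seq: str) -> torch.Tensor:
    return torch.tensor([vocab[a] for a in seq if a in vocab])

# Step 3: masked-token pre-training of a small transformer encoder (toy sizes).
class TinyProteinLM(nn.Module):
    def __init__(self, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def mask_tokens(tokens, mask_rate=0.15):
    """Hide a random ~15% of positions; the model must reconstruct them."""
    masked = tokens.clone()
    is_masked = torch.rand_like(tokens, dtype=torch.float) < mask_rate
    masked[is_masked] = MASK
    return masked, is_masked

model = TinyProteinLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSILVTRPSPAGEELVSRLRTLGQVAWHFPLIEF"]
batch = pad_sequence([encode(s) for s in seqs], batch_first=True, padding_value=PAD)

for step in range(100):  # toy training loop
    masked, is_masked = mask_tokens(batch)
    if not is_masked.any():
        continue
    logits = model(masked)
    loss = loss_fn(logits[is_masked], batch[is_masked])  # loss only on hidden tokens
    opt.zero_grad()
    loss.backward()
    opt.step()
```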
A neural modeling tour de force.
New paper "Towards enhanced functionality of vagus neuroprostheses through in silico optimized stimulation"

Paper.

Main contributions:

1. Researchers developed a realistic and anatomically accurate #model of the vagus nerve.

2. Developed #computational methods to make simulations efficient (from days to minutes of computation).

3. Devised a method using #physiological experiments to functionalize the models in vivo during animal experiments.

4. Optimized #neurosimulation in silico, reducing side effects related to laryngeal contractions by ~70% when using VNS to reduce heart rate.

5. Shared the entire modeling framework here for public reuse and developed an online platform to showcase the models.

This multidisciplinary work features histological data, computational modeling, and animal experiments.
Machines have surpassed humans in empathy.

New evidence: AI beat humans in detecting emotions and giving support. As long as people didn't know AI created the messages, they felt more heard.

It's not that AI is so good. It's that we often fail to use our emotional intelligence.
The future of robotics is in your hands. Literally. A new paper: R+X

A person records everyday activities while wearing a camera.A robot passively learns those skills.
No labels, no training.
👀Starlink now operating on over 1000 aircraft, said Elon Musk.
Answer AI released gpu.cpp

This is a really exciting project which provides a device-independent way to use GPU compute. No more writing separate code for CUDA, AMD, Mac, and Intel GPUs!

GitHub.
OpenAI to further block access by mainland China, Hong Kong-based developers

The move is set to deal a blow to Chinese companies developing services based on OpenAI's large language models.

A number of AI start-ups in China are building apps based on OpenAI's large models, which also generate revenue for OpenAI, according to a person familiar with the matter, who added that if OpenAI tightens its restrictions, Chinese developers will have to turn to local alternatives.

Zhipu AI, one of China’s top generative AI start-ups, said it would help affected developers transfer to its platform.

For more on startups and open-source LLMs in China, see.
⚡️Google presented the first AI to solve International Mathematical Olympiad problems at a silver medalist level

It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of its previous geometry system.
⚡️OpenAI is entering the search market. 10,000 test users will get early access.

They're testing SearchGPT, a temporary prototype of new AI search features.