All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
A new startup founded by former members of the Imagen team at Google Brain.
Graph of Thoughts: solving elaborate problems with LLMs

- Models LLM generations as an arbitrary graph
- "LLM thoughts" are vertices
- Edges are dependencies between thoughts
- Can combine & enhance LLM thoughts using feedback loops
- SoTA on a variety of tasks.
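The graph structure described above can be sketched in a few lines. This is a hypothetical toy, not the paper's implementation: the `Thought` class and the `add`/`aggregate` names are illustrative only.

```python
from dataclasses import dataclass


@dataclass
class Thought:
    """One LLM generation step: a vertex in the graph."""
    text: str


class GraphOfThoughts:
    """Toy illustration: thoughts are vertices, edges record
    which earlier thoughts a new thought depends on."""

    def __init__(self):
        self.thoughts = []  # vertices
        self.edges = []     # (parent_index, child_index) dependencies

    def add(self, text, parents=()):
        """Add a new thought that depends on the given parent thoughts."""
        self.thoughts.append(Thought(text))
        child = len(self.thoughts) - 1
        for parent in parents:
            self.edges.append((parent, child))
        return child

    def aggregate(self, parents, combine):
        """Merge several thoughts into one new thought: the kind of
        many-to-one move that a tree of thoughts cannot express."""
        merged = combine([self.thoughts[p].text for p in parents])
        return self.add(merged, parents=parents)


# Hypothetical usage: merge two partial solutions of a sorting task.
got = GraphOfThoughts()
a = got.add("sort the first half")
b = got.add("sort the second half")
c = got.aggregate([a, b], combine=" then merge: ".join)
```

The `aggregate` step is what makes this a graph rather than a tree: the new vertex has two parents, and a feedback loop would just be another edge pointing back to an earlier thought.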
Nordic Semiconductor is set to buy US startup Atlazo for the company's AI hardware IP. Nordic plans to add on-chip AI acceleration to devices across its portfolio.
New research suggests that our visual memories are not simply what one has just seen, but instead are the result of neural codes dynamically evolving to incorporate how we intend to use that information in the future.

Working memory is an incredibly important aspect of cognition and our daily lives. It enables us to retain small amounts of information for later use: for example, keeping the elements or sequence of a story in mind while someone is still telling it, dialing a telephone number you were just told, or tallying your grocery bill as you shop.

With regard to XR, entering an immersive virtual environment that presents novel visual imagery places demands on our working memory, especially in cognitive training tasks specifically aimed at improving working memory in clinical populations.

The findings of this study suggest that better healthcare outcomes might be achieved when patients encode not only the information itself but also why they intend to use it, namely their recovery.

“Research makes it clear that memory codes can simultaneously contain information about what we remember seeing and about the future behavior that depends on those visual memories… This means the neural dynamics driving our working memory result from reformatting memories into forms that are closer to later behaviors that rely on visual memories.”
A new study is out today in Nature! Researchers demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work.

The researchers publicly released all data and code, and are hosting a machine learning competition.
Lemur-70B: the SOTA open LLM balancing text & code capabilities

The Lemur project is an open collaborative research effort between XLang Lab and Salesforce Research.

Lemur and Lemur-chat are initialized from Llama 2-70B

1. Pretrain Llama 2 on ~100B tokens of code-focused data > Lemur-70B
2. Finetune Lemur on ~300K examples > Lemur-70B-chat

Lemur outperforms other open-source language models on coding benchmarks, yet remains competitive in textual reasoning and knowledge performance.

Lemur-chat significantly outperforms other open-source supervised fine-tuned models across various dimensions.

Model: huggingface.co/OpenLemur
Blog: xlang.ai/blog/openlemur
Brain computer interface helped create a digital avatar of a stroke survivor’s face

A woman who lost her ability to speak after a stroke 18 years ago was able to replicate her voice and even convey a limited range of facial expressions via a computer avatar. A pair of papers published in Nature yesterday about experiments that restored speech to women via brain implants show just how quickly this field is advancing.

How they did it: Both teams used recording devices implanted into the brain to capture the signals controlling the small movements that provide facial expressions. Then they used AI algorithms to decode them into words, and a language model to adjust for accuracy. One team, led by Edward Chang, a neurosurgeon at the University of California, San Francisco, even managed to capture emotions.

The caveats: Researchers caution that these results may not hold for other people, and either way, we are still a very long way from tech that’s available to the wider public. Still, these proofs of concept are hugely exciting.
Meta AI released Code Llama, a large language model built on top of Llama 2, fine-tuned for coding and state-of-the-art among publicly available coding models.
A new startup founded by former members of the team that created TensorFlow.js at Google Brain.

A new Open Source product to analyze, structure and clean data with AI.
Bond Tokenisation. The Hong Kong Monetary Authority released a report titled “Bond Tokenisation in Hong Kong”

Bond tokenisation is one of the pilot projects announced in the Policy Statement on Development of Virtual Assets in Hong Kong issued by the Financial Services and the Treasury Bureau last October.

In February this year, the HKMA assisted the Government in the successful offering of an HK$800 million tokenised green bond under the Government Green Bond Programme (the Tokenised Green Bond), marking the first tokenised green bond issued by a government globally.

The use of distributed ledger technology has been applied to primary issuance, settlement of secondary trading and coupon payment, and will be tested out in maturity redemption.

The Report :

- sets out details of the Tokenised Green Bond, and suggests available options with regard to salient aspects of a tokenised bond transaction in Hong Kong ranging from technology and platform design to deal structuring considerations.

- serves as a blueprint for potential similar issuances in Hong Kong.

- considers what could further be done to promote tokenisation in the bond market; these include exploring further use cases, addressing issues of fragmentation across platforms and systems, and enhancing Hong Kong’s legal and regulatory framework.

- enables market participants to draw reference from HKMA’s experience when considering tokenised issuances in Hong Kong.
Researchers applied an algorithm from a video game to study the dynamics of molecules in living brain cells

Dr. Tristan Wallis and Professor Frederic Meunier from UQ’s Queensland Brain Institute came up with the idea while in lockdown during the COVID-19 pandemic.

“Combat video games use a very fast algorithm to track the trajectory of bullets, to ensure the correct target is hit on the battlefield at the right time,” Dr Wallis said. “The technology has been optimized to be highly accurate, so the experience feels as realistic as possible. We thought a similar algorithm could be used to analyze tracked molecules moving within a brain cell.”

Until now, technology has only been able to detect and analyze molecules in space, and not how they behave in space and time.

“Scientists use super-resolution microscopy to look into live brain cells and record how tiny molecules within them cluster to perform specific functions,” Dr Wallis said. “Individual proteins bounce and move in a seemingly chaotic environment, but when you observe these molecules in space and time, you start to see order within the chaos. It was an exciting idea – and it worked.”

Dr. Wallis used coding tools to build an algorithm that is now used by several labs to gather rich data about brain cell activity.
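The projectile-tracking idea (follow each object across time frames to recover its trajectory) can be illustrated with a minimal, hypothetical sketch. This greedy nearest-neighbour linker is not the actual algorithm from the UQ study; the function name, threshold, and data are made up for illustration.

```python
import math


def link_trajectories(frames, max_dist=1.0):
    """Link per-frame detections into trajectories by greedily matching
    each track's latest position to the nearest detection in the next
    frame, within a distance threshold."""
    tracks = [[point] for point in frames[0]]  # start a track per detection
    for frame in frames[1:]:
        unclaimed = list(frame)
        for track in tracks:
            if not unclaimed:
                break
            last = track[-1]
            nearest = min(unclaimed, key=lambda p: math.dist(last, p))
            if math.dist(last, nearest) <= max_dist:
                track.append(nearest)
                unclaimed.remove(nearest)
        for point in unclaimed:
            tracks.append([point])  # a new object appearing mid-movie
    return tracks


# Two molecules drifting slowly to the right; one (x, y) list per frame.
frames = [
    [(0.0, 0.0), (5.0, 5.0)],
    [(0.4, 0.1), (5.3, 5.2)],
    [(0.8, 0.2), (5.6, 5.4)],
]
tracks = link_trajectories(frames)
```

Observing the linked tracks over time, rather than detections in isolation, is what lets the "order within the chaos" emerge: clustering behaviour shows up in the trajectories, not in any single frame.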
⚡️ Google Gemini eats the world – Gemini Smashes GPT-4 By 5X
The GPU-Poors, MosaicML, Together, and Hugging Face
Broken Open-Source
Compute Resources That Make Everyone Look GPU-Poor
Google Cloud TPU wins
OpenAI introduced ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more.

Bye bye a bunch of startups…
Baidu Apollo Go has launched driverless airport transportation services at Wuhan Tianhe International Airport. This expansion links urban and airport travel for the first time in China, and bridges city roads and highways.
Scientists have discovered a previously unknown mechanism by which cells break down proteins that are no longer needed

Their discovery potentially provides a new pathway that could be useful for tackling many diseases including cancer.

It’s an example of the important research that AlphaFold is helping to enable.