All about AI, Web 3.0, BCI
3.3K subscribers
731 photos
26 videos
161 files
3.14K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
The first digital twin of the human immune system highlights the vast societal opportunities in physical-digital fusion.

Virtual replicas of ‘real life’ give humanity exponential insights.

Much of the real world – like the seabed, the innards of cars, or the workings of our bodies – is hidden.

Digital-physical fusion creates dynamic representations of real-world objects, systems, and processes in the digital world, making them clearer. And the human immune system is, in many ways, the ultimate example.

Why do we need a digital twin of the human immune system?

Digital twins provide a raft of advantages in all walks of society and business. The immune system project could mean:

1. A clear model of something very complicated

2. The chance to see – and understand – connections that were previously opaque

3. Immediate support for tackling immune-dependent challenges such as organ transplantation, cancer, and autoimmune diseases

4. Quicker and cheaper drug discovery

5. A doorway to drugs with fewer harmful side effects

6. The route to personalized medicine
👍4
NFTs in the city. MoonPay enables Web3 experience at Manhattan’s Seaport

The Howard Hughes Corp., the large real estate development and management company that owns the Seaport, teamed up with Web3 infrastructure firm MoonPay to create an interactive digital experience in Seaport, New York City’s historic waterfront neighborhood.

The Seaport Scavenger Hunt will entertain visitors from August 21 to October 31.

Participants are invited to embark on a quest to discover ten purple “pearls” hidden throughout the Seaport neighborhood.

Each pearl features a QR code that can be scanned using a mobile device. Once scanned, participants will receive a free digital token, designed by 3D artist and NFT creator Johana Kroft, which will be added to their digital wallets.

These digital tokens represent playful interpretations of iconic New York City objects. Collecting all ten tokens will qualify participants to enter a weekly prize drawing. Prizes include VIP tickets to the Summer Concert Series on The Rooftop at Pier 17, complimentary workouts at HIIT the Deck Boxing, curated culinary gift boxes worth up to $350 from the Tin Building, and more. Participants have the opportunity to play multiple times each week, increasing their chances of winning.
Once a bustling maritime and commercial hub, the Seaport has evolved into a vibrant entertainment and dining destination within Lower Manhattan. It serves as a community anchor and attracts creative businesses drawn to the area’s dynamic atmosphere and connectivity.
Europe spent €600 million to recreate the human brain in a computer. How did it go?

It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end.

Its audacious goal was to understand the human brain by modelling it in a computer.

During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness, and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions.

EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments.

Management aside, the HBP has stacked up some important and useful science. By creating and combining 3D maps of around 200 cerebral-cortex and deeper brain structures, HBP scientists made the Human Brain Atlas, which is accessible through EBRAINS. The atlas depicts the multilevel organization of the brain, from its cellular and molecular architecture to its functional modules and connectivity.
OpenAI’s most significant product update since the App Store: GPT-3.5 finetuning API

This will be the largest LoRA-as-a-service ever. GPT-4 fine-tuning is coming in a few months.

Pricing: inference (output tokens) is 2x more expensive than training tokens.

API is quite simple: submit a file -> create a finetuning job -> serve.
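A rough sketch of that three-step flow using the OpenAI Python SDK (v1.x); the training file name and example prompt are placeholders:

```python
# Sketch of the submit -> finetune -> serve flow with the OpenAI Python SDK (>=1.0).
# "train.jsonl" is a placeholder file of chat-formatted training examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Submit a file
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create a finetuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Serve: once the job succeeds, call the resulting model like any other
job = client.fine_tuning.jobs.retrieve(job.id)
if job.status == "succeeded":
    response = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
```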
A new startup founded by former members of the Imagen team at Google Brain.
Graphs of thoughts for solving elaborate problems with LLMs

- Models LLM generations as an arbitrary graph
- "LLM thoughts" are vertices
- Edges are dependencies between thoughts
- Can combine & enhance LLM thoughts using feedback loops (see the sketch below)
- SoTA on a variety of tasks.
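A minimal illustrative sketch of that structure, not the authors' implementation; `call_llm` stands in for any LLM API:

```python
# Thoughts are vertices; each thought keeps edges to the thoughts it depends on.
from __future__ import annotations
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    raise NotImplementedError("route this to your LLM of choice")

@dataclass
class Thought:
    content: str                                           # LLM generation at this vertex
    parents: list[Thought] = field(default_factory=list)   # dependency edges

def generate(prompt: str) -> Thought:
    """Create a new thought (vertex) from a prompt."""
    return Thought(call_llm(prompt))

def aggregate(thoughts: list[Thought], instruction: str) -> Thought:
    """Combine several thoughts into one; edges point back to all parents."""
    joined = "\n".join(t.content for t in thoughts)
    return Thought(call_llm(f"{instruction}\n{joined}"), parents=thoughts)

def refine(thought: Thought, feedback: str) -> Thought:
    """Feedback loop: improve an existing thought, keeping the dependency edge."""
    prompt = f"Improve the following using this feedback: {feedback}\n{thought.content}"
    return Thought(call_llm(prompt), parents=[thought])
```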
Nordic Semiconductor is set to buy US startup Atlazo for the company's AI hardware IP. Nordic plans to add on-chip AI acceleration to devices across its portfolio.
New research suggests that our visual memories are not simply what one has just seen, but instead are the result of neural codes dynamically evolving to incorporate how we intend to use that information in the future.

Working memory is an incredibly important aspect of cognition and our daily lives. It enables us to retain small amounts of information to be used later: keeping the elements or sequence of a story in mind while someone is still telling it, dialing a telephone number you were just told, or tallying your grocery bill as you shop.

With regard to XR, entering an immersive virtual environment that presents novel visual imagery requires us to use our working memory, especially in cognitive training tasks specifically aimed at improving working memory in clinical populations.

The findings of this study suggest that better healthcare outcomes may be achievable when patients encode not only the information itself but also why they intend to use it, namely their recovery.

“Research makes it clear that memory codes can simultaneously contain information about what we remember seeing and about the future behavior that depends on those visual memories… This means the neural dynamics driving our working memory result from reformatting memories into forms that are closer to later behaviors that rely on visual memories.”
🆒3
A new study is out today in Nature! Researchers demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work.

The researchers have publicly released all data and code, and are hosting a machine learning competition.
Lemur70B: the SOTA open LLM balancing text & code capabilities

The Lemur project is an open collaborative research effort between XLang Lab and Salesforce Research.

Lemur and Lemur-chat are initialized from Llama 2-70B

1. Pretrain Llama 2 on ~100B tokens of code-focused data > Lemur-70B
2. Finetune Lemur on ~300K examples > Lemur-70B-chat

Lemur outperforms other open-source language models on coding benchmarks, yet remains competitive in textual reasoning and knowledge performance.

Lemur-chat significantly outperforms other open-source supervised fine-tuned models across various dimensions.

Model: huggingface.co/OpenLemur
Blog: xlang.ai/blog/openlemur
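A quick sketch of loading the chat model with Hugging Face transformers; the repository id is assumed from the OpenLemur org linked above, so check huggingface.co/OpenLemur for the exact name:

```python
# Load Lemur-70B-chat and generate a completion (70B weights need substantial GPU memory).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLemur/lemur-70b-chat-v1"  # assumed repo id; verify on the OpenLemur org page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```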
Brain computer interface helped create a digital avatar of a stroke survivor’s face

A woman who lost her ability to speak after a stroke 18 years ago was able to replicate her voice and even convey a limited range of facial expressions via a computer avatar. A pair of papers published in Nature yesterday about experiments that restored speech to women via brain implants show just how quickly this field is advancing.

How they did it: Both teams used recording devices implanted into the brain to capture the signals controlling the small movements that provide facial expressions. Then they used AI algorithms to decode them into words, and a language model to adjust for accuracy. One team, led by Edward Chang, a neurosurgeon at the University of California, San Francisco, even managed to capture emotions.

The caveats: Researchers caution that these results may not hold for other people, and either way, we are still a very long way from tech that’s available to the wider public. Still, these proofs of concept are hugely exciting.
Meta AI released Code Llama, a large language model built on top of Llama 2, fine-tuned for coding and state-of-the-art among publicly available coding tools.
A new startup founded by former members of the team that created TensorFlow.js at Google Brain.

A new Open Source product to analyze, structure and clean data with AI.
Bond Tokenisation. The Hong Kong Monetary Authority released a report titled “Bond Tokenisation in Hong Kong”

Bond tokenisation is one of the pilot projects announced in the Policy Statement on Development of Virtual Assets in Hong Kong issued by the Financial Services and the Treasury Bureau last October.

In February this year, the HKMA assisted the Government in the successful offering of HK$800 million of tokenised green bond under the Government Green Bond Programme (the Tokenised Green Bond), marking the first tokenised green bond issued by a government globally.

Distributed ledger technology has been applied to primary issuance, settlement of secondary trading, and coupon payment, and will be tested in maturity redemption.

The Report:

- sets out details of the Tokenised Green Bond, and suggests available options with regard to salient aspects of a tokenised bond transaction in Hong Kong ranging from technology and platform design to deal structuring considerations.

- serves as a blueprint for potential similar issuances in Hong Kong.

- considers what could further be done to promote tokenisation in the bond market; these include exploring further use cases, addressing issues of fragmentation across platforms and systems, and enhancing Hong Kong’s legal and regulatory framework.

- enables market participants to draw reference from HKMA’s experience when considering tokenised issuances in Hong Kong.