All about AI, Web 3.0, BCI
3.29K subscribers
729 photos
26 videos
161 files
3.13K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Woooow! Bard is running on a new model called Gemini Pro. It's pretty good!
— Ilya Sutskever, just now
😢1
Google also announced Cloud TPU v5p and AI Hypercomputer. Cloud TPU v5p is its most powerful and scalable TPU accelerator to date and can train models 2.8X faster than its predecessor.

TPU v5p is also 4X more scalable than TPU v4 in terms of total available FLOPs per pod. It joins Cloud TPU v5e, which delivers a 2.3X performance-per-dollar improvement over the previous-generation TPU v4 and is the most cost-efficient TPU to date.

AI Hypercomputer is a groundbreaking supercomputer architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models.
👍3
AlphaCode-2 is also announced today. It's a competitive coding model finetuned from Gemini.

In the technical report, DeepMind shares a surprising amount of details on an inference-time search, filtering, and re-ranking system. This may be Google's Q*?

They also discussed the finetuning procedure, which is 2 rounds of GOLD (an offline RL algorithm for LLMs from 2020), and the training dataset.

AlphaCode-2 scores at the 87th percentile among human competitors.
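The report's sample → filter → cluster → rerank pipeline can be caricatured in a few lines. This toy sketch (candidate programs as Python callables, cluster size standing in for the learned scoring model) is illustrative only, not DeepMind's implementation:

```python
def run(program, x):
    """Run one candidate program (here, a Python callable) on one input."""
    try:
        return program(x)
    except Exception:
        return None

def select_best(candidates, public_tests, score):
    """Generate -> filter -> cluster -> rerank, AlphaCode-style."""
    # Filtering: keep only candidates that pass every public test.
    passing = [p for p in candidates
               if all(run(p, x) == y for x, y in public_tests)]
    # Clustering: group candidates by behaviour on the test inputs, so
    # semantically identical programs collapse into one cluster.
    clusters = {}
    for p in passing:
        key = tuple(run(p, x) for x, _ in public_tests)
        clusters.setdefault(key, []).append(p)
    # Re-ranking: score each cluster (a learned model in the paper; cluster
    # size here) and return a representative of the best one.
    best = max(clusters.values(), key=score, default=[])
    return best[0] if best else None

# Three "sampled" solutions to "double the input"; x**2 fails the public test.
cands = [lambda x: x * 2, lambda x: x + x, lambda x: x ** 2]
best = select_best(cands, public_tests=[(3, 6)], score=len)
print(best(10))  # → 20
```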
👍5
⚡️ Orchid Health announced the world's first whole-genome embryo reports for IVF clinics and pre-pregnancy decision making, based on single-cell sequencing and bioinformatics tools, covering over 99% of the genome and providing 100x more data than existing tests.
⚡️Diagnostic conundrums are an unsolved grand challenge for AI. Google DeepMind presented a new research LLM optimized for differential diagnosis (DDx), tested in NEJM challenges.

Their LLM outperformed clinicians and other LLMs, both standalone and when used as a tool. In safety-critical settings like medicine, AI should be a collaborative and helpful teammate.
Société Générale to become first big bank to list a stablecoin

France’s third-largest bank on Wednesday will debut trading of its own stablecoin, called EUR CoinVertible, on Bitstamp, an exchange based in Luxembourg.

The move marks a significant step for a traditional financial institution into a part of cryptocurrency trading currently dominated by specialist digital assets firms.

Stablecoins are facing increasing attention from regulators, with the UK last month setting out proposals to bring the tokens into the real economy.

Stablecoins are a form of digital cash that track sovereign currencies and make it easier for crypto traders to buy and sell in the market.

The majority of trading in crypto such as bitcoin is done through stablecoins tied to the US dollar.

The $130bn market is dominated by British Virgin Islands-registered Tether and the US’s Circle, which have faced questions over audits of the reserves that back their tokens. SocGen said EUR CoinVertible would be fully backed by euros.

While some large investment banks such as JPMorgan have their own stablecoins, they are only available to small groups of institutional clients. In contrast, SocGen’s stablecoin will be widely available for trading.

Stenger said the bank hoped its stablecoin would be used to settle trades in digital bonds, funds and other assets as traditional financial institutions explore digital ledgers.

Mica, the EU’s flagship digital assets regulation, comes into force next year and Stenger said that SocGen’s stablecoin is built to align with the rules, adding that “very few stablecoins are compliant with Mica”.

Asset managers and banks are increasingly exploring tokenising assets such as bonds and funds, which require digital cash, but the market is still small.

In a development welcomed by the industry, the UK Treasury and Financial Conduct Authority last month gave fund managers the green light to tokenise their funds, as long as they contain “mainstream” assets.
🔥3
Microsoft inches closer to glass storage breakthrough that could finally make ransomware attacks impossible in the data center and hyperscalers — but only Azure customers will benefit from it

The technology is strikingly similar to ceramics-based storage and may soon replace today's technology.

How does glass-based storage work? 

"This paper presents Silica: the first cloud storage system for archival data underpinned by quartz glass, an extremely resilient media that allows data to be left in situ indefinitely," the authors wrote. 

"The hardware and software of Silica have been co-designed and co-optimized from the media up to the service level with sustainability as a primary objective."

Data is written into a square glass platter with ultrafast femtosecond lasers as voxels: permanent modifications to the physical structure of the glass that allow multiple bits of data to be written in layers across its surface. Hundreds of these layers are then stacked vertically.

To read data, the drive images the platter with polarization microscopy while scanning sectors in a Z-pattern. The images are then processed and decoded, relying on a machine learning model to convert the analog signals into digital data.
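As a rough illustration of that last decoding step: the nominal analog levels and the 2-bits-per-voxel mapping below are invented for this sketch; Silica's real decoder is a learned model over polarization images, not a nearest-level lookup.

```python
# Toy analog → digital decode: each voxel encodes 2 bits as one of four
# nominal analog levels (polarization measurements collapsed to one value).
CENTROIDS = {0b00: 0.1, 0b01: 0.35, 0b10: 0.65, 0b11: 0.9}

def decode_voxel(reading):
    """Map one noisy analog reading to the nearest 2-bit symbol."""
    return min(CENTROIDS, key=lambda sym: abs(CENTROIDS[sym] - reading))

def decode_sector(readings):
    """Decode a scanned sector (a list of analog readings) to a bitstring."""
    return "".join(f"{decode_voxel(r):02b}" for r in readings)

print(decode_sector([0.12, 0.33, 0.7, 0.88]))  # → 00011011
```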

The medium is suitable for a host of sensitive industries including finance, scientific research and healthcare, due to the secure nature of archival glass storage – meaning organizations in these sectors may be able to withstand ransomware attacks targeting data being stored in the cloud. 
👍2
Anthropic found you can increase an LLM's recall capacity by 70% with a single addition to your prompt:

“Here is the most relevant sentence in the context:”

This was enough to raise Claude 2.1's score from 27% to 98% on the 200K-token context window test.
👍4
Microsoft Research announced MatterGen: a generative model that enables broad property-guided materials design

In MatterGen, researchers directly generate novel materials with desired properties, similar to how DALL·E 3 tackles image generation.

MatterGen is a diffusion model specifically designed for generating novel, stable materials. MatterGen also has adapter modules that can be fine-tuned to generate materials given a broad range of constraints, including chemistry, symmetry, and properties.

MatterGen generates 2.9 times more stable (≤ 0.1 eV/atom above the expanded reference convex hull), novel, unique structures than a SOTA model (CDVAE). It also generates structures 17.5 times closer to their local energy minimum.

MatterGen can directly generate materials satisfying desired magnetic, electronic, and mechanical properties via classifier-free guidance. Researchers verify generated materials with DFT-based workflows.
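Classifier-free guidance itself is a one-line blend of conditional and unconditional denoiser outputs. A generic sketch in standard diffusion notation (the weight and toy vectors are illustrative; MatterGen's actual denoiser operates on crystal structures):

```python
def cfg_score(eps_cond, eps_uncond, w):
    """eps = eps_uncond + w * (eps_cond - eps_uncond);
    w > 1 pushes samples harder toward the conditioning property."""
    return [u + w * (c - u) for c, u in zip(eps_cond, eps_uncond)]

e_cond = [1.0, 0.0]    # denoiser output given the property label
e_uncond = [0.5, 0.5]  # denoiser output with the label dropped
print(cfg_score(e_cond, e_uncond, w=2.0))  # → [1.5, -0.5]
```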

Additionally, MatterGen can continue generating novel materials that satisfy a target property like high bulk modulus, while screening methods instead plateau due to the exhaustion of materials in the database.

MatterGen can also generate materials given target chemical systems. It outperforms substitution and random structure search baselines equipped with MLFF filtering, especially in challenging 5-element systems.

MatterGen also generates structures given target space groups.

Finally, researchers tackle the multi-property materials design problem of finding low-supply-chain risk magnets.

MatterGen proposes structures that have both high magnetic density and a low supply-chain risk chemical composition, including promising candidates like MnFe3O4.

The researchers' results are currently verified via DFT, which has many known limitations. Experimental verification remains the ultimate test for real-world impact.
🔥5
New group aims to professionalize AI auditing

The newly formed International Association of Algorithmic Auditors (IAAA) is hoping to professionalize the sector by creating a code of conduct for AI auditors, training curriculums, and eventually, a certification program.

Over the last few years, lawmakers and researchers have repeatedly proposed the same solution for regulating artificial intelligence: require independent audits. But the industry remains a wild west; there are only a handful of reputable AI auditing firms and no established guardrails for how they should conduct their work.

Yet several jurisdictions have passed laws mandating tech firms to commission independent audits, including New York City. The idea is that AI firms should have to demonstrate their algorithms work as advertised, the same way companies need to prove they haven’t fudged their finances.

The IAAA is being launched by a number of academic researchers, like the Mozilla Foundation’s Ramak Molavi Vasse’i, along with industry leaders like former Twitter executive Rumman Chowdhury and Shea Brown, CEO of the AI auditing firm BABL AI.
👍42
Evaluating Large Language Model Creativity from a Literary Perspective.

One sentence summary: with sophisticated prompting and a human in the loop, you can get pretty impressive results.
Chain of Code (CoC), a simple yet surprisingly effective method that improves Language Model code-driven reasoning.

On BIG-Bench Hard, CoC achieves 84%, a gain of 12% over Chain of Thought.
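The core CoC idea is to execute the lines of model-written code that an interpreter can actually run, and let the LM simulate ("LMulate") the rest. In this sketch the `is_fruit` semantic call and the hard-coded LMulator stub are invented stand-ins for the model:

```python
def lmulator(line, state):
    """Stand-in for the LM simulating a semantic, non-executable step."""
    # A real system queries the LM for the state update; this stub handles
    # one hypothetical call, is_fruit(...), for illustration.
    if "is_fruit('" in line:
        var = line.split("=")[0].strip()
        arg = line.split("is_fruit('")[1].split("'")[0]
        state[var] = arg in {"apple", "banana", "orange"}

def chain_of_code(lines):
    """Run executable lines with Python; route the rest to the LMulator."""
    state = {}
    for line in lines:
        try:
            exec(line, {}, state)   # executable step: real interpreter
        except Exception:
            lmulator(line, state)   # semantic step: simulated by the "LM"
    return state

program = [
    "count = 0",
    "a = is_fruit('apple')",    # undefined in Python, goes to the LMulator
    "count = count + (1 if a else 0)",
    "b = is_fruit('laptop')",
    "count = count + (1 if b else 0)",
]
state = chain_of_code(program)
print(state["count"])  # → 1
```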
New open weights LLM from MistralAI

params.json:
- hidden_dim / dim = 14336/4096 => 3.5X MLP expand
- n_heads / n_kv_heads = 32/8 => 4X multiquery
- "moe" => mixture of experts 8X top 2 👀

Likely related code here.
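Those ratios fall straight out of params.json; a quick sketch (key names mirror the post above and are not guaranteed to match Mistral's file exactly):

```python
# Reading the architecture off the posted params.json values.
params = {
    "dim": 4096,
    "hidden_dim": 14336,
    "n_heads": 32,
    "n_kv_heads": 8,
    "moe": {"num_experts": 8, "num_experts_per_tok": 2},
}

mlp_expand = params["hidden_dim"] / params["dim"]      # MLP expansion factor
gqa_ratio = params["n_heads"] // params["n_kv_heads"]  # query heads per KV head

print(f"{mlp_expand}x MLP expand, {gqa_ratio}x multiquery, "
      f"{params['moe']['num_experts']} experts, "
      f"top-{params['moe']['num_experts_per_tok']} routing")
# → 3.5x MLP expand, 4x multiquery, 8 experts, top-2 routing
```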
Lean Copilot: LLM-human collaboration to write formal mathematical proofs that are 100% accurate.

Researchers use LLMs to suggest proof tactics in Lean and also allow humans to intervene and modify in a seamless manner.

Theorem provers like Lean can formally verify each step of a proof, but writing proofs in Lean is laborious for humans. Using LLMs to suggest Lean proof tactics speeds up proof synthesis significantly, and Lean Copilot incorporates human input only when needed.

- LLMs can suggest proof steps, search for proofs, and select useful lemmas from a large mathematical library.

- Lean Copilot is easy to set up as a Lean package and works seamlessly within Lean’s VS Code workflow.

- You can use built-in models from LeanDojo or bring your own models that run either locally (w/ or w/o GPUs) or on the cloud.
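Inside a Lean file, usage looks roughly like this (tactic names per the LeanDojo project; exact import paths may vary by version):

```lean
-- Hedged sketch: invoking Lean Copilot's tactics mid-proof.
import LeanCopilot

theorem my_add_comm (a b : Nat) : a + b = b + a := by
  suggest_tactics    -- ask the model for candidate next tactics
  -- search_proof    -- or let it search for a complete proof
```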
ARIA PD Jacques Carolan has released his opportunity space “Precisely interfacing with the human brain at scale”.

He’s exploring how to build targeted, minimally invasive tools to better understand and treat disorders of the brain and looking for feedback.
Five things you need to know about the EU’s new AI Act

1. The AI Act ushers in important, binding rules on transparency and ethics.

The regulation imposes legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or with biometric categorization or emotion recognition systems. It’ll also require them to label deepfakes and AI-generated content, and design systems in such a way that AI-generated media can be detected. This is a step beyond the voluntary commitments that leading AI companies made to the White House to simply develop AI provenance tools, such as watermarking. 

2. AI companies still have a lot of wiggle room.

The AI Act will require foundation models and AI systems built on top of them to draw up better documentation, comply with EU copyright law, and share more information about what data the model was trained on. For the most powerful models, there are extra requirements. Tech companies will have to share how secure and energy efficient their AI models are, for example. 

But here’s the catch: The compromise lawmakers found was to apply a stricter set of rules only to the most powerful AI models, as categorized by the computing power needed to train them. And it will be up to companies themselves to assess whether they fall under the stricter rules. 

3. The EU will become the world’s premier AI police.

The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement. It will be the first body globally to enforce binding rules on AI, and the EU hopes this will help it become the world’s go-to tech regulator. The AI Act’s governance mechanism also includes a scientific panel of independent experts to offer guidance on the systemic risks AI poses, and how to classify and test models. 

4. National security always wins.

5. What next? 


It might take weeks or even months before we see the final wording of the bill. The text still needs to go through technical tinkering, and has to be approved by European countries and the EU Parliament before it officially enters into law. 
Once it is in force, tech companies have two years to implement the rules. The bans on AI uses will apply after six months, and companies developing foundation models will have to comply with the law within one year.