All about AI, Web 3.0, BCI
3.25K subscribers
727 photos
26 videos
161 files
3.11K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Jeff Bezos' space company has signed a contract with NASA to fly to the Moon

NASA has awarded a NextSTEP-2 Appendix P Sustaining Lunar Development (SLD) contract to Blue Origin.

Blue Origin’s National Team partners include Lockheed Martin, Draper, Boeing, Astrobotic, and Honeybee Robotics.  

Under this contract, Blue Origin and its National Team partners will develop and fly both a lunar lander that can make a precision landing anywhere on the Moon’s surface and a cislunar transporter.

These vehicles are powered by LOX-LH2. The high-specific impulse of LOX-LH2 provides a dramatic advantage for high-energy deep space missions.

Nevertheless, lower performing but more easily storable propellants (such as hydrazine and nitrogen tetroxide as used on the Apollo lunar landers) have been favored for these missions because of the problematic boil-off of LOX-LH2 during their long mission timelines.
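The specific-impulse advantage compounds through the rocket equation. A quick sketch with representative Isp values (~450 s for vacuum LOX-LH2, ~311 s for Apollo-class hypergolics; illustrative numbers, not Blue Origin's actual figures):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0/mf)."""
    return isp_s * G0 * math.log(mass_ratio)

mass_ratio = 3.0  # same wet/dry mass ratio for both stages
dv_hydrolox = delta_v(450.0, mass_ratio)  # typical vacuum Isp for LOX-LH2
dv_hypergol = delta_v(311.0, mass_ratio)  # Apollo LM descent engine class

print(f"LOX-LH2:    {dv_hydrolox:.0f} m/s")
print(f"hypergolic: {dv_hypergol:.0f} m/s")
print(f"advantage:  {dv_hydrolox / dv_hypergol:.2f}x")
```

For the same mass ratio, delta-v scales linearly with Isp, so hydrolox buys roughly 45% more delta-v per stage, which is exactly what high-energy deep space missions need, if boil-off can be managed.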
🔥2
Game changer. A central question in neuroscience is how consciousness arises from the dynamic interplay of brain structure and function.

Here researchers decompose functional MRI signals from pathological and pharmacologically-induced perturbations of consciousness into distributed patterns of structure-function dependence across scales: the harmonic modes of the human structural connectome.

They show that structure-function coupling is a generalisable indicator of consciousness that is under bi-directional neuromodulatory control.

They find increased structure-function coupling across scales during loss of consciousness, whether due to anaesthesia or brain injury, capable of discriminating between behaviourally indistinguishable sub-categories of brain-injured patients, tracking the presence of covert consciousness.

The opposite harmonic signature characterises the altered state induced by LSD or ketamine, reflecting psychedelic-induced decoupling of brain function from structure and correlating with physiological and subjective scores.

Overall, connectome harmonic decomposition reveals how neuromodulation and the network architecture of the human connectome jointly shape consciousness and distributed functional activation across scales.
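The decomposition itself is conceptually simple: the harmonic modes are eigenvectors of the structural connectome's graph Laplacian, and functional activity is projected onto them. A toy numpy sketch with a random connectome and a made-up coupling index (not the paper's exact metric):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural connectome: symmetric weighted adjacency over 20 regions
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Harmonic modes = eigenvectors of the graph Laplacian L = D - A
L = np.diag(A.sum(axis=1)) - A
eigvals, harmonics = np.linalg.eigh(L)  # columns ordered by "spatial frequency"

# Project one fMRI-like snapshot onto the connectome harmonics
signal = rng.standard_normal(n)
coeffs = harmonics.T @ signal  # contribution of each harmonic mode

# A simple structure-function coupling index: fraction of signal energy
# captured by the smoothest (low-frequency) half of the modes
energy = coeffs ** 2
coupling = energy[: n // 2].sum() / energy.sum()
print(f"low-frequency energy fraction: {coupling:.2f}")
```

In this picture, loss of consciousness shifts energy toward the smooth, structure-aligned modes (higher coupling), while psychedelics shift it the other way.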
👍3
Top500 list of the fastest supercomputers

https://top500.org/
👍2
As the deadlock in negotiations to raise the U.S. government's $31.4 trillion debt limit keeps markets tentative, Goldman Sachs warns that a potential U.S. debt deal could strain liquidity, causing a ripple effect on Bitcoin.
Microsoft Researchers Introduce Reprompting: An Iterative Sampling Algorithm that Searches for the Chain-of-Thought (CoT) Recipes for a Given Task without Human Intervention.

Paper: arxiv.org/abs/2305.09993
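The core loop is a Gibbs-style resampling of in-context CoT examples: keep the ones that lead to correct answers, resample the rest. A toy sketch with a stand-in "model" (the real algorithm scores candidates with an actual LLM):

```python
import random

random.seed(0)

# Toy stand-in for an LLM: it solves a probe task correctly with
# probability that grows with the quality of the CoT recipe it is shown.
def toy_model_solve(recipe):
    quality = sum(r["good"] for r in recipe) / max(len(recipe), 1)
    return random.random() < 0.2 + 0.7 * quality

def sample_cot(task):
    # Stand-in for sampling a fresh chain-of-thought; some are helpful.
    return {"task": task, "good": random.random() < 0.3}

def reprompting(tasks, iters=50):
    """Iteratively resample one CoT example at a time, accepting the
    candidate when the model still answers correctly (Gibbs-style)."""
    recipe = [sample_cot(t) for t in tasks]
    for _ in range(iters):
        i = random.randrange(len(recipe))           # slot to resample
        candidate = sample_cot(recipe[i]["task"])
        trial = recipe[:i] + [candidate] + recipe[i + 1:]
        if toy_model_solve(trial):                  # accept on success
            recipe = trial
    return recipe

recipe = reprompting(["add two numbers", "reverse a list"], iters=30)
print("good CoT examples in final recipe:", sum(r["good"] for r in recipe))
```

Because good recipes make acceptance more likely, the chain drifts toward effective CoT demonstrations without any human writing them.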
🔥2
Generative approaches to IR, which store documents in a transformer model instead of a search index, are on the rise.

Though not production-ready yet, this is definitely a trend to watch, with increasingly good results on small collections. Here are some notable papers from this month.

1. How Does Generative Retrieval Scale to Millions of Passages?
Manages scaling to the full MS MARCO collection (8M passages), but results are not near SOTA yet.
arxiv.org/abs/2305.11841

2. Large Language Models are Built-in Autoregressive Search Engines.
Evaluates the capability of LLMs to memorize / dream up URLs that actually happen to exist.
arxiv.org/abs/2305.09612

3. TOME: A Two-stage Approach for Model-based Retrieval, by Ruiyang Ren et al.
Uses tokenized URLs as DOCids and also achieves scaling to the full MS MARCO collection with decent results.
arxiv.org/abs/2305.11161

4. Learning to Tokenize for Generative Retrieval.
This paper focuses on learning meaningful DOCids using a method called GenRet. New SOTA results on NQ320K, with additional evaluation on MS MARCO and BEIR.
arxiv.org/abs/2304.04171

5. Recommender Systems with Generative Retrieval.
This paper uses the DSI framework for recommenders, achieves SOTA models on an Amazon dataset and improves retrieval of cold-start items.
arxiv.org/abs/2305.05065
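Common to several of these papers is decoding docids under a constraint that only real identifiers can be emitted. A minimal sketch of that trick using a trie over docid token sequences (illustrative, not any one paper's code):

```python
# At each decoding step, only token continuations that stay inside the
# set of valid docids are allowed, so the model can never hallucinate
# an identifier that is not in the collection.

def build_trie(docids):
    trie = {}
    for docid in docids:
        node = trie
        for tok in docid:
            node = node.setdefault(tok, {})
        node["$"] = True  # end-of-docid marker
    return trie

def allowed_next_tokens(trie, prefix):
    """Tokens the decoder may emit after `prefix`."""
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return [t for t in node if t != "$"]

docids = ["d12", "d13", "d29"]
trie = build_trie(docids)
print(allowed_next_tokens(trie, ""))    # only 'd' can start a docid
print(allowed_next_tokens(trie, "d1"))  # '2' or '3'
```

In a real system this filter masks the LLM's next-token logits; here single characters stand in for subword tokens.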
🔥3
Heading into the era of truly private, personalized AI assistants

Amazing to see LLMs like RedPajama-INCITE 3B run locally on mobile with hardware acceleration using WebAssembly and WebGPU. No need to write custom code for custom hardware.
🔥2
SO much important stuff being announced at Microsoft Build today. Much of it is great news for developers and ChatGPT users alike:

1. Developers can now use one platform to build plugins that work across both consumer and business surfaces, including ChatGPT, Bing, Dynamics 365 Copilot (in preview) and Microsoft 365 Copilot (in preview).

This is a HUGE win for plugin developers who have adopted early. You can now make your plugin available to orders of magnitude more people without any additional work required.

2. Browsing in ChatGPT!

Microsoft is announcing that Bing is coming to ChatGPT as the default search experience.
This is a huge win: by many metrics Bing is the best game in search, and now ChatGPT users will get it out of the box.
Now, answers are grounded by search and web data and include citations so users can learn more, all directly from within chat. The new experience is rolling out to ChatGPT Plus subscribers starting today and will be available to free users soon by simply enabling a plugin.

3. MEGA plugin platform

That's because OpenAI and Microsoft are unifying their plugins standard.

Build once, and you get access to users across:

- ChatGPT
- Bing Chat
- Dynamics 365 Copilot
- Microsoft 365 Copilot
- Windows Copilot

Plugins are a SCREAMING opportunity atm.
👍2🔥2
A Japanese robotics company designed a system of six spider-like robotic limbs that the user can fully control

Essentially, turning humans into cyborgs.

Use cases range from warehouse work to hospital surgery rooms.

However, the most significant impact could be improving the lives of people with disabilities.
Another milestone in LLM miniaturization.

Scaling up, then scaling down, will be the rhythm of the open-source AI community.

QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark:

Paper: arxiv.org/abs/2305.14314
Code+Demo: github.com/artidoro/qlora
Samples
Colab
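The gist, in toy numpy form: the base weights are stored in 4 bits and frozen, while only small low-rank adapter matrices are trained. This sketch uses uniform absmax quantization for simplicity; real QLoRA uses the NormalFloat4 data type plus double quantization and paged optimizers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight, quantized to 4 bits (toy uniform absmax scheme)
W = rng.standard_normal((64, 64)).astype(np.float32)
scale = np.abs(W).max() / 7                       # map values into [-7, 7]
W_q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale            # dequantized on the fly

# Trainable low-rank adapter: only A and B receive gradients
r, alpha = 8, 16
A = rng.standard_normal((r, 64)).astype(np.float32) * 0.01
B = np.zeros((64, r), dtype=np.float32)           # delta starts at zero

def forward(x):
    # Effective weight = frozen 4-bit base + scaled low-rank update
    return x @ (W_deq + (alpha / r) * (B @ A)).T

x = rng.standard_normal((4, 64)).astype(np.float32)
print("mean quantization error:", float(np.abs(W - W_deq).mean()))
```

Storing the base model at 4 bits cuts its memory to roughly 1/8 of fp32, which is what makes finetuning a large model on a single GPU feasible.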
🔥1
The False Promise of Imitating Proprietary LLMs

Open-sourced LLMs are adept at mimicking ChatGPT's style but not its factuality. There remains a substantial capabilities gap that only better base LMs can close.

If you want GPT-4 performance, you'll need far more than four open-source language models working together.

One-to-one, you'll never match its factuality in particular.

It requires new design patterns as well as better base LMs, which will take time and effort.
The Ethereum exchange balance has dropped to a five-year low
The Grace Hopper Superchip architecture, shown off by Nvidia's Jensen Huang is super geeky but fascinating.

It basically marries the specialization of the Hopper GPU with the generalization of the Grace CPU by allowing access to all memory.

Here we have a more classic architecture.

Remember that memory cannot be accessed unless it can be addressed, which is done through a page table. In this traditional approach, the GPU and CPU run different page tables.

With a GH Superchip, one page table exists & translates memory addresses so GPU can access CPU memory (here it's LPDDR5X) and CPU can read the GPU's memory (HBM3).
Curious to know how devs will utilize this approach.
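A toy model of the page-table difference (the names and layout are illustrative only, not Nvidia's actual data structures):

```python
# With separate page tables, a CPU-side lookup of a GPU-resident page
# faults; with one shared table (Grace Hopper style), either side can
# translate any virtual address, whichever memory kind backs it.

class PageTable:
    def __init__(self):
        self.entries = {}  # virtual page -> (memory_kind, physical_frame)

    def map(self, vpage, kind, frame):
        self.entries[vpage] = (kind, frame)

    def translate(self, vpage):
        return self.entries.get(vpage)  # None models a page fault

# Traditional: two disjoint tables
cpu_pt, gpu_pt = PageTable(), PageTable()
cpu_pt.map(0x10, "LPDDR5X", 1)  # CPU memory
gpu_pt.map(0x20, "HBM3", 7)     # GPU memory
print(cpu_pt.translate(0x20))   # None: CPU faults on the GPU's page

# Grace Hopper style: one shared table spanning both memory kinds
shared = PageTable()
shared.map(0x10, "LPDDR5X", 1)
shared.map(0x20, "HBM3", 7)
print(shared.translate(0x20))   # CPU can reach HBM3
print(shared.translate(0x10))   # GPU can reach LPDDR5X
```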
BiomedGPT: a Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks

Outperforms the majority of preceding SotA models across 5 tasks with 20 datasets spanning over 15 biomedical modalities.
RIP online job interviews.

This AI tool enables real-time transcriptions for your microphone input AND speaker output.

It then generates a response for the user to answer questions based on the live conversation:

The tool uses Whisper for transcriptions and GPT-3.5 for suggested responses.
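The plumbing is roughly a loop of transcribe-then-suggest. A hedged sketch with stand-ins for the Whisper and GPT-3.5 calls (the real tool wires these to live microphone and speaker audio streams):

```python
def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a Whisper call; would return the recognized speech.
    return audio_chunk.decode("utf-8", errors="ignore")

def suggest(conversation: list[str]) -> str:
    # Stand-in for a GPT-3.5 call conditioned on the running transcript.
    last_question = conversation[-1]
    return f"Suggested answer to: {last_question!r}"

conversation: list[str] = []
for chunk in [b"Tell me about yourself.", b"Why this role?"]:
    conversation.append(transcribe(chunk))  # mic + speaker transcripts
    print(suggest(conversation))            # live answer suggestion
```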
🔥1😁1
New paper: “The Economics of Augmented and Virtual Reality”

The focus is on what provides value, particularly to decision-making. This lets the authors explicitly rule out what will be of low value, which turns out to be much of the focus of AR/VR up to now.

For AR, value is created when there is high contextual entropy (that is, there is a ton of information in the environment) and the tech allows you to reduce that information and the cognitive load of processing it.

Thus, AR that pops notifications into your eye-line is not of high value; it adds to the cognitive load of the environment you are in. So Google Glass, North, etc. were on the wrong path.
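"Reducing contextual entropy" can be made concrete with Shannon entropy: useful AR shrinks the number of environment states you have to attend to. A small illustrative calculation (the probabilities are made up):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A cluttered environment: 16 equally likely states to check by hand
cluttered = [1 / 16] * 16
print(f"before AR: {entropy(cluttered):.1f} bits")  # 4.0 bits

# Useful AR filters the scene down to the few states that matter
filtered = [0.9, 0.05, 0.05]
print(f"after AR:  {entropy(filtered):.2f} bits")
```

The value of the AR system, on this view, is the bits of cognitive load it removes; a notification overlay moves the number in the wrong direction.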

For VR, providing nice-looking environments for meetings with people you already know provides less information than a Zoom call. Thus, Meta-style virtual meetings are not worth the effort for most things. They have to be better than Zoom, and it is unclear VR will achieve that.
👍21💩1