All about AI, Web 3.0, BCI
3.27K subscribers
728 photos
26 videos
161 files
3.12K links
This channel about AI, Web 3.0 and brain computer interface(BCI)

owner @Aniaslanyan
Download Telegram
OpenBCI launches a neuro-powered spatial computer

Galea Beta device includes a range of sensors that simultaneously measure the user’s heart, skin, muscles, eyes, and brain.

Galea Beta includes eye-tracking and displays from Finnish headset-maker, Varjo and can be ordered with the Varjo Aero, XR-3 or the recently announced XR-4.

The Galea Beta sensors can be used without the HMD, or can be tethered to a high-powered PC and used for collecting data from VR and XR environments.

Long-term goal for Galea is to bring everything you see on the table, together into one device. Optics, CPU, I/O and sensors, in one tightly synchronized integrated system.
👍3
According to a Chinese computer scientist who asked not to be named, the new Sunway is not the most powerful supercomputer in China at present.

But after details of it were given at the Supercomputing 2023 (SC23) conference in Denver, US, earlier this month, it gave the public some hints on how China has managed to sidestep US sanctions to build its own supercomputers.

This Chinese dark horse has also outdone leading supercomputers, including the Frontier, in computing efficiency.

It can maintain over 85 per cent of its peak performance in regular operation, ranking the highest among all heterogeneous systems – a type of common supercomputing architecture – and second among all systems.

Meanwhile, China’s most powerful supercomputer remains undisclosed and other supercomputing chips are also under development, according to the Chinese scientist who works at a top mainland university.
A very important sleep study came out

Sleep has a huge impact on someone's health and even health span, but now it seems that "sleep regularity is an important predictor of mortality risk and is a stronger predictor than sleep duration."

Thus sleep regularity should be a simple, yet effective target for improving general health and survival.
4
Google has quietly delayed the public debut of Gemini to January

Sundar Pichai recently decided to scrap a series of Gemini events, originally scheduled this week after the company found the AI didn’t reliably handle some non-English queries.

It’s rare for Google to launch a major product between Thanksgiving and the end of the year, but Google intended to make an exception for Gemini because it’s arguably the company’s most important initiative in a decade.

The Gemini event in Washington was intended to showcase the technology to policymakers and politicians, which have increasingly discussed potential regulations involving AI.
1
No GPU but wanna create your own LLM on laptop?

Here is a QLoRA on CPU, making LLM fine-tuning on client CPU possible.

Code.
Taiwan's National Science and Technology Council has published a list of key technologies that are significant to Taiwan's national security: semiconductor manufacturing process technology under 14 nm included.
Can computers simulate brains?

Scientists have been exploring the intersection of math, computers, and neuroscience for decades. Today, machine learning is unlocking new possibilities in brain modeling.

Fascinating Nature paper on the latest in the field.
👍4
How to Think of R&D - new report by a16Z on the cost item that is hardest to measure and track, but most important for tech companies.

1. R&D is the lifeblood of tech companies, but it’s the hardest to measure, and takes the longest to see paybacks and measure effectiveness.

2. So, how to allocate, plan, and measure R&D spend? First, start with benchmarks.

3. Then, map R&D spend to your product roadmap with expected returns/timelines. 70-20-10 is a common framework. It should be an output of the prioritization work you do, not a prescription. In platform shifts especially, like we have w/ AI now, 70-20-10 probably isn’t right.

4. Here’s another framework to consider rationale for spend and expected timing - offensive/defensive and short/long time.

5. Last, performance management is key.

6. Applying a framework and rigor to ROI is as critical for R&D as other areas of spend. The path and math are not as straightforward, but getting this right is critical.
👍2
Apple released new software from machine learning research.

MLX is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)

Code.

This may be Apple's biggest move on open-source AI so far: MLX, a PyTorch-style NN framework optimized for Apple Silicon, e.g. laptops with M-series chips.

The release did an excellent job on designing an API familiar to the deep learning audience, and showing minimalistic examples on OSS models that most people care about: Llama, LoRA, Stable Diffusion, and Whisper.
🔥1
Nvidia CEO Jensen Huang said Huawei is among a field of “very formidable” competitors in the race to create the best AI chips, adding Nvidia is working closely with US officials to make new chips for the China market that adhere “perfectly” to the latest US rules.
1
Google DeepMind developed a new way for AI agents to acquire knowledge from human demonstrations in real-time.

This allows for "cultural transmission" without needing large datasets - something that can massively amplify learning over time.

Google DeepMind also published a new paper detailing their (research-driven) attack on rival OpenAI’s ChatGPT.

The finding is that forcing the model to repeat a word forever causes it to leak training data.
👍5🔥3
Woooow! Bard is running on a new model called gemini pro. it's pretty good!
Ilya Sutskever just now.
😢1
Google also announced Cloud TPU v5p and AI Hypercomputer, most powerful and scalable TPU accelerator to date, Cloud TPU v5p can train models 2.8X faster than its predecessor.

TPU v5p is also 4X more scalable than TPU v4 in terms of total available FLOPs per pod. It joins Cloud TPU v5e which has 2.3X performance per dollar improvements over the previous generation TPU v4, and is most cost-efficient TPU to date.

AI Hypercomputer, a groundbreaking supercomputer architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models.
👍3
AlphaCode-2 is also announced today. It's a competitive coding model finetuned from Gemini.

In the technical report, DeepMind shares a surprising amount of details on an inference-time search, filtering, and re-ranking system. This may be Google's Q*?

They also discussed the finetuning procedure, which is 2 rounds of GOLD (an offline RL algorithm for LLM from 2020), and the training dataset.

AlphaCode-2 scores at 87% percentile among the human competitors.
👍5