All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain–computer interfaces (BCI)

owner @Aniaslanyan
xAI is raising $6 billion at a valuation of $18 billion from investors including Sequoia

Musk launched xAI in early 2023, and it released its chatbot Grok to premium subscribers on Musk’s social network X in December.

The company is currently training the second generation of Grok on 20,000 Nvidia H100s, the chips that the most advanced AI models operate on, according to Musk.
China’s most prominent pro-blockchain official, Yao Qian, is under investigation by the Chinese government for suspected violations of law.

The specific reasons are unknown.

He was the creator of China’s CBDC and served as the director of the central bank’s digital currency research institute.

On April 8, Yao published an article on the spot Bitcoin ETFs approved in the USA, noting that the market still expects Bitcoin prices to rise, surveying the arguments of Bitcoin's supporters and opponents, and outlining US regulatory measures for cryptocurrencies.
WebSim is such a fascinating look at what a truly generative Internet might look like.

The URL bar is a prompting engine that builds a fully interactive and customizable site based on your input.

You can instantly create websites, simulators, games, and more.

You can have fun on WebSim with no product knowledge (try typing your own name into the URL bar).

Or, there's a much deeper and more complex language to learn if you want to really build on it - with some randomness thrown in 😂

It truly is the "hallucinated Internet"!
Pleias published the largest dataset to date with automated OCR correction: 1 billion words in English, French, German, and Italian.

OCR quality is a primary concern of digitization in any large-scale organization. Scans are not always well preserved, and in many cases existing OCR tools cannot properly parse specific fonts or formats, especially in languages other than English.

Automated post-OCR correction has been made possible thanks to progress in open LLM research and several months of dedicated training and alignment by Pleias.

Results are now encouraging most of the time, on a variety of European languages, even when the text is severely degraded.
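
A minimal sketch of how LLM-based post-OCR correction works in general (the model checkpoint and prompt below are illustrative assumptions, not Pleias's actual pipeline):

```python
# Hypothetical sketch of LLM-based post-OCR correction.
# The checkpoint and prompt are illustrative, not Pleias's pipeline.
from transformers import pipeline

corrector = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",  # any open instruction-tuned LLM
)

noisy_ocr = "Tlie qnick brovvn fox jnmps ovcr the lazy d0g."

prompt = (
    "Below is text produced by an OCR engine. "
    "Rewrite it with the OCR errors corrected, changing nothing else.\n\n"
    f"OCR text: {noisy_ocr}\nCorrected text:"
)

out = corrector(prompt, max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```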
Google announced Med-Gemini, a family of Gemini models fine-tuned for medical tasks

Achieves SOTA on 10 of the 14 benchmarks, spanning text, multimodal & long-context applications.

Surpasses GPT-4 on all benchmarks!
Meta announced Better & Faster Large Language Models via Multi-token Prediction

Large language models such as GPT and Llama are trained with a next-token prediction loss.
Another triumph for Self-Play. Self-Play Preference Optimization (SPPO) has surpassed (iterative) DPO, IPO, Self-Rewarding LMs, and others on AlpacaEval, MT-Bench, and the Open LLM Leaderboard.

Remarkably, Mistral-7B-instruct-v0.2 fine-tuned by SPPO achieves superior performance to GPT-4 0613 without relying on any GPT-4 responses.
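
For reference, the core SPPO update (paraphrased from the paper; η is a step size and P(y ≻ π_t | x) is the probability, under a preference model, that response y beats the current policy's responses):

$$
\pi_{t+1} = \arg\min_{\theta}\; \mathbb{E}_{x,\; y \sim \pi_t(\cdot\mid x)}
\left( \log\frac{\pi_\theta(y\mid x)}{\pi_t(y\mid x)} - \eta\left(\mathbb{P}(y \succ \pi_t \mid x) - \tfrac{1}{2}\right) \right)^{2}
$$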
All you need is Kolmogorov–Arnold Network (KAN)

The Kolmogorov–Arnold network obliterates DeepMind's results with much smaller networks and much more automation.

KANs also discovered new formulas for the knot signature and new relations among knot invariants in an unsupervised way.

GitHub.
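
For intuition, a toy KAN-style layer: where an MLP puts fixed activations on nodes, a KAN puts a learnable univariate function on every edge. The sketch below parameterizes those functions with Gaussian radial basis functions rather than the paper's B-splines, as an illustrative simplification:

```python
# Toy KAN-style layer: a learnable univariate function on every edge,
# parameterized with Gaussian RBFs instead of the paper's B-splines.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_basis=8, grid=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(grid[0], grid[1], num_basis))
        # one coefficient vector per (input, output) edge
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, num_basis) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # evaluate the RBF basis at each input coordinate
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)  # (batch, in_dim, num_basis)
        # each edge applies its own function; outputs are summed over inputs
        return torch.einsum("bik,iok->bo", phi, self.coef)

net = nn.Sequential(ToyKANLayer(2, 5), ToyKANLayer(5, 1))
print(net(torch.randn(4, 2)).shape)  # torch.Size([4, 1])
```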
OpenAI is about to go after Google search.

This could be the most serious threat Google has ever faced.

SSL certificate logs now show that OpenAI has created search.chatgpt.com.

Microsoft Bing would allegedly power the service.

This shouldn’t be too surprising, considering:
1. OpenAI has a web crawler, GPTBot.
2. ChatGPT Plus users can also use Browse with Bing to search the web.
3. Microsoft Bing uses OpenAI’s GPT-4, customized for search.
Academic benchmarks are losing their potency. There are three types of LLM evaluations that matter:

1. Privately held test sets with publicly reported scores, run by a trusted third party that doesn't have its own LLM to promote. Scale's latest GSM1k is a great example: they are an unbiased, neutral party who ensures the test data has not leaked into anyone's training.

2. Public, comparative benchmarks like Lmsys.org Chatbot Arena, reported as Elo scores (see the sketch after this list). You can't game democracy.

3. Privately curated, internal benchmarks for each company’s own use cases. You can’t game your customers.
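
For context on point 2, the Elo score comes from the standard chess rating update: after each pairwise model battle, the winner takes rating points from the loser in proportion to how surprising the result was. A minimal sketch:

```python
# Standard Elo update: the winner gains points in proportion to how
# surprising the result was. K controls the update size.
def elo_update(r_a, r_b, score_a, k=32):
    """score_a = 1 if A wins, 0 if B wins, 0.5 for a tie."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# model A (rated 1000) upsets model B (rated 1200):
print(elo_update(1000, 1200, score_a=1))  # A gains ~24 points
```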
H-GAP is a generalist model for humanoid control.

Trained on large MoCap-derived data, it can generate diverse, natural motions & transfer skills to new tasks without fine-tuning!

Paper.
Meta and the Georgia Institute of Technology released a dataset + SOTA AI models to help accelerate research on Direct Air Capture, a key technology to combat climate change.

OpenDAC23 is the largest dataset of Metal Organic Frameworks characterized by their ability to adsorb CO2 in the presence of water — an order of magnitude larger than any other pre-existing dataset at this precision.
Unlearn AI released a new neural network architecture for learning to create digital twins of patients.
The future of AI language models may lie in predicting beyond the next word: Multi-Token Prediction

Studies suggest that the human brain predicts multiple words at once when understanding language, utilizing both semantic and syntactic information for broader predictions - now researchers from Meta are hoping to train their LLMs to do the same.

The authors proposed a new training method for language models, called "multi-token prediction," which predicts multiple words simultaneously instead of just the next word.

"Our 13B parameter models solves 12% more problems on HumanEval [benchmark test] and 17% more on MBPP than comparable next-token models."

Predicting hierarchical representations of future input and generating a multi-token response enhances performance, coherence, and reasoning capabilities (particularly for larger models) as it attempts to mimic the human brain.

Works particularly well with coding, and may become a key feature of advanced language models to be released later this year that include the number "5".
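
A minimal sketch of the training objective, assuming a toy model (the GRU trunk below is an illustrative stand-in for the paper's shared transformer trunk with n independent output heads, one per future token):

```python
# Illustrative multi-token prediction loss: a shared trunk with one
# output head per future offset (simplified; not Meta's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenLM(nn.Module):
    def __init__(self, vocab=1000, dim=128, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.trunk = nn.GRU(dim, dim, batch_first=True)  # stand-in for a transformer trunk
        self.heads = nn.ModuleList([nn.Linear(dim, vocab) for _ in range(n_future)])

    def forward(self, tokens):  # tokens: (batch, seq)
        h, _ = self.trunk(self.embed(tokens))  # (batch, seq, dim)
        return [head(h) for head in self.heads]  # one logit tensor per future offset

def multi_token_loss(model, tokens):
    logits = model(tokens)
    loss = 0.0
    for i, lg in enumerate(logits, start=1):  # head i predicts token t+i
        pred, target = lg[:, :-i], tokens[:, i:]
        loss = loss + F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))
    return loss / len(logits)

model = MultiTokenLM()
print(multi_token_loss(model, torch.randint(0, 1000, (2, 32))))
```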
Intel introduced the biggest neuromorphic computer in the world - Hala Point.

This system, commissioned by Sandia National Labs, integrates 1152 Intel Labs Loihi 2 chips in a three-dimensional array.

Changes from the first to the second generation of Loihi mean this computer can run spiking neural networks, optimization problems, and converted mainstream deep learning models at excellent power efficiency.
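
For readers unfamiliar with spiking networks: a spiking neuron integrates input over time and emits a discrete spike only when its membrane potential crosses a threshold, which is what makes sparse, event-driven execution on chips like Loihi so power-efficient. A toy leaky integrate-and-fire neuron:

```python
# Toy leaky integrate-and-fire (LIF) neuron: the basic unit of the
# spiking networks that chips like Loihi 2 run. Illustrative only.
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current  # leaky integration of input current
        if v >= threshold:      # fire when the potential crosses threshold
            spikes.append(1)
            v = 0.0             # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```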
The latest version of the Unity game engine is now available to developers as a preview.

Unity 6 Preview includes new features to make XR development easier, including Composition Layers which can significantly increase the quality of text, UI, photos, and videos in XR.
BCG has released a whitepaper, 'Transformation's Edge: The State of GenAI in Global Financial Institutions.'

Decision makers at some of the world's leading financial institutions (FIs) believe Generative AI (GenAI) presents a transformative business opportunity, according to a new survey from BCG.

But only a few have made significant progress in pursuing that vision - for instance, by establishing delivery teams or developing detailed plans for use cases.

Indeed, many FI leaders say more groundwork is needed to assemble the tools and capabilities that would foster a winning GenAI proposition. Acquiring specialized talent is a key priority.

Where FIs are using GenAI in practice, it is most often to serve support functions such as call center services or software development, rather than transform business-critical operations at scale.

Key Survey Insights

1. 85% of financial institutions in BCG's survey believe GenAI will be highly disruptive or transformational.

2. But only 2% have a fully developed GenAI talent strategy.

3. Just 26% are actively investing a significant proportion of their innovation budgets in GenAI implementation.

• Almost three quarters of survey respondents are in the early stages of use case development.

• The most progressed GenAI use cases focus on boosting internal productivity, rather than reshaping critical functions or inventing new business models.
LLMs are better than humans at designing reward functions for robotics
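
As a toy illustration of what "designing a reward function" means here, this is the kind of shaped reward an LLM might write for a quadruped forward-locomotion task; every state field below is a hypothetical placeholder, not any paper's actual code:

```python
# Hypothetical example of an LLM-written shaped reward for a quadruped
# walking task; every field of `state` is an illustrative assumption.
def reward(state):
    forward = state["forward_velocity"]        # encourage moving forward
    upright = 1.0 - abs(state["torso_tilt"])   # penalize falling over
    effort = sum(t * t for t in state["joint_torques"])  # penalize energy use
    return 1.0 * forward + 0.5 * upright - 0.01 * effort

print(reward({"forward_velocity": 1.2, "torso_tilt": 0.1, "joint_torques": [0.3, -0.2]}))
```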