All about AI, Web 3.0, BCI
3.33K subscribers
733 photos
26 videos
161 files
3.17K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Fundraising for crypto VC has fallen off a cliff in 2023, down 98% since 2022
TSMC’s 2nm R&D team is gearing up for trial runs, aiming for 2nm mass production in 2025 for clients including Apple and Nvidia.

TSMC is speeding up the move to 2nm to widen its lead over Samsung and Intel.

TSMC is aiming for a trial run of 1,000 2nm (N2) wafers by the end of this year, before risk production in 2024 and mass production in 2025.

N2 will be TSMC’s first node to use GAA (gate-all-around) transistors; after N2 enters mass production in 2025, N2P and N2X follow in 2026.
Chiang believes that language without the intention, emotion, and purpose that humans bring to it becomes meaningless.

“Language is a way of facilitating interactions with other beings. That is entirely different than the sort of next-token prediction, which is what we have [with AI tools] now.”
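As a concrete illustration of what "next-token prediction" means, here is a deliberately tiny sketch: a bigram model over a toy corpus. Real LLMs use neural networks over subword tokens, but the objective is the same in spirit; the corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy version of the next-token-prediction objective Chiang refers to:
# count which token follows which, then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate".split()

follower_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follower_counts[current][nxt] += 1

def predict_next(token):
    """Return the most likely next token seen after `token` in the corpus."""
    followers = follower_counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

The model has no intention or purpose, only frequency statistics, which is exactly the contrast the quote draws.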
New paper: researchers showed that pre-training language-image models *solely* on synthetic images from Stable Diffusion can outperform training on real images!
LLMs outperform RL at game play by studying papers and reasoning through chain-of-thought.

LLMs could become the native interface for teaching robots.
Baidu researchers developed LinearDesign, an AI algorithm that boosts the COVID-19 mRNA vaccine antibody response by 128-fold, drawing inspiration from a simple NLP technique known as lattice parsing.
HuggingChat, the 100% open-source alternative to ChatGPT by Hugging Face, has added a web search feature.

Link: huggingface.co/chat/

GitHub repo: https://github.com/huggingface/chat-ui
Big VC news: Sequoia is splitting into 3 firms. Sequoia China is now HongShan. Sequoia India is now Peak XV Partners. And Sequoia’s U.S. and Europe business continues as Sequoia Capital.

Sequoia went global in the mid-2000s with separate funds that shared back-office functions, some LP backers, and its well-known VC brand. Now, it’ll end any back-office and profit sharing by Dec 31, and be fully separated “not later than” March 31, 2024.
RedPajama 7B trained on 1T tokens!

• Instruct, chat, base, and interim checkpoints on Hugging Face
• The instruct model outperforms all open 7B models on HELM benchmarks
• The 5TB dataset has been used to train over 100 models
Singularity is here? It’s amazing to witness how a few "hacks" such as a memory system + some prompt engineering can simulate human-like behavior.

Inspired by Stanford's "Generative Agents" paper: every agent in a GPTeam simulation has its unique personality, memories, and directives, creating human-like behavior.

"The appearance of an agentic human-like entity is an illusion. Created by a memory system and a few distinct Language Model prompts." - from the GPTeam blog. This ad hoc human behavior is mind-blowing.
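The pattern the GPTeam blog describes, a memory store plus a few templated prompts per agent, can be sketched in miniature. This is not GPTeam's actual code; the class, field names, and strings below are invented for illustration, and the actual LLM call is left out.

```python
# Illustrative sketch of an "agent = personality + directive + memory" loop.
# A real system would score memories by relevance/recency and send the
# assembled prompt to a language model; here we only build the prompt.

class Agent:
    def __init__(self, name, personality, directive):
        self.name = name
        self.personality = personality
        self.directive = directive
        self.memories = []  # chronological list of observations

    def observe(self, event):
        """Store an observation in the agent's memory."""
        self.memories.append(event)

    def build_prompt(self, situation, k=3):
        """Assemble one of the 'few distinct prompts' from memory + identity."""
        recent = "\n".join(f"- {m}" for m in self.memories[-k:])
        return (
            f"You are {self.name}, {self.personality}.\n"
            f"Directive: {self.directive}\n"
            f"Recent memories:\n{recent}\n"
            f"Situation: {situation}\nWhat do you do next?"
        )

alice = Agent("Alice", "a curious engineer", "fix the server outage")
alice.observe("Bob said the database is down")
print(alice.build_prompt("You are at your desk."))
```

The "illusion" the blog describes comes from feeding this kind of identity-plus-memory context into every model call, so the outputs stay consistent with a persona over time.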
Precision Neuroscience completes pilot clinical trials of its brain-computer interface on the first three people

This brings Precision Neuroscience one step closer to completing an FDA application for approval of its device.
How much has Apple spent on the most ambitious product - Apple Vision Pro?

Meta has spent ~$55B on its Oculus/Quest line since 2021, including R&D, hardware, marketing, software, device subsidies, and M&A.

Apple doesn’t have product-specific P&Ls, with its investments in displays, silicon, precision manufacturing, etc., applying across nearly all its products.

But since development began ~2018, Apple has filed 12,300 patents in the US and spent $97B on R&D.

Apple says over 5,000 patents were filed for Vision. It's unclear if that count includes duplicates for each country, but if they are unique patents, pro-rata allocation would mean ~$40B in Vision R&D.

Costs are so extraordinary because they span so many different advances (optics, projection, tracking, foveation, charging, battery) and components (dozen cameras, sensors for face scanning and environmental detection) and manufacturing techniques (glass) well beyond precedent.

Apple's annual R&D budget is nearly 2x Meta's annual Reality Labs spend - plausible Apple's XR allocation is close to Meta's - yet less than 7% of Apple's overall revenues.

Again, patent allocation may overestimate R&D on XR. But it may also underestimate it, as there's more pioneering research in XR than in, say, tablets or Macs.

Further, the 5,000-patent figure was specific to Vision - not other XR-related R&D, such as a Dec 2022 filing for "Self-Mixing Interferometry" for a “Sensor-based Gesture System”, with applications covering solo or multi-ring use, with or without Apple Pencil, supporting AR, VR, and MR.

Pro-rata isn’t the best way to estimate share of total R&D, but even if you haircut a third, you reach $27B.

The $55B figure on the Quest/Oculus line is really $55B spent on Reality Labs, which is thus far concentrated on these headsets, but spans other projects (e.g. CTRL-Labs) and forthcoming devices (as does Apple's R&D).
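A quick sanity check of the pro-rata arithmetic above, using only the figures quoted in this post:

```python
# Back-of-the-envelope check of the pro-rata Vision R&D estimate.
total_rd = 97e9          # Apple R&D spend since ~2018 (from the post)
total_patents = 12_300   # US patents Apple filed over that period
vision_patents = 5_000   # patents Apple attributes to Vision

pro_rata = total_rd * vision_patents / total_patents
print(f"pro-rata Vision R&D: ${pro_rata / 1e9:.1f}B")  # ~$39.4B, i.e. ~$40B

haircut = pro_rata * (1 - 1 / 3)  # "haircut a third"
print(f"after 1/3 haircut: ${haircut / 1e9:.1f}B")     # ~$26.3B, i.e. ~$27B
```

Both quoted figures ($40B pro-rata, $27B after the haircut) are consistent with the patent and R&D numbers in the post, rounded to the nearest billion.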
A Hong Kong legislator told the media that, apart from exchanges and virtual asset management, Hong Kong has not said other areas need to be regulated.

For example, game tokens, as long as they do not involve securities or futures, will not be regulated, and Hong Kong is relatively free to develop Web3.
When it comes to generative AI companies picking an HQ, the Bay Area dominates.
Microsoft launches OpenAI for Government

The company unveiled its new Azure OpenAI Service for federal agencies, state and local governments and their partners Wednesday, providing existing Azure Government public sector customers access to generative AI capabilities previously available only through its commercial cloud.

“For government customers, Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government, allowing those users to maintain the stringent security requirements necessary for government cloud operations,” Bill Chappell, chief technology officer for Strategic Missions and Technologies at Microsoft, said in a Wednesday blog post.
A new PaLM 2.1b model trained at a context length of 8k on C4.

This model release is a continuation of the previously released 150m, 410m, and 1b models.