Very insightful discussions of AI's impact on scientific breakthroughs and society at PCAST
Features prominent speakers like Anima Anandkumar (NVIDIA & Caltech), Fei-Fei Li (Stanford), and Demis Hassabis (DeepMind).
YouTube
PCAST: Discussion of Artificial Intelligence (AI) Enabling Science and AI Impacts on Society
On May 19, 2023, the President’s Council of Advisors on Science and Technology (PCAST) met to hear from two panels of invited speakers and discuss Artificial Intelligence (AI) enabling science and AI impacts on society. The meeting was livestreamed on…
TSMC’s 2nm R&D team is gearing up for trial runs, aiming for 2nm mass production in 2025 for clients including Apple and Nvidia.
TSMC is speeding up the move to 2nm to widen its lead over Samsung and Intel.
TSMC is aiming for a trial run of 1,000 2nm (N2) wafers by the end of this year, before risk production in 2024 and mass production in 2025.
N2 will be TSMC’s first node to use GAA transistors, entering mass production in 2025, followed by N2P and N2X in 2026.
經濟日報 (Economic Daily)
TSMC's 2nm trial production gets underway | Industry Focus | Industry | 經濟日報
Industry sources say TSMC is going all out on its next-generation process technology, recently starting preparatory work for 2nm trial production and introducing the most advanced AI systems to save energy, cut carbon, and speed up trial-production efficiency…
Chiang believes that language without the intention, emotion and purpose that humans bring to it becomes meaningless.
“Language is a way of facilitating interactions with other beings. That is entirely different than the sort of next-token prediction, which is what we have [with AI tools] now.”
New paper - researchers showed that pre-training language-image models *solely* on synthetic images from Stable Diffusion can outperform training on real images!
LLMs outperform RL at gameplay by studying papers and reasoning through chain-of-thought.
LLMs could be the native language to teach robots.
Baidu’s researchers developed LinearDesign, an AI algorithm that boosts the COVID-19 mRNA vaccine antibody response 128-fold, drawing inspiration from a simple NLP technique known as lattice parsing.
HuggingChat, the 100% open-source alternative to ChatGPT from Hugging Face, has added a web search feature.
Link: huggingface.co/chat/
GitHub Repo https://github.com/huggingface/chat-ui
Big VC news: Sequoia is splitting into 3 firms. Sequoia China is now HongShan. Sequoia India is now Peak XV Partners. And Sequoia’s U.S. and Europe business continues as Sequoia Capital.
Sequoia went global in the mid-2000s with separate funds that shared back-office functions, some LP backers, and its well-known VC brand. Now, it’ll end any back-office and profit sharing by Dec 31, and be fully separated “not later than” March 31, 2024.
Forbes
Sequoia Is Splitting Into Three VC Firms
Sequoia’s China and India/Southeast Asia funds are shedding their brand ties, becoming the new firms HongShan and Peak XV Partners.
RedPajama 7B trained on 1T tokens!
• Instruct, chat, base, and interim checkpoints are on Hugging Face (see the loading sketch below)
• The instruct model outperforms all open 7B models on HELM benchmarks
• The 5TB dataset has been used to train over 100 models
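A hedged sketch of loading one of these checkpoints with Hugging Face transformers. The repo id below is an assumption based on Together's Hugging Face organization naming, so check the model card for the exact name; device_map="auto" also needs the accelerate package installed.

```python
# Hedged sketch: loading a RedPajama 7B checkpoint with transformers.
# The model id is an assumption; verify it on the Hugging Face hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/RedPajama-INCITE-7B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single ~16GB GPU
    device_map="auto",          # requires the accelerate package
)

prompt = "Q: What is the RedPajama dataset?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```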
Singularity is here? It’s amazing to witness how a few "hacks" such as a memory system + some prompt engineering can simulate human-like behavior.
Inspired by Stanford's "Generative Agents" paper: every agent in a GPTeam simulation has its own personality, memories, and directives, creating human-like behavior.
"The appearance of an agentic human-like entity is an illusion. Created by a memory system and a few distinct Language Model prompts." - from the GPTeam blog. This ad hoc human behaviour is mind-blowing (a minimal sketch of the pattern follows the link below).
LangChain Blog
GPTeam: A multi-agent simulation
Editor's Note: This is another edition in our series of guest posts highlighting novel applications of LangChain. After the Generative Agents paper was released, there was a flurry of open source projects rushing to incorporate some of the key ideas. GPTeam…
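A minimal sketch of that memory-plus-prompts pattern, not GPTeam's actual code: Agent and call_llm here are illustrative placeholders, and retrieval is naive recency instead of the importance/relevance scoring used in the Generative Agents paper.

```python
# Minimal sketch of the "memory system + a few prompts" pattern behind agents
# like those in GPTeam. NOT GPTeam's actual code: Agent and call_llm are
# illustrative placeholders.
from dataclasses import dataclass, field
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (OpenAI, LangChain, etc.)."""
    return f"<model response to: {prompt[:40]}...>"


@dataclass
class Agent:
    name: str
    personality: str                      # fixed persona injected into every prompt
    directives: List[str]                 # standing goals for the agent
    memories: List[str] = field(default_factory=list)  # append-only memory stream

    def observe(self, event: str) -> None:
        """Store an observation so later prompts can condition on it."""
        self.memories.append(event)

    def act(self, situation: str) -> str:
        """Compose persona + recent memories into one prompt and ask the model."""
        recent = "\n".join(self.memories[-5:])  # naive recency-based retrieval
        prompt = (
            f"You are {self.name}. Personality: {self.personality}\n"
            f"Directives: {'; '.join(self.directives)}\n"
            f"Recent memories:\n{recent}\n"
            f"Situation: {situation}\nWhat do you do next?"
        )
        action = call_llm(prompt)
        self.observe(f"I decided: {action}")  # decisions feed back into memory
        return action


# Example: an agent acting from its own memory stream.
alice = Agent("Alice", "curious and talkative", ["make friends"])
alice.observe("Bob waved at me in the hallway.")
print(alice.act("Bob is standing by the coffee machine."))
```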
Precision Neuroscience completes pilot clinical trials of its brain-computer interface on the first three people
This brings Precision Neuroscience one step closer to completing an FDA application for approval of their device.
WIRED
People Let a Startup Put a Brain Implant in Their Skull—for 15 Minutes
Precision Neuroscience’s brain-computer interface sits on top of the brain, not in it. That could make it easier to implant, and less likely to damage tissue.
How much has Apple spent on its most ambitious product, the Apple Vision Pro?
Meta has spent ~$55B on its Oculus/Quest line since 2021, including R&D, hardware, marketing, software, device subsidies, and M&A.
Apple doesn’t have product-specific P&Ls, with its investments in displays, silicon, precision manufacturing, etc., applying across nearly all its products.
But since development began ~2018, Apple has filed 12,300 patents in the US and spent $97B on R&D.
Apple says over 5,000 patents were filed for Vision. It's unclear whether that counts duplicates for each country, but if they are unique patents, a pro-rata allocation would mean ~$40B in Vision R&D.
Costs are so extraordinary because they span so many different advances (optics, projection, tracking, foveation, charging, battery), components (a dozen cameras, sensors for face scanning and environmental detection), and manufacturing techniques (glass) well beyond precedent.
Apple's annual R&D budget is nearly 2x Meta's annual Reality Labs spend - so it's plausible Apple's XR allocation is close to Meta's - yet less than 7% of Apple's overall revenues.
Again, patent allocation may overestimate R&D on XR. But it may also underestimate it, as there's more pioneering research in XR than in, say, tablets or Macs.
Further, the 5,000-patent figure was specific to Vision - not other XR-related R&D, such as this Dec 2022 filing.
That filing was for "Self-Mixing Interferometry" for a “Sensor-based Gesture System”, with applications covering solo or multi-ring use, with or without Apple Pencil, supporting AR, VR, and MR.
Pro-rata isn’t the best way to estimate share of total R&D, but even with a one-third haircut, you reach ~$27B (back-of-the-envelope arithmetic below).
The $55B figure for the Quest/Oculus line is really $55B spent on Reality Labs, which is thus far concentrated on these headsets but spans other projects (e.g. CTRL-Labs) and forthcoming devices (as does Apple's R&D).
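Putting the post's numbers together, a quick sketch of the pro-rata arithmetic; the inputs are the figures quoted above, not audited data.

```python
# Pro-rata estimate of Apple's Vision R&D from the figures quoted above.
total_us_patents = 12_300   # Apple US patent filings since development began ~2018
total_rd_spend_b = 97.0     # Apple R&D spend over the same period, in $B
vision_patents = 5_000      # patents Apple attributes to Vision

vision_share = vision_patents / total_us_patents   # ~0.41
pro_rata_rd_b = vision_share * total_rd_spend_b    # ~$39B, rounded to ~$40B in the post
haircut_rd_b = (2 / 3) * 40                        # one-third haircut on ~$40B -> ~$27B

print(f"Pro-rata: ~${pro_rata_rd_b:.0f}B; with a one-third haircut: ~${haircut_rd_b:.0f}B")
```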
A Hong Kong legislator told the media that, apart from exchanges and virtual asset management, Hong Kong has not said other activities need to be regulated.
For example, game tokens will not be regulated as long as they do not involve securities or futures, leaving Hong Kong relatively free to develop Web3.
ChainCatcher
A conversation with Hong Kong Legislative Council member Johnny Ng (吴杰庄): systematically and in phases driving Hong Kong to become a Web3 development hub - ChainCatcher
Compared with issues such as policy regulation and funding, what Hong Kong's Web3 development lacks most is talent; a preliminary estimate puts the shortfall in Hong Kong's Web3 industry at 50,000 to 100,000 people.
Microsoft launches OpenAI for Government
The company unveiled its new Azure OpenAI Service for federal agencies, state and local governments and their partners Wednesday, providing existing Azure Government public sector customers access to generative AI capabilities previously available only through its commercial cloud.
“For government customers, Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government, allowing those users to maintain the stringent security requirements necessary for government cloud operations,” Bill Chappell, chief technology officer for Strategic Missions and Technologies at Microsoft, said in a Wednesday blog post.
Microsoft Azure Blog
Azure OpenAI Service: Transforming Workloads for Azure Government | Microsoft Azure Blog | Microsoft Azure
You now have the opportunity to use Microsoft Azure OpenAI Service through purpose-built, AI-optimized infrastructure to securely access the large language models in the commercial environment from Azure Government. Learn more.
A new PaLM 2.1b model has been trained at a context length of 8k on C4.
This model release is a continuation of the previously released 150m, 410m, and 1b models.
GitHub
GitHub - conceptofmind/PaLM: An open-source implementation of Google's PaLM models
An open-source implementation of Google's PaLM models - conceptofmind/PaLM
The Chinese Academy of Sciences has unveiled a new high-performance processor chip and a new operating system, based on the popular open-source chip design standard known as RISC-V.
www.chinadaily.com.cn
New processor, OS to propel open-source chip ecosystem
REMEDIS is a unified self-supervised learning framework for developing foundation models in medical imaging.
The results show improvements in data-efficient generalization.
Nature
Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging
Nature Biomedical Engineering - A representation-learning strategy for machine-learning models applied to medical-imaging tasks improves model robustness and training efficiency and mitigates...
UNICEF_Metaverse_XR_and_children_2023_1686261291.pdf
Metaverse report by UNICEF and Diplo