AI-generated human videos from pose detection are coming with Alibaba researchers' new 'Animate Anyone'.
GitHub
[Official Updates] Follow-up plans for the project · Issue #12 · HumanAIGC/AnimateAnyone
Thank you all for your incredible support and interest in our project. We've received lots of inquiries regarding a demo or the source code. We want to assure you that we are actively working o...
OpenBCI launches a neuro-powered spatial computer
Galea Beta device includes a range of sensors that simultaneously measure the user’s heart, skin, muscles, eyes, and brain.
Galea Beta includes eye-tracking and displays from Finnish headset maker Varjo, and can be ordered with the Varjo Aero, XR-3, or the recently announced XR-4.
The Galea Beta sensors can be used without the HMD, or can be tethered to a high-powered PC and used for collecting data from VR and XR environments.
The long-term goal for Galea is to bring everything you see on the table together into one device: optics, CPU, I/O, and sensors in one tightly synchronized, integrated system.
OpenBCI Community
OpenBCI unveils vision for wearable, neuro-powered personal computer at Slush 2023
OpenBCI unveils vision for wearable, neuro-powered personal computer at Slush 2023. Latest Galea Beta device was revealed for the first time on-stage, along with OpenBCI’s future vision for "Galea Unlimited" wearable computer.
According to a Chinese computer scientist who asked not to be named, the new Sunway is not the most powerful supercomputer in China at present.
But details presented at the Supercomputing 2023 (SC23) conference in Denver, US, earlier this month gave the public some hints about how China has managed to sidestep US sanctions to build its own supercomputers.
This Chinese dark horse has also outdone leading supercomputers, including Frontier, in computing efficiency.
It can maintain over 85 per cent of its peak performance in regular operation, ranking the highest among all heterogeneous systems – a type of common supercomputing architecture – and second among all systems.
Meanwhile, China’s most powerful supercomputer remains undisclosed and other supercomputing chips are also under development, according to the Chinese scientist who works at a top mainland university.
South China Morning Post
New Sunway supercomputer hints at how China sidestepped US sanctions
Chip war between US and China fails to stop scientists building one of the top supercomputers in the world.
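The "over 85 per cent of its peak performance" figure is a sustained-vs-peak efficiency ratio, analogous to Rmax/Rpeak in TOP500 listings. A minimal sketch of the calculation (the numbers below are illustrative, not the new Sunway's actual figures):

```python
# Efficiency as sustained vs. peak performance, the metric behind the
# "over 85 per cent" figure. Values here are hypothetical.

def efficiency(sustained_pflops: float, peak_pflops: float) -> float:
    """Fraction of theoretical peak performance achieved in regular operation."""
    return sustained_pflops / peak_pflops

# Hypothetical machine: 100 PFLOPS peak, 87 PFLOPS sustained.
print(f"{efficiency(87.0, 100.0):.0%}")  # prints "87%"
```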
A very important sleep study came out
Sleep has a huge impact on someone's health and even health span, but now it seems that "sleep regularity is an important predictor of mortality risk and is a stronger predictor than sleep duration."
Thus sleep regularity should be a simple, yet effective target for improving general health and survival.
OUP Academic
Sleep regularity is a stronger predictor of mortality risk than sleep duration: A prospective cohort study
Abstract. Abnormally short and long sleep are associated with premature mortality, and achieving optimal sleep duration has been the focus of sleep health guide
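Sleep regularity in this line of research is typically measured with a Sleep Regularity Index (SRI): the probability of being in the same sleep/wake state at any two time points 24 hours apart, rescaled so 100 means perfectly regular. A rough sketch of that idea (the epoch data and the exact scaling are illustrative, not the paper's pipeline):

```python
# Sketch of a Sleep Regularity Index: compare each epoch's sleep/wake state
# with the state exactly 24 hours later, then rescale [0, 1] -> [-100, 100].

def sleep_regularity_index(states: list[int], epochs_per_day: int) -> float:
    """states: 1 = asleep, 0 = awake, sampled at fixed intervals over several days."""
    pairs = [
        (states[i], states[i + epochs_per_day])
        for i in range(len(states) - epochs_per_day)
    ]
    same = sum(a == b for a, b in pairs) / len(pairs)
    return -100.0 + 200.0 * same

# Two identical days of 24 hourly epochs (8 h sleep, 16 h wake) score 100.
day = [1] * 8 + [0] * 16
print(sleep_regularity_index(day * 2, epochs_per_day=24))  # prints 100.0
```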
Google has quietly delayed the public debut of Gemini to January
Sundar Pichai recently decided to scrap a series of Gemini events originally scheduled for this week, after the company found the AI didn't reliably handle some non-English queries.
It’s rare for Google to launch a major product between Thanksgiving and the end of the year, but Google intended to make an exception for Gemini because it’s arguably the company’s most important initiative in a decade.
The Gemini event in Washington was intended to showcase the technology to policymakers and politicians, who have increasingly discussed potential regulations involving AI.
The Information
Google Preps Public Preview of Gemini AI After Postponing In-Person Launch Events
Update, Dec.4: After Google quietly scrapped a set of in-person events to launch Gemini, its biggest artificial intelligence initiative in a decade, the company has planned a virtual preview of the new AI as soon as this week, said a person with knowledge…
No GPU, but want to create your own LLM on your laptop?
Here is QLoRA on CPU, making LLM fine-tuning possible on a client CPU.
Code.
Medium
Creating Large Language Models on Your Laptop
Making Fine-Tuning Possible on Your Personal Computer
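The trick that makes QLoRA cheap enough for a CPU: the base weights are frozen (and, in full QLoRA, quantized to 4-bit), and only tiny low-rank adapter matrices are trained. A plain-Python sketch of just the low-rank update, with toy shapes and values rather than a real model's weights:

```python
# The low-rank update at the heart of (Q)LoRA: the frozen base weight W is
# only ever used through W_eff = W + (alpha / r) * (B @ A), so training
# touches only the small A (r x n) and B (m x r) matrices.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha: float, r: int):
    """Return W + (alpha / r) * (B @ A) without modifying the frozen W."""
    BA = matmul(B, A)
    scale = alpha / r
    return [
        [w + scale * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(W, BA)
    ]

# 2x2 frozen weight, rank-1 adapter (r = 1), alpha = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # r x n
B = [[0.5], [0.5]]          # m x r
print(lora_effective_weight(W, A, B, alpha=1.0, r=1))
# -> [[1.5, 1.0], [0.5, 2.0]]
```

Because the rank r is small, the trainable parameter count (and gradient memory) shrinks by orders of magnitude, which is what puts fine-tuning within reach of a laptop CPU.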
Taiwan's National Science and Technology Council has published a list of key technologies significant to Taiwan's national security, including semiconductor manufacturing process technology below 14 nm.
Can computers simulate brains?
Scientists have been exploring the intersection of math, computers, and neuroscience for decades. Today, machine learning is unlocking new possibilities in brain modeling.
Fascinating Nature paper on the latest in the field.
Nature
How AI could lead to a better understanding of the brain
Nature - Early machine-learning systems were inspired by neural networks — now AI might allow neuroscientists to get to grips with the brain’s unique complexities.
How to Think About R&D: a new report by a16z on the cost item that is hardest to measure and track, but most important for tech companies.
1. R&D is the lifeblood of tech companies, but it’s the hardest to measure, and takes the longest to see paybacks and measure effectiveness.
2. So, how to allocate, plan, and measure R&D spend? First, start with benchmarks.
3. Then, map R&D spend to your product roadmap with expected returns/timelines. 70-20-10 is a common framework. It should be an output of the prioritization work you do, not a prescription. In platform shifts especially, like we have w/ AI now, 70-20-10 probably isn’t right.
4. Here’s another framework for weighing the rationale for spend and its expected timing: offensive/defensive and short/long time horizon.
5. Last, performance management is key.
6. Applying a framework and rigor to ROI is as critical for R&D as other areas of spend. The path and math are not as straightforward, but getting this right is critical.
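The 70-20-10 split mentioned above can be sketched as a simple budget allocation. The budget figure and bucket names below are illustrative, and as the report notes, the ratio should be an output of your prioritization work, not a prescription:

```python
# Splitting an R&D budget across core, adjacent, and transformational work
# using the 70-20-10 framework. Figures are illustrative only.

def allocate_rnd(budget: float, split=(0.70, 0.20, 0.10)) -> dict:
    assert abs(sum(split) - 1.0) < 1e-9, "split must sum to 100%"
    buckets = ("core", "adjacent", "transformational")
    return {name: budget * share for name, share in zip(buckets, split)}

print(allocate_rnd(10_000_000))
```

Swapping in a different `split` tuple is how you'd model a platform-shift scenario where 70-20-10 no longer fits.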
Apple released new software from machine learning research.
MLX is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)
Code.
This may be Apple's biggest move on open-source AI so far: MLX, a PyTorch-style NN framework optimized for Apple Silicon, e.g. laptops with M-series chips.
The release did an excellent job on designing an API familiar to the deep learning audience, and showing minimalistic examples on OSS models that most people care about: Llama, LoRA, Stable Diffusion, and Whisper.
GitHub
GitHub - ml-explore/mlx: MLX: An array framework for Apple silicon
MLX: An array framework for Apple silicon. Contribute to ml-explore/mlx development by creating an account on GitHub.
Nvidia CEO Jensen Huang said Huawei is among a field of “very formidable” competitors in the race to create the best AI chips, adding Nvidia is working closely with US officials to make new chips for the China market that adhere “perfectly” to the latest US rules.
The Edge Malaysia
Nvidia sees Huawei as formidable AI chipmaking rival, says CEO
Huawei Technologies Co is among a field of “very formidable” competitors to Nvidia Corp in the race to produce the best AI chips, according to the American company’s chief.
Google DeepMind developed a new way for AI agents to acquire knowledge from human demonstrations in real-time.
This allows for "cultural transmission" without needing large datasets - something that can massively amplify learning over time.
Google DeepMind also published a new paper detailing their (research-driven) attack on rival OpenAI’s ChatGPT.
The finding is that forcing the model to repeat a word forever causes it to leak training data.
Scale is launching Automotive Foundation Model, AFM-1.
It is a SOTA language-grounded perception model for autonomous vehicles.
Scale
Introducing Scale’s Automotive Foundation Model
State of the art, language-grounded, perception model to accelerate the development of autonomous vehicles, enabling autolabeling & curation.
Woooow! Bard is running on a new model called Gemini Pro. It's pretty good!
Google also announced Cloud TPU v5p and AI Hypercomputer. The most powerful and scalable TPU accelerator to date, Cloud TPU v5p can train models 2.8X faster than its predecessor.
TPU v5p is also 4X more scalable than TPU v4 in terms of total available FLOPs per pod. It joins Cloud TPU v5e, which delivers 2.3X better performance per dollar than the previous-generation TPU v4 and is the most cost-efficient TPU to date.
AI Hypercomputer is a supercomputer architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models.
Google Cloud Blog
Introducing Cloud TPU v5p and AI Hypercomputer | Google Cloud Blog
The new TPU v5p is a core element of AI Hypercomputer, which is tuned, managed, and orchestrated specifically for gen AI training and serving.
AlphaCode-2 was also announced today. It's a competitive coding model finetuned from Gemini.
In the technical report, DeepMind shares a surprising amount of detail on an inference-time search, filtering, and re-ranking system. Could this be Google's Q*?
They also discuss the finetuning procedure, which is 2 rounds of GOLD (an offline RL algorithm for LLMs from 2020), and the training dataset.
AlphaCode-2 scores at the 87th percentile among human competitors.
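The sample, filter, and re-rank loop described in the report can be sketched in a few lines. The candidate generator, public-test check, and scorer below are hypothetical stand-ins, not DeepMind's actual components:

```python
# Toy sample -> filter -> re-rank pipeline: draw many candidate programs,
# drop those that fail the public tests, then pick the best-scoring survivor.
import random

def solve(generate, passes_public_tests, score, n_samples: int = 1000):
    candidates = [generate() for _ in range(n_samples)]           # 1. sample widely
    survivors = [c for c in candidates if passes_public_tests(c)]  # 2. filter
    return max(survivors, key=score, default=None)                 # 3. re-rank

# Hypothetical stand-ins: each candidate is a (program_id, quality) pair.
random.seed(0)
pick = solve(
    generate=lambda: (random.randrange(100), random.random()),
    passes_public_tests=lambda c: c[0] % 2 == 0,  # pretend even ids pass the tests
    score=lambda c: c[1],
    n_samples=50,
)
print(pick)
```

The key property is that compute at inference time (more samples) substitutes for model quality, which is why the filtering and re-ranking stages carry so much of the system's performance.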