Morgan Stanley survey results for e-commerce activity. Google gained ground relative to the prior survey (done in Sep. 2023).
This is so impressive! Unitree introduced the Unitree G1: a humanoid agent and AI avatar.
Price from $16K.
Unlocks unlimited athletic potential (extra-large joint movement angles, 23-34 joints).
Force-controlled dexterous hands for manipulating all kinds of objects.
Driven by imitation and reinforcement learning.
You can now turn any glasses into AI smart glasses for just $20 with OpenGlass.
It will then record your life and remember people's names, count calories, translate live, and much more.
And it's fully open source.
GitHub
GitHub - BasedHardware/OpenGlass: Turn any glasses into AI-powered smart glasses
Turn any glasses into AI-powered smart glasses. Contribute to BasedHardware/OpenGlass development by creating an account on GitHub.
OpenAI just dropped GPT-4o and it will completely change the AI assistant game.
10 wild examples:
1. Visual assistant in real-time
2. Helping students learn in real-time
3. Real-time translation
4. Meeting assistant
5. Can be interrupted in real-time and "change emotions"
6. Helps you add multi-line text to images
7. Meeting notes with multiple speakers
8. 3D object synthesis
Example prompt: "A realistic looking 3D rendering of the OpenAI logo with 'OpenAI'."
9. Brand placement on images
10. Generate fonts from text
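For developers, here is a minimal sketch of what a multimodal GPT-4o call looks like through the OpenAI Python SDK (v1.x); the image URL is a placeholder:

```python
# Minimal sketch: mixed text + image request to GPT-4o via the OpenAI SDK.
# Requires OPENAI_API_KEY in the environment; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```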
OpenAI
Hello GPT-4o
We’re announcing GPT-4o, our new flagship model which can reason across audio, vision, and text in real time.
The Technology Innovation Institute launched the second iteration of its renowned LLM – Falcon 2.
The series comes in two versions:
1. Falcon 2 11B, a more efficient and accessible LLM trained on 5.5 trillion tokens with 11 billion parameters, and
2. Falcon 2 11B VLM, distinguished by its vision-to-language model (VLM) capabilities, which enable seamless conversion of visual inputs into textual outputs.
Both models are multilingual, but Falcon 2 11B VLM notably stands out as TII's first multimodal model – and the only top-tier open model currently offering this image-to-text conversion capability, marking a significant advancement in AI innovation.
Tested against several prominent pre-trained models in its class, Falcon 2 11B surpasses the performance of Meta’s newly launched Llama 3 8B and performs on par with the first-place Gemma 7B from Google (Falcon 2 11B: 64.28 vs. Gemma 7B: 64.29), as independently verified by Hugging Face, a US-based platform hosting an objective evaluation tool and global leaderboard for open LLMs.
More importantly, Falcon 2 11B and 11B VLM are both open-source, giving developers worldwide unrestricted access. Plans call for broadening the Falcon 2 next-generation line with a range of model sizes, further enhanced with advanced machine-learning capabilities such as Mixture of Experts (MoE) to push performance to even more sophisticated levels.
Falcon 2 11B models, equipped with multilingual capabilities, seamlessly tackle tasks in English, French, Spanish, German, Portuguese, and various other languages, enriching their versatility and magnifying their effectiveness across diverse scenarios.
Falcon 2 11B VLM, a vision-to-language model, can identify and interpret images and visuals from the environment, enabling a wide range of applications across industries such as healthcare, finance, e-commerce, education, and legal.
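Since the weights are open, here is a minimal sketch of running Falcon 2 11B locally with Hugging Face transformers, assuming the published checkpoint ID tiiuae/falcon-11B; an 11B model needs roughly 24 GB of GPU memory in bfloat16:

```python
# Minimal sketch: text generation with the open Falcon 2 11B weights.
# Assumes the checkpoint ID tiiuae/falcon-11B and a GPU with ~24 GB memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The Falcon 2 series is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```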
www.tii.ae
Falcon 2: UAE’s Technology Innovation Institute Releases New AI Model Series, Outperforming Meta’s New Llama 3
TII launched the second iteration of its renowned large language model (LLM) – Falcon 2.
Auditing AI engines. UK agency releases tools to test AI model safety
Called Inspect, the toolset is available under an open-source license, specifically the MIT License.
The AI Safety Institute claimed that Inspect marks “the first time that an AI safety testing platform which has been spearheaded by a state-backed body has been released for wider use.”
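To give a feel for the platform, here is a minimal sketch of an Inspect eval modeled on the project's introductory examples; the API names (Task, plan, scorer) reflect the initial release and may have evolved since:

```python
# Minimal sketch of an Inspect eval (pip install inspect-ai), modeled on the
# project's introductory examples; API names are from the initial release.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def capital_cities():
    return Task(
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        plan=[generate()],   # solver pipeline: just ask the model
        scorer=includes(),   # pass if the target appears in the output
    )

# Run from the shell against a model of your choice, e.g.:
#   inspect eval capital_cities.py --model openai/gpt-4o
```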
GOV.UK
AI Safety Institute releases new AI safety evaluations platform
The AI Safety Institute has released a new open-source testing platform to strengthen AI safety evaluations.
Hyperion Software by Ultraleap is the ultimate computer-vision platform revolutionising the world of human-machine interfaces (HMI).
This highly flexible platform gives users unprecedented control over their hand tracking interactions, allowing them to tune different parameters and switch between models to suit their application.
Features:
- microgestures
- hands handling objects
- low, balanced and ultra power modes to scale performance to hardware
- enhanced compatibility
- switching between all of these instantaneously in your application
- camera parameter controls
- and more to come!
Yi-Large introduced: 01.AI's largest model, accessible in multiple ways:
Yi-Large API (global)
platform.01.ai
Yi-Large API (China)
platform.lingyiwanwu.com
Yi-Large + Wanzhi productivity product (万知 in China)
wanzhi.com
Gemini 1.5 Pro will now have a 2M-token context length. #GoogleIO2024
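Here is a minimal sketch of what a long-context call looks like with the google-generativeai SDK, assuming the model keeps the gemini-1.5-pro identifier and your account has access to the expanded window:

```python
# Minimal sketch: long-context request via the google-generativeai SDK.
# Assumes GOOGLE_API_KEY is set and the account has expanded-context access;
# the input file is a hypothetical long document.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

with open("large_codebase_dump.txt") as f:
    corpus = f.read()  # with a 2M-token window, whole repos can fit in one prompt

response = model.generate_content(
    ["Summarize the main components of this codebase:", corpus]
)
print(response.text)
```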
Google announces Gemma 2, a 27B-parameter version of its open model, launching in June.
TechCrunch
Google announces Gemma 2, a 27B-parameter version of its open model, launching in June
At Google I/O, Google introduced Gemma 2, the next generation of Google's Gemma models, which will launch with a 27 billion parameter model in June.
Google introduced LearnLM: a new family of models based on Gemini and fine-tuned for learning.
LearnLM applies educational research to make products — like Search, Gemini and YouTube — more personal, active and engaging for learners.
Today we have AI systems that can write stories, systems that can create game maps, and systems that can play games. How about all of it at once? In "Word2World: Generating Stories and Worlds through Large Language Models," researchers demonstrate Word2World, a system that can generate complete stories and games – and play them!
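As a toy illustration only (not the paper's actual pipeline), the core story-to-world idea can be sketched by asking an LLM to render a story as a playable tile grid; the tile legend and prompt format below are invented for illustration:

```python
# Toy sketch of the story-to-world idea: have an LLM turn a short story into
# a tile-grid game map, then parse the rows. NOT the paper's actual pipeline;
# the tile legend and prompt format are invented for illustration.
from openai import OpenAI

client = OpenAI()
LEGEND = "#=wall .=floor @=protagonist E=enemy G=goal"

def story_to_world(story: str) -> list[str]:
    prompt = (
        f"Story: {story}\n"
        f"Render a 10x10 game map for this story using the tiles {LEGEND}. "
        "Output only the 10 rows of the grid."
    )
    response = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content.strip().splitlines()

for row in story_to_world("A knight crosses a ruined keep to rescue a prisoner."):
    print(row)
```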
arXiv.org
Word2World: Generating Stories and Worlds through Large Language Models
Large Language Models (LLMs) have proven their worth across a diverse spectrum of disciplines. LLMs have shown great potential in Procedural Content Generation (PCG) as well, but directly...
Google will have its new Trillium TPU processors manufactured on a TSMC 3nm or 4nm process, media reports say, adding that Google's design partner is Broadcom and the TPU is based on the Arm architecture.
Separately, the processor in Google’s Pixel 10 smartphones, due in 2025, will also be made using TSMC’s 3nm process.
工商時報 (Commercial Times)
TSMC and Unimicron enter Google's supply chain
Google announced on the 14th that it will launch Trillium, its sixth-generation self-developed TPU chip, expected by year-end. The chip will use TSMC's 3-4nm process, with Broadcom handling design on the Arm architecture; Google says it is its highest-performing and most energy-efficient TPU to date. Analysts note that Google is actively developing its own chips and, per its development roadmap, plans to build the seventh- and eighth-generation products with MediaTek and Alchip (世芯-KY), respectively. This...
VR_and_robot_therapy_for_rehabilitation_1715865000.pdf
1.6 MB
Amazing application of VR combined with brain measurements for motor rehabilitation.
This is exactly the type of VR in healthcare learning application that I believe will disrupt healthcare.
This offers an extensive review of this literature.
From a neuroscience of learning perspective this is spot on.
VR broadly engages multiple motor learning centers in the brain in synchrony.
Combine that with signals that can be extracted in real-time from the brain and you have the potential for strong rehabilitation.
There is a ton more work to do but reviews like this can guide us.
AR/VR Market set for major growth?
IDC reports a promising turnaround for the AR/VR industry, with headset shipments expected to surge 44.2% to 9.7 million units in 2024.
Key Highlights:
1. Launches like the Meta Quest 3 & Apple's Vision Pro are not just capturing headlines but also pushing the boundaries for what's possible in AR and VR.
2. From virtual to mixed reality, companies are paving the way for authentic AR experiences, with a significant growth potential highlighted for both sectors through 2028.
What does the future hold?
- Shipments expected to reach 24.7 million units by 2028, expanding beyond gaming into business applications.
- Forecast to grow at an 87.1% CAGR, reaching 10.9 million units by 2028.
IDC
IDC Forecasts Robust Growth for AR/VR Headset Shipments Fueled by the Rise of Mixed Reality
IDC examines consumer markets by devices, applications, networks, and services to provide complete solutions for succeeding in these expanding markets.
Hugging Face launched ZeroGPU, a shared infrastructure for indie and academic AI builders to run AI demos on Spaces.
Technically speaking, ZeroGPU leverages Hugging Face's experience in hosting and serving more than 100 Petabytes monthly from the Hugging Face Hub.
ZeroGPU allows Spaces to run on multiple GPUs by making Spaces efficiently hold and release GPUs as needed (as opposed to a classical GPU Space that holds exactly one GPU at any time).
This architecture is also more energy-efficient since GPUs are shared rather than duplicated. ZeroGPU uses Nvidia A100 GPU devices under the hood.
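In practice this follows Hugging Face's documented ZeroGPU pattern: a Space decorates its GPU-bound function with @spaces.GPU, and a device is attached only for the duration of each call. A minimal sketch (the SDXL model choice is an example, not part of the announcement):

```python
# Minimal sketch of a ZeroGPU Gradio Space: the `spaces` package attaches an
# A100 only while the decorated function runs. The model is an example choice.
import gradio as gr
import spaces
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.to("cuda")  # under ZeroGPU, CUDA placement is deferred until a GPU is attached

@spaces.GPU  # GPU allocated for the call, released afterwards
def generate(prompt: str):
    return pipe(prompt).images[0]

gr.Interface(generate, gr.Text(), gr.Image()).launch()
```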
Stages_of_Funding_An_Explainer__1715945861.pdf
3.8 MB
Stages of Funding. Awesome Explainer by Spice Route Finance
1. This document provides an in-depth examination of the various stages of funding for startups, from initial rounds like pre-seed and seed funding to more advanced stages including Series A, B, C, and potentially Series D and E rounds.
2. Each stage is meticulously described, offering insights into the amounts typically raised, the sources of these funds, and the crucial aspects that attract investors at different phases of a startup's growth.
3. The document elucidates the journey of startup funding, beginning with personal investments and small angel rounds to substantial venture capital injections as companies scale and mature.
4. It discusses the dynamics and expectations at each funding stage, the challenges startups face, and the criteria investors consider before committing their capital.
5. The guide also highlights the potential pitfalls and successes that startups may encounter as they navigate through these critical stages of growth.
State_of_ZK_Report_for_Q1_2024_1715952894.pdf
1.8 MB
State of ZK Report (Q1-2024)
The "State of ZK Report for Q1 2024" presents a comprehensive overview of the significant developments, trends, and innovations within the Zero-Knowledge (ZK) ecosystem during the first quarter of 2024.
This report highlights the progress made in optimizing SNARK/STARK systems, the notable rollouts of new ZK technologies, and the ongoing integration of ZK solutions into various blockchain projects.
1. One of the key advancements this quarter was the rollout of EIP 4844 with Ethereum’s Mainnet Dencun upgrade, which significantly lowered Layer 2 (L2) gas fees.
2. This development, along with the introduction of new projects and products by existing ZK companies, underscores the dynamic nature of the ZK landscape.
3. Emerging segments such as ZK Prover Marketplaces, zkVMs (non-zkEVMs), and Fully Homomorphic Encryption (FHE) projects are beginning to take shape.
4. While FHE is not traditionally classified under the ZK umbrella, its synergy with ZK technology offers new possibilities for creating private and secure execution environments.
5. The report also details the growth of ZK prover marketplaces, including the announcement of Nebra and Succinct’s Prover Network, joining other established players like Nexus, =nil, RISC Zero, and Gevulot in decentralizing ZK proof generation.
6. Additionally, significant launches, such as the Succinct Processor 1 (SP1) zkVM, have spurred discussions on benchmarking standards for zkVM performance.
7. Furthermore, new Layer 1 (L1) blockchains continue to explore ZK integrations, with notable activities in projects like Celestia and Bitcoin, which have recently seen the emergence of ZK rollup teams.
8. The report concludes with an optimistic outlook for 2024, highlighting the continued momentum and innovation within the ZK space.
The "State of ZK Report for Q1 2024" presents a comprehensive overview of the significant developments, trends, and innovations within the Zero-Knowledge (ZK) ecosystem during the first quarter of 2024.
This report highlights the progress made in optimizing SNARK/STARK systems, the notable rollouts of new ZK technologies, and the ongoing integration of ZK solutions into various blockchain projects.
1. One of the key advancements this quarter was the rollout of EIP 4844 with Ethereum’s Mainnet Dencun upgrade, which significantly lowered Layer 2 (L2) gas fees.
2. This development, along with the introduction of new projects and products by existing ZK companies, underscores the dynamic nature of the ZK landscape.
3. Emerging segments such as ZK Prover Marketplaces, zkVMs (non-zkEVMs), and Fully Homomorphic Encryption (FHE) projects are beginning to take shape.
4. While FHE is not traditionally classified under the ZK umbrella, its synergy with ZK technology offers new possibilities for creating private and secure execution environments.
5. The report also details the growth of ZK prover marketplaces, including the announcement of Nebra and Succinct’s Prover Network, joining other established players like Nexus, =nil, RISC Zero, and Gevulot in decentralizing ZK proof generation.
6. Additionally, significant launches, such as the Succinct Processor 1 (SP1) zkVM, have spurred discussions on benchmarking standards for zkVM performance.
7. Furthermore, new Layer 1 (L1) blockchains continue to explore ZK integrations, with notable activities in projects like Celestia and Bitcoin, which have recently seen the emergence of ZK rollup teams.
8. The report concludes with an optimistic outlook for 2024, highlighting the continued momentum and innovation within the ZK space.