A newly revealed patent from Microsoft detailed Bing’s ‘Visual Search’.
The patent describes a reverse image search with personal results tailored to user preferences and interests.
⚡5
Integrated data visualization and analysis for workhorse biological assays
Make publication-quality figures in under a minute. No more fiddling with Excel, Prism, ggplot, and PowerPoint.
⚡4
Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
A 1B model fine-tuned on passkey instances of up to 5K sequence length solves the 1M-token passkey retrieval problem.
The future of attention just dropped, and it looks a lot like a state space model (finite size, continual updates)
Little doubt now that a mixture of architectures will support the long-term, gradually conditioned memory needed for highly capable agents
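The "finite size, continual updates" point is the heart of it. Here is a minimal NumPy sketch of the compressive-memory idea behind Infini-attention; the feature map and the memory read/write follow the paper's linear-attention formulation, but all names and shapes are illustrative, and the learned gate that mixes this with local attention is omitted:

```python
import numpy as np

def elu1(x):
    # ELU(x) + 1: a positive feature map used for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

d = 64  # per-head dimension (illustrative)

# The compressive memory is a fixed-size d x d matrix plus a normalizer
# vector, so memory cost is constant regardless of input length.
M = np.zeros((d, d))
z = np.zeros(d)

def retrieve(M, z, Q):
    """Read long-range context for a segment's queries."""
    sQ = elu1(Q)
    return (sQ @ M) / (sQ @ z + 1e-6)[:, None]

def update_memory(M, z, K, V):
    """Fold a segment's keys/values into memory (linear-attention write)."""
    sK = elu1(K)
    return M + sK.T @ V, z + sK.sum(axis=0)

# Stream segments: read the old context first, then write the new segment.
rng = np.random.default_rng(0)
for _ in range(4):
    Q, K, V = (rng.standard_normal((128, d)) for _ in range(3))
    mem_out = retrieve(M, z, Q)       # (128, d) long-range read
    M, z = update_memory(M, z, K, V)  # memory size unchanged
```

However many segments stream through, `M` and `z` never grow, which is exactly the state-space-like property noted above.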
arXiv.org
Leave No Context Behind: Efficient Infinite Context Transformers...
This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed...
⚡4
MR_for_autism_1712848311.pdf
2.6 MB
Eye tracking as a window onto conscious and non-conscious processing in the brain.
The goals are two-fold and relatively simple:
1. propose and test a method for familiarizing individuals with severe autism spectrum disorder (ASD) with the HoloLens 2 headset and the use of MR technology through a tutorial.
2. obtain quantitative learning indicators in MR, such as execution speed and eye tracking, by comparing individuals with ASD to neurotypical individuals.
Over 80% of individuals with ASD successfully familiarized themselves with MR after several sessions.
In addition, the visual activity of individuals with ASD did not differ from that of neurotypical individuals when they successfully familiarized themselves.
This opens a lot of doors for potential learning opportunities in this population.
⚡4
Apple nears production of its first M4 chips with AI upgrades and plans to bring the new chips to all of its Macs, including new MacBook Pros and Airs, Mac Pro, Mac Studio, Mac mini and iMac, later this year and through 2025.
Bloomberg.com
Apple Plans to Overhaul Entire Mac Line With AI-Focused M4 Chips
Apple Inc., aiming to boost sluggish computer sales, is preparing to overhaul its entire Mac line with a new family of in-house processors designed to highlight artificial intelligence.
⚡4
Sanctuary AI announced a partnership with European automaker Magna.
Under the partnership, Sanctuary AI will develop general-purpose AI robots for deployment in Magna’s manufacturing operations.
Bloomberg.com
Robotics Startup Sanctuary Signs Deal for Factory Tests, Funds
Humanoid robot-making startup Sanctuary AI has struck a deal with a major auto-parts manufacturer for deployment in its factories and additional equity.
👍4⚡3
Forbes released their AI 50 list.
Here’s a categorized list of the companies, including their current funding amounts.
⚡6
Germany published the report "Generative AI Models - Opportunities and Risks for Industry and Authorities."
Quotes & comments:
1. “LLMs are trained based on huge text corpora. The origin of these texts and their quality are generally not fully verified due to the large amount of data. Therefore, personal or copyrighted data, as well as texts with questionable, false, or discriminatory content (e.g., disinformation, propaganda, or hate messages), may be included in the training set. When generating outputs, these contents may appear in these outputs either verbatim or slightly altered (Weidinger, et al., 2022). Imbalances in the training data can also lead to biases in the model" (page 9)
2. “If individual data points are disproportionately present in the training data, there is a risk that the model cannot adequately learn the desired data distribution and, depending on the extent, tends to produce repetitive, one-sided, or incoherent outputs (known as model collapse). It is expected that this problem will increasingly occur in the future, as LLM-generated data becomes more available on the internet and is used to train new LLMs (Shumailov, et al., 2023). This could lead to self-reinforcing effects, which is particularly critical in cases where texts with abuse potential have been generated, or when a bias in text data becomes entrenched. This happens, for example, as more and more relevant texts are produced and used again for training new models, which in turn generate a multitude of texts (Bender, et al., 2021)." (page 10)
3. “The high linguistic quality of the model outputs, combined with user-friendly access via APIs and the enormous flexibility of responses from currently popular LLMs, makes it easier for criminals to misuse the models for a targeted generation of misinformation (De Angelis, et al., 2023), propaganda texts, hate messages, product reviews, or posts for social media."
According to the report, special attention should be given to the following aspects:
➵ Raising awareness of users;
➵ Testing;
➵ Handling sensitive data;
➵ Establishing transparency;
➵ Auditing of inputs and outputs;
➵ Paying attention to (indirect) prompt injections;
➵ Selection and management of training data;
➵ Developing practical expertise.
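The self-reinforcing dynamic in the second quote can be illustrated with a toy simulation; this is a deliberately simplified stand-in for LLM retraining, not the setup from Shumailov et al. Each "generation" fits a Gaussian to a finite sample drawn from the previous generation's model, so estimation error compounds and nothing pulls the model back toward the original distribution:

```python
import random
import statistics

random.seed(1)

# Generation 0 is the "real" data distribution; every later generation is a
# model fitted only to synthetic data sampled from its predecessor.
mu, sigma = 0.0, 1.0
history = [(mu, sigma)]
for generation in range(20):
    sample = [random.gauss(mu, sigma) for _ in range(100)]  # finite training set
    mu = statistics.fmean(sample)       # next "model" = fit to synthetic data
    sigma = statistics.stdev(sample)
    history.append((mu, sigma))

print(f"gen 0 sigma = {history[0][1]:.3f}, gen 20 sigma = {history[-1][1]:.3f}")
```

With smaller samples or more generations the drift away from the original distribution tends to be more pronounced; the point is only that the feedback loop has no corrective force.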
⚡4
Google DeepMind published new work on using reinforcement learning to train robots to be more agile (like soccer players).
They tested a new method of adding disruptive forces and randomness, which led to impressive results compared to the baseline.
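The "disruptive forces and randomness" recipe is, in spirit, the familiar perturbation/domain-randomization approach from sim-to-real RL. A generic sketch follows; the `DummyEnv`, parameter names, and probabilities are all illustrative, not DeepMind's actual training setup:

```python
import random

class DummyEnv:
    """Minimal stand-in environment so the wrapper below is runnable."""
    def __init__(self):
        self.friction = 1.0
        self.pushes = 0
    def reset(self):
        return 0.0                      # observation
    def apply_push(self, force):
        self.pushes += 1                # a real sim would perturb the torso
    def step(self, action):
        return 0.0, 0.0, False          # obs, reward, done

class PerturbedEnv:
    """Randomizes physics per episode and applies random pushes per step."""
    def __init__(self, base_env, push_prob=0.05, push_scale=1.0):
        self.env = base_env
        self.push_prob = push_prob
        self.push_scale = push_scale

    def reset(self):
        # Per-episode domain randomization: the policy never trains against
        # one exact simulator, which helps it survive real-world variation.
        self.env.friction = random.uniform(0.5, 1.5)
        return self.env.reset()

    def step(self, action):
        # Occasional disruptive external force, as in push-recovery training.
        if random.random() < self.push_prob:
            self.env.apply_push(random.gauss(0.0, self.push_scale))
        return self.env.step(action)

random.seed(0)
env = PerturbedEnv(DummyEnv(), push_prob=0.5)
obs = env.reset()
for _ in range(100):
    env.step(0)
```

A policy trained under such a wrapper is graded against a "clean" baseline trained without perturbations, which is the comparison the paper reports.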
Science Robotics
Learning agile soccer skills for a bipedal robot with deep reinforcement learning
OP3 humanoid robots learned to play agile soccer using deep reinforcement learning.
⚡4
Researchers at Tsinghua University in China have developed a revolutionary new AI chip that uses light instead of electricity to process data.
Dubbed “Taichi,” the chip is reportedly over 1,000 times more energy-efficient than Nvidia’s high performance H100 GPU chip.
Interesting Engineering
Light-based chip: China's Taichi could power artificial general intelligence
Researchers have developed a highly energy-efficient photonic AI chip called Taichi, which could accelerate the development of advanced computing solutions.
⚡5🔥2
The AI Index 2024 annual report by Stanford University is finally here.
This is the resource that will shape conversations on the topic in the coming months, highlighting the latest technical developments, their impact on global organizations, society, and research, and the pressing need for clear regulation and governance.
⚡4
Executive exodus and a 10% cut to workforce at Tesla
Drew Baglino resigned from Tesla. Baglino had been at Tesla since 2006 and was SVP of Powertrain/Energy Engineering. He was one of just four named executive officers and integral to the work the company was doing on everything from EVs to energy storage and next gen 4680 cells. Drew posted on social media a few hours after our story to confirm his departure.
Tesla's Public Policy chief Rohan Patel also has left the company, we reported.
When you take into account that this follows key names in Tesla's semiconductor team leaving earlier this year and Zach Kirkhorn leaving in August, it's a lot of intellectual capital and experience out the door.
Things are actually going well, particularly in the energy division, and one reason for Baglino's departure is that he felt the place was in good hands.
Last night Musk told staff around the world Tesla would cut 10% or more of its global workforce. The reasons are clear: pursuit of cost cuts and productivity in a tough environment. But Tesla has grown quickly, and Musk cited the number of duplicate roles in his reason for the RIF. 10% of Tesla's staff is around 14,000 people, so it's sizable, the company's biggest RIF ever.
Bloomberg.com
Tesla Executive Baglino Leaves as Musk Loses Another Top Deputy
Two of Tesla Inc.’s top executives have left in the midst of the carmaker’s largest-ever round of job cuts, as slowing electric-vehicle demand leads the company to reduce its global headcount by more than 10%.
⚡4
Paradigm announced the open-source Reth AlphaNet.
Reth AlphaNet is an OP Stack-compatible testnet rollup that maximizes Reth performance and enables experimentation of bleeding edge Ethereum Research.
Reth AlphaNet is a testnet rollup built on OP Stack & OP Reth.
Reth AlphaNet is aimed at bleeding-edge experimentation with Ethereum research and ships with three EIPs not available anywhere else yet: EIP-3074, EIP-7212, and EIP-2537.
These EIPs are built with best-practices in mind, are optimized, and tested.
Paradigm
Reth AlphaNet - Paradigm
Paradigm is a research-driven crypto investment firm that funds companies and protocols from their earliest stages.
This is Microsoft's Asia team overtaking the original GPT-4 with their Evol-Instruct tuning.
Today they are announcing WizardLM-2, next generation state-of-the-art LLM.
New family includes three cutting-edge models: WizardLM-2 8x22B, 70B, and 7B - demonstrates highly competitive performance compared to leading proprietary LLMs.
Model weights.
wizardlm.github.io
⚡4
Hong Kong approves first batch of spot bitcoin, ether ETFs in drive to become crypto hub
Hong Kong's Securities and Futures Commission approved several spot bitcoin and ether ETFs.
China Asset Management said it will collaborate with OSL and BOCI International to offer spot bitcoin and ether ETFs, with OSL serving as a trading and sub-custodian partner.
Harvest Global said it received in-principle approval for two spot crypto ETFs, also issued in collaboration with OSL.
Bosera and HashKey Capital obtained conditional approval to launch a spot bitcoin ETF and a spot ether ETF, expected to allow investors to subscribe for shares using bitcoin and ether directly.
The asset managers did not disclose the timeline of their launch.
Hong Kong’s crypto-friendly policies contrast sharply with mainland China’s crackdown following the establishment of a licensing regime for crypto trading platforms in June 2023.
Meanwhile, Bloomberg ETF analyst Eric Balchunas said the Hong Kong crypto ETFs will be "lucky" to attract $500 million in total assets under management.
The Block
Hong Kong approves first batch of spot bitcoin, ether ETFs in drive to become crypto hub
Hong Kong approved several spot bitcoin ETFs and spot ether ETFs managed by China Asset Management, Harvest, Bosera and HashKey on Monday.
⚡3
Global Digital Health Market Size
The global digital health market is poised to reach USD 946.0 billion by 2030, with a projected CAGR of 21.9% from 2024 to 2030.
North America emerged as a dominant player, commanding a significant revenue share of 38.2% in the digital health market. This was attributed to the region's rapid development of healthcare IT infrastructure, the emergence of startups, increased availability of funding options, and enhanced technological literacy among healthcare stakeholders.
Market Drivers
- Increasing adoption of digital healthcare
- Shortage of medical professionals and increasing demand for healthcare
- Rise in artificial intelligence, IoT, and big data
- Growing adoption of mobile health applications
Market Restraints
1. Cybersecurity and privacy concerns
2. Lack of healthcare infrastructure
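The headline figures imply a base-year market size that is easy to back out: growing at 21.9% per year for the six years from 2024 and landing on USD 946.0 billion in 2030 implies a 2024 market of roughly USD 288 billion. A quick check:

```python
# Back out the implied 2024 market size from the reported 2030 target
# and CAGR: size_2024 = size_2030 / (1 + r) ** years.
target_2030 = 946.0        # USD billions (from the report)
cagr = 0.219
years = 2030 - 2024        # 6 compounding periods

implied_2024 = target_2030 / (1 + cagr) ** years
print(f"implied 2024 market size: ~{implied_2024:.0f} USD billion")  # ~288
```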
Yahoo Finance
Global Digital Health Market Size, Share & Trends Analysis Report 2024-2030: Seizing Opportunities and Revolutionizing Care with…
Global Digital Health Market Global Digital Health Market Dublin, April 11, 2024 (GLOBE NEWSWIRE) -- The "Global Digital Health Market Size, Share & Trends Analysis Report by Technology (Healthcare Analytics, mHealth), Component (Hardware, Software, Services)…
⚡1
Navigating_the_Global_Crypto_Landscape_with_PwC_2024_1713255174.pdf
11.4 MB
Navigating the Global Crypto Landscape with PwC: 2024 Outlook
Google announced Probing the 3D Awareness of Visual Foundation Models
Recent advances in large-scale pretraining have yielded visual foundation models with strong capabilities. Not only can recent models generalize to arbitrary images for their training task, their intermediate representations are useful for other visual tasks such as detection and segmentation.
🆒1
Adobe introduced a slew of new AI capabilities coming to Adobe Premiere Pro.
- New AI-powered tools in Premiere Pro
- A new proprietary AI model called Firefly Video
- Third-party AI model integrations with OpenAI, Runway, and Pika Labs.
Adobe is doing what they do best... No one expected that. Sora, Runway, and Pika as part of a single piece of software is massive. A major highlight from the video was that Sora can generate multiple versions of a video from one prompt in a single API call.
Adobe
Professional AI video editing software | Adobe Premiere
Spend less time on tedious video editing tasks like changing color, adjusting audio, or extending clips with professional AI video editing tools in Adobe Premiere.
Mistral seeks a $5B valuation just a few months after being valued at $2B.
The possible back-to-back financing reflects investors’ continued appetite for certain AI startups and the large amount of capital such startups need to compete with Google, OpenAI and other leaders in the field.
It’s unclear which investors Mistral has spoken to about a new funding round. Its existing investors include Andreessen Horowitz, Lightspeed Venture Partners and Microsoft, which has also backed OpenAI but with far more capital.
A $5 billion valuation would stand out even among AI startups that have fetched steep prices on modest revenue. Cohere, which also develops LLMs, has been in talks to raise money at a $5 billion valuation despite generating revenue at an annualized pace of about $22 million.
Mistral charges customers that use an application programming interface to access its large language models. The models, most of which are also available for free, have surged in popularity among application developers.
Mistral is banking on its ability to convince AI customers in Europe to work with a locally based firm rather than with AI providers based in other parts of the world.
Named after a northern winter wind, Mistral was founded a year ago by former DeepMind researcher Arthur Mensch and former Meta AI scientists Timothée Lacroix and Guillaume Lample. Unlike OpenAI, Mistral is open-sourcing its models and has said that it’s developing its products in line with stricter European regulations on the safe development of such software.
Mistral experienced backlash from some researchers over its first open-source model, launched last year, for lacking the safety features to prevent users from asking for advice on how to commit suicide, for instance. Mistral has also admitted to using Meta’s open-source AI, Llama 2, to create its own AI, only disclosing that fact after the information leaked.
In February, Microsoft took a tiny minority stake in Mistral. Microsoft said it would offer Mistral’s AI models, including its most advanced one, Mistral Large, on its Azure cloud rental service. Mistral Large was designed to compete with OpenAI’s most advanced model, GPT-4.
The Information
Mistral, an OpenAI Rival in Europe, in Talks to Raise Capital at a $5 Billion Valuation
Mistral, a Paris-based startup developing artificial intelligence that is open source, has been speaking to investors about raising several hundred million dollars at a valuation of $5 billion, according to a person with direct knowledge. The company, which…
👍1