DeFi: Governor Christopher J. Waller of the Federal Reserve Board recently gave a speech, "Centralized and Decentralized Finance: Substitutes or Complements?", at the Vienna Macroeconomics Workshop, Institute for Advanced Studies, Vienna, Austria.
Key Takeaways of the speech:
1. DeFi allows asset trading without intermediaries, distinguishing it from centralized finance, yet it also has applications that complement traditional finance.
2. DLT offers faster and more efficient recordkeeping, useful for 24/7 markets, and is being explored by traditional financial institutions.
3. Tokenizing assets and using DLT can speed up transactions and enable automated, secure trading through #smartcontracts, reducing #settlement and counterparty risks.
4. Smart contracts streamline transactions by automating multiple steps, enhancing #security and #efficiency in both #DeFi and #centralizedfinance (a conceptual sketch follows these takeaways).
5. Stablecoins, typically pegged to the US dollar, facilitate decentralized trading and have potential to reduce global #payment costs, though they require regulatory safeguards to address #risks.
6. DeFi poses unique risks, including the potential for funds to reach bad actors, raising questions about the need for #regulations similar to those in traditional finance.
7. DeFi technologies can enhance centralized finance by improving efficiency, benefiting households and businesses through a more effective financial system.
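To make takeaway 4 concrete: the risk reduction comes from atomicity, meaning both legs of a trade settle together or not at all. Below is a purely conceptual Python sketch of that idea; it is not an actual on-chain contract, and every name in it is hypothetical.

```python
# Conceptual sketch of atomic delivery-versus-payment (DvP) settlement,
# the mechanism behind takeaway 4. Purely illustrative -- not a real
# smart-contract API; all names are hypothetical.

class SettlementError(Exception):
    pass

class Ledger:
    """Toy token ledger mapping (account, asset) -> balance."""
    def __init__(self):
        self.balances = {}

    def credit(self, account, asset, amount):
        key = (account, asset)
        self.balances[key] = self.balances.get(key, 0) + amount

    def debit(self, account, asset, amount):
        key = (account, asset)
        if self.balances.get(key, 0) < amount:
            raise SettlementError(f"{account} lacks {amount} {asset}")
        self.balances[key] -= amount

def atomic_swap(ledger, buyer, seller, asset, qty, cash, price):
    """Settle both legs together or not at all, eliminating the risk of
    one party delivering while the counterparty defaults."""
    snapshot = dict(ledger.balances)  # emulate a transactional rollback
    try:
        ledger.debit(buyer, cash, price)   # payment leg
        ledger.debit(seller, asset, qty)   # delivery leg
        ledger.credit(seller, cash, price)
        ledger.credit(buyer, asset, qty)
    except SettlementError:
        ledger.balances = snapshot         # revert: no partial settlement
        raise

ledger = Ledger()
ledger.credit("alice", "USD", 100)
ledger.credit("bob", "BOND", 1)
atomic_swap(ledger, "alice", "bob", asset="BOND", qty=1, cash="USD", price=100)
```

On a real chain, the rollback comes for free: a reverted transaction leaves no state change, which is exactly what removes settlement and counterparty risk.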
Board of Governors of the Federal Reserve System
Centralized and Decentralized Finance: Substitutes or Complements?
Thank you for inviting me to speak today. I have participated in this conference for nearly 20 years and have often presented my research on monetary theory, banking, and payments. So, I believe this is the right audience to speak to regarding the role of…
Alibaba introduced Qwen2.5-Coder-32B-Instruct: A New Era in AI Coding
Meet the groundbreaking family of coding models that's revolutionizing AI-assisted programming!
The results are nothing short of incredible.
The flagship Qwen2.5-Coder-32B-Instruct achieves remarkable benchmark scores:
✨ HumanEval: 92.7
✨ MBPP: 86.8
✨ CodeArena: 68.9
✨ LiveCodeBench: 31.4
Key highlights that make it special:
- Outperforms GPT-4 in several benchmarks!
- Available in multiple sizes: 0.5B, 1.5B, 3B, 7B, 14B, and 32B
- Supports popular quantization formats: GPTQ, AWQ, GGUF
- Seamless integration with Ollama for local deployment (see the sketch after these highlights)
- Fully open source
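If you want to try it locally, a minimal sketch using Ollama's Python client might look like the following. The model tag is an assumption; check ollama.com/library for the exact name.

```python
# Minimal local-inference sketch via Ollama's Python client.
# Assumes the Ollama server is running and `ollama pull qwen2.5-coder:32b`
# has been done; the model tag is an assumption -- verify on ollama.com.
import ollama

response = ollama.chat(
    model="qwen2.5-coder:32b",
    messages=[{
        "role": "user",
        "content": "Write a Python function that checks whether a string is a palindrome.",
    }],
)
print(response["message"]["content"])
```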
Get your hands on it now:
📍 Hugging Face
📍 ModelScope
📍 Kaggle
📍 GitHub
Qwen
Qwen2.5-Coder Series: Powerful, Diverse, Practical.
Introduction Today, we are excited to open source the “Powerful”, “Diverse”, and “Practical” Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs.
Powerful: Qwen2.5-Coder…
AlphaFold 3 is now open source!
AlphaFold 3 is a revolutionary AI model developed by Google DeepMind and Isomorphic Labs that can predict the 3D structures and interactions of virtually all biological molecules (including proteins, DNA, RNA, and small molecules) with remarkable accuracy, achieving at least a 50% improvement over previous methods.
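For the curious, the open-sourced pipeline consumes a JSON description of the molecules to fold. A minimal sketch of building one in Python is below; the field names follow the repo's documented input format at the time of writing, so treat them as assumptions and check the README before use (model weights must be requested separately from Google DeepMind).

```python
# Sketch: building a minimal AlphaFold 3 input file for one protein chain.
# Field names are assumptions based on the repo's documented JSON format;
# verify against github.com/google-deepmind/alphafold3 before running.
import json

fold_input = {
    "name": "demo_protein",
    "modelSeeds": [1],
    "sequences": [
        {"protein": {"id": "A", "sequence": "MVLSPADKTNVKAAW"}}
    ],
    "dialect": "alphafold3",
    "version": 1,
}

with open("fold_input.json", "w") as f:
    json.dump(fold_input, f, indent=2)

# The repo's inference script then consumes this JSON; see its README for
# the exact invocation and for how to obtain the model weights.
```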
GitHub
GitHub - google-deepmind/alphafold3: AlphaFold 3 inference pipeline.
AlphaFold 3 inference pipeline. Contribute to google-deepmind/alphafold3 development by creating an account on GitHub.
Justin Drake proposed a new consensus-layer upgrade, "Beam Chain," at the Devcon conference; the community has dubbed it "Ethereum 3.0."
The proposal aims for faster block times, lower validator staking requirements, "chain snarkification," and quantum-security improvements.
Specifications are expected to be drafted in 2025, with the full testing phase beginning in 2027.
The Block
Justin Drake proposes 'Beam Chain,' an Ethereum consensus layer redesign
Ethereum Foundation researcher Justin Drake announced Beam Chain, a new consensus layer upgrade proposal for Ethereum at Devcon on Tuesday.
Sanofi, OpenAI, Formation debut patient recruiting tool, will use in Phase 3 multiple sclerosis studies
Muse is an AI tool for patient recruitment strategy and content creation.
AI systems like Muse could drastically reduce the cost and time of bringing new medicines to patients.
PR Newswire
Formation Bio collaborates with Sanofi and OpenAI to Introduce Muse, a first of its kind AI tool to accelerate patient recruitment…
/PRNewswire/ -- Today, Formation Bio, together with OpenAI and Sanofi, introduced Muse, an advanced AI-powered tool developed to accelerate and improve drug...
New paper on scaling laws in primate vision modeling
Researchers trained and analyzed 600+ neural networks to understand how bigger models & more data affect brain predictivity.
📄 2411.04330v1.pdf (1.4 MB)
Precision-Aware Scaling Laws: A New Perspective on Language Model Training and Inference
A groundbreaking paper from researchers at Harvard, Stanford, MIT, and CMU reveals crucial insights into the relationship between model precision, training data, and performance in language models.
Key Findings:
1. Post-Training Quantization Challenge
The researchers discovered a counterintuitive phenomenon: models trained on more data become increasingly sensitive to post-training quantization. This means that after a certain point, additional training data can actually harm performance if the model will be quantized for inference.
2. Optimal Training Precision
The study suggests that the current standard practice of training in 16-bit precision may be suboptimal. Their analysis indicates that 7-8 bits might be the sweet spot for training, challenging both current high-precision (16-bit) and ultra-low precision (4-bit) approaches.
3. Unified Scaling Law
The team developed a comprehensive scaling law (sketched schematically after this summary) that accounts for:
- Training precision effects
- Post-training quantization impacts
- Interactions between model size, data, and precision
4. Practical Implications
- Larger models can be trained effectively in lower precision
- The race to extremely low-precision training (sub-4-bit) may face fundamental limitations
- There's an optimal precision point that balances performance and computational efficiency
5. Methodology
The research is backed by extensive experimentation, including:
- 465+ pretraining runs
- Models up to 1.7B parameters
- Training datasets up to 26B tokens
This work provides valuable insights for ML engineers and researchers working on large language models, suggesting that precision choices should be carefully considered based on model size and training data volume rather than following a one-size-fits-all approach.
The findings have significant implications for future hardware design and training strategies, potentially influencing how we approach model scaling and efficiency optimization in the AI field.
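As a rough schematic, a unified law of this kind can be written in a Chinchilla-style form. The expression below is an illustrative paraphrase of the findings above, not the paper's exact parameterization; all symbols are defined in the comments.

```latex
% Illustrative schematic only -- not the paper's exact parameterization.
% N = parameters, D = training tokens, P = training precision (bits),
% P_post = inference precision after post-training quantization.
\begin{align}
  N_{\mathrm{eff}}(P) &= N\bigl(1 - e^{-P/\gamma}\bigr) \\
  L(N, D, P) &\approx A\,N_{\mathrm{eff}}(P)^{-\alpha} + B\,D^{-\beta} + E \\
  L_{\mathrm{deployed}} &= L(N, D, P) + \delta_{\mathrm{PTQ}}(N, D, P_{\mathrm{post}})
\end{align}
```

In a form like this, the quantization penalty δ_PTQ grows with D and shrinks with N, which is why additional pretraining data can hurt a model destined for low-precision inference (finding 1), and the saturating N_eff term is what produces an interior optimum near 7-8 bits once compute cost is weighed in (finding 2).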
❗️Endocisternal minimally invasive neural interfaces
In a first-of-its-kind demonstration, researchers from The University of Texas Medical Branch and Rice University delivered a wireless neural interface through a cistern, a space filled with cerebrospinal fluid (CSF) that provides an alternative to endovascular delivery.
They also showed neuromodulation, recording, and explantation!
Nature
Endocisternal interfaces for minimally invasive neural stimulation and recording of the brain and spinal cord
Nature Biomedical Engineering - Endocisternal neural interfaces with wireless and magnetoelectrically powered electronics that approach brain and spinal cord targets through spaces filled with...
Can a tiny startup’s 70-billion-parameter model beat OpenAI’s o1 model?
Nous Research just launched the Forge Reasoning Engine, and it even managed to beat o1 on the American Invitational Mathematics Examination (AIME).
Forge uses a combination of:
A) Monte Carlo Tree Search
B) Chain of Code
C) Mixture of Agents
D) Code Interpreter use
to get Nous’ Hermes 70B model close to o1’s performance on several math and science benchmarks (a minimal sketch of one of these techniques follows below).
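Here is a minimal mixture-of-agents sketch (technique C). The model names, single-provider setup, and prompts are illustrative assumptions; Forge's actual implementation is not public.

```python
# Minimal mixture-of-agents sketch. Model names and the single-provider
# setup are illustrative assumptions -- Forge's implementation is private.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def mixture_of_agents(question: str, proposers: list[str], aggregator: str) -> str:
    # 1) Several "proposer" models draft independent answers.
    drafts = [ask(m, question) for m in proposers]
    # 2) An aggregator model reads every draft and synthesizes one answer,
    #    trading extra inference-time compute for quality.
    numbered = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    return ask(
        aggregator,
        f"Question: {question}\n\n{numbered}\n\n"
        "Combine the strongest reasoning from these drafts into one final answer.",
    )

print(mixture_of_agents(
    "What is the sum of the first 50 positive odd integers?",
    proposers=["gpt-4o-mini", "gpt-4o-mini", "gpt-4o-mini"],
    aggregator="gpt-4o",
))
```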
This is a significant development, as it is one of the first inference-time scaling releases since o1.
They also point out that Forge allows “advancement in inference time scaling that can be applied to any model or a set of models”.
This means that they can swap out and upgrade the LLM piece over time, while keeping the rest of the Engine constant.
Nous is famous in the open source community for having released some of the best early open source fine tunes in 2023 and 2024.
Forge though is not open sourced, and is currently available via API to a small group of beta testers.
It is interesting to note that fairly small models may be able to scale to the intelligence of very large models just by taking the time to think more at inference.
Inference-time compute may finally level the playing field between the GPU-poor and the GPU-rich.
Try Nous Chat today here for free.
NOUS RESEARCH
Introducing the Forge Reasoning API Beta and Nous Chat: An Evolution in LLM Inference - NOUS RESEARCH
The Forge Reasoning API contains some of our latest advancements in inference-time AI research, building on our journey from the original Hermes model.
Supermaven is merging with Cursor
This union brings together two innovative forces in AI-powered development tools.
Who is Supermaven? Founded by Jacob Jackson, the pioneer behind Tabnine (2019) who previously worked at OpenAI, Supermaven has developed a lightning-fast, context-aware AI coding assistant that has been pushing the boundaries of what's possible in development tools.
Why does this matter?
• Combined expertise in AI and development tools
• Faster delivery of innovative features
• Shared vision for revolutionizing software development
• Enhanced capabilities through unified technologies
What's next?
The teams are already working on exciting improvements, including a next-generation Tab model featuring:
• Enhanced speed and responsiveness
• Superior context awareness
• Advanced intelligence for handling complex code changes
Cursor
Supermaven joins Cursor
We’re teaming up to build the next phase of AI coding.
Foerster Lab for AI Research announced Kinetix: an open-ended universe of physics-based tasks for RL
They use Kinetix to train a general agent on millions of randomly generated physics problems and show that this agent generalises to unseen handmade environments.
Kinetix can represent problems ranging from robotic locomotion and grasping, to classic RL environments and video games, all within a unified framework. This opens the door to training a single generalist agent for all these tasks.
GitHub
arXiv
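The headline recipe, training on an endless stream of randomly generated tasks and then testing zero-shot on handmade levels, can be sketched with a toy stand-in environment. Everything below is illustrative; Kinetix's real interface is JAX-based, so see the repo for the actual API.

```python
# Toy sketch of the "train on random tasks, evaluate on unseen handmade
# levels" recipe. ToyPhysicsEnv and GreedyAgent are illustrative stand-ins,
# not Kinetix's actual (JAX-based) API.
import random

class ToyPhysicsEnv:
    """Stand-in for a randomly generated task: reach a random goal on a line."""
    def __init__(self, seed):
        self.goal = random.Random(seed).randint(-5, 5)
        self.pos, self.t = 0, 0
    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos - self.goal        # observation: signed distance to goal
    def step(self, action):               # actions: -1, 0, +1
        self.pos += action
        self.t += 1
        done = self.pos == self.goal or self.t >= 20
        reward = 1.0 if self.pos == self.goal else 0.0
        return self.pos - self.goal, reward, done

class GreedyAgent:
    """Stand-in policy; a real setup would learn this with e.g. PPO."""
    def act(self, obs):
        return -1 if obs > 0 else (1 if obs < 0 else 0)

# Zero-shot evaluation on "handmade" seeds the agent never trained on.
agent = GreedyAgent()
for env in (ToyPhysicsEnv(seed) for seed in range(1000, 1005)):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(agent.act(obs))
        total += reward
    print(f"goal={env.goal:+d} solved={total > 0}")
```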
GitHub
GitHub - FLAIROx/Kinetix: Reinforcement learning on general 2D physics environments in JAX. ICLR 2025 Oral.
Reinforcement learning on general 2D physics environments in JAX. ICLR 2025 Oral. - FLAIROx/Kinetix
BlackRock announced that the BlackRock USD Institutional Digital Liquidity Fund (BUIDL), which is tokenized by Securitize and was initially launched on Ethereum in March 2024, will now expand to Aptos, Arbitrum, Avalanche, Optimism’s OP Mainnet, and Polygon.
Securitize
BlackRock Launches New BUIDL Share Classes Across Multiple Blockchains to Expand Access and Potential of BUIDL Ecosystem
OpenAI is preparing to launch a computer-use AI agent, codenamed “Operator,” that takes actions on a person’s behalf through a browser, such as writing code or booking travel.
Staff were told in an all-hands meeting today that it will be released in January.
Bloomberg.com
OpenAI Nears Launch of AI Agent Tool to Automate Tasks for Users
The new software, codenamed “Operator,” is set to be released in January.
a16z reported on the rise of intelligent automation: a market overview and startup guide
The intelligent automation market represents a significant transformation in how businesses handle operations. With the advent of AI, particularly large language models, we're seeing a shift from traditional RPA (Robotic Process Automation) to true intelligent automation.
Horizontal Solutions
- Core infrastructure providers offering universal tools
- Focus on fundamental capabilities like data extraction and web interactions
- Examples: Browserbase (web automation), Reducto (data extraction)
Vertical Solutions
Industries seeing rapid adoption:
- Healthcare: Patient referral management
- Legal: Document processing
- Supply Chain: Order tracking
- Finance: Transaction processing
- Compliance: Regulatory monitoring
Winning Formula for Startups
1. Market Entry Strategy:
- Target underdigitized industries
- Focus on high-volume, repetitive tasks
- Start with revenue-generating workflows
- Build processes that create data access advantages
2. Product Development:
- Begin with a narrow, well-defined problem
- Ensure high accuracy and reliability
- Create scalable solutions
- Focus on integration with existing systems
3. Team Requirements:
- Technical expertise in AI/ML
- Deep industry knowledge
- Startup experience
- Understanding of customer pain points
4. Key Success Factors:
- Replace manual administrative work
- Generate immediate business value
- Start at the beginning of workflows
- Build trust through consistent performance
The market is particularly attractive because:
- $250B+ business process outsourcing market
- 8M+ operations/information clerk roles
- Limited existing software solutions
- Growing enterprise acceptance of AI solutions
Microsoft released a neat Python library to run LLM-powered multi-agent simulations of people with personalities, interests, and goals.
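A minimal usage sketch, adapted from the repo's README examples (the exact API may have changed between versions, and an LLM API key must be configured per the project's instructions):

```python
# Persona-simulation sketch based on TinyTroupe's README examples; the
# exact API may differ across versions -- check the repo. Requires an
# OpenAI API key configured per the project's setup instructions.
from tinytroupe.agent import TinyPerson

# Define a persona by attaching traits the LLM will role-play.
lisa = TinyPerson("Lisa")
lisa.define("age", 28)
lisa.define("occupation", "data scientist")
lisa.define("interests", ["machine learning", "cooking"])

# The agent responds in character, powered by an LLM behind the scenes.
lisa.listen_and_act("Tell me about your typical workday.")
```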
GitHub
GitHub - microsoft/TinyTroupe: LLM-powered multiagent persona simulation for imagination enhancement and business insights.
LLM-powered multiagent persona simulation for imagination enhancement and business insights. - microsoft/TinyTroupe
Stripe launched an SDK built for AI agents
❗️LLMs can call payments, billing, issuing, and other APIs
❗️Integrates with Vercel, LangChain, and CrewAI
❗️Use any model via function calling, with per-token billing (a generic sketch follows below)
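For a flavor of the pattern, here is a generic function-calling sketch using the plain `openai` and `stripe` Python libraries. This is not the Stripe agent toolkit's own API, just the underlying mechanism it builds on.

```python
# Generic sketch of an LLM invoking a Stripe API via function/tool calling.
# NOT the Stripe agent toolkit's API -- just the underlying pattern, using
# the standard `openai` and `stripe` Python libraries.
import json
import stripe
from openai import OpenAI

stripe.api_key = "sk_test_..."  # placeholder test key
client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "create_payment_link",
        "description": "Create a Stripe payment link for a given price.",
        "parameters": {
            "type": "object",
            "properties": {"price_id": {"type": "string"}},
            "required": ["price_id"],
        },
    },
}]

msg = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Make a payment link for price_123."}],
    tools=tools,
).choices[0].message

# If the model chose to call the tool, execute it with the real Stripe SDK.
for call in msg.tool_calls or []:
    args = json.loads(call.function.arguments)
    link = stripe.PaymentLink.create(
        line_items=[{"price": args["price_id"], "quantity": 1}]
    )
    print(link.url)
```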
stripe.dev
Adding payments to your LLM agentic workflows
This post discusses integrating the Stripe agent toolkit with large language models (LLMs) to enhance automation workflows, enabling financial services access, metered billing, and streamlined operations across agent frameworks.
ChatGPT can now use your Apps on Mac
This is a first step toward ChatGPT seeing everything on your computer and having full control as an agent.
What you need to know:
—It can write code in Xcode/VSCode
—It can make a git commit in Terminal/iterm2
—If you give it permission of course
—Available right now to Plus and Team users
—Coming soon to Enterprise and Edu
—It’s an early beta
The next step beyond this would be to allow ChatGPT to see and control your entire desktop as an agent.
So App Usage could essentially be the OpenAI team doing bug fixes with human-in-the-loop feedback prior to an agent release.
Google DeepMind, Anthropic, and several other startups (some of which are in stealth) also reportedly have agent-type systems coming within weeks.
ChatGPT
ChatGPT on desktop
ChatGPT on your desktop. Chat about email, screenshots, files, and anything on your screen.
Tether announced the launch of Hadron, a platform designed to simplify the tokenization of everything from stocks, bonds, stablecoins, loyalty points, and more.
Hadron supports multiple blockchains, including Bitcoin layer-2 solutions such as Blockstream’s Liquid.
tether.io
Hadron by Tether Platform Brings Simplified Asset Tokenization to the Mass Market - Tether.io
Tether’s New User-Friendly Platform Will Allow the Tokenization of Anything 14 November 2024 – Tether, the largest company in the digital asset industry, today announced the launch of Hadron by Tether, a platform designed to simplify the tokenization of…