Crypto M - Crypto News
Your #1 destination for the latest and most unbiased market news on Bitcoin, Ethereum, NFT, Fintech, Web3, DeFi, and Blockchain.
🚀 Nvidia's AI Chip Performance Surpasses Moore's Law, Says CEO

According to PANews, Nvidia CEO Jensen Huang said during his CES keynote that the performance of the company's AI chips is advancing faster than Moore's Law. He highlighted that Nvidia's latest data center superchip, the GB200 NVL72, is 30 to 40 times faster at AI inference tasks than the previous-generation H100, and that overall chip performance has increased 1,000-fold over the past decade.

Huang emphasized that Nvidia's ability to surpass Moore's Law is due to synchronized innovations across architecture, chips, systems, libraries, and algorithms. He also predicted that as chip performance continues to improve, the costs associated with high-computation AI inference models, such as OpenAI's o3 model, will gradually decrease.


#Nvidia #AI #MooresLaw #Superchip #GB200NVL72 #H100 #ChipPerformance #AIInference #Innovation #TechNews
🚀 NVIDIA CEO Acknowledges Misunderstanding of DeepSeek R1

According to PANews, NVIDIA CEO Jensen Huang admitted to having completely misunderstood DeepSeek R1. He emphasized that demand for computing remains very high, with artificial intelligence inference further increasing the need for computational resources.

#NVIDIA #JensenHuang #DeepSeekR1 #artificialintelligence #computing #AIInference
🚀 Telegram Launches Decentralized Privacy Computing Network Cocoon

According to BlockBeats, Telegram founder Pavel Durov announced the official launch of Cocoon, a decentralized privacy computing network. Cocoon is composed of three main components: clients, proxies, and work nodes.

Clients initiate work requests to proxies and pay fees upon completion. Proxies route these requests to work nodes, selecting nodes based on device model, load, and reputation. Proxies then pay nodes from the client fees for the TEE-protected work, with nodes earning commissions. Telegram plans to allow anyone to run their own proxy, aiming for complete decentralization.

Work nodes execute AI inference requests within TEE-protected virtual machines and receive payment from proxies. Anyone with a GPU can become a work node and earn $TON by contributing AI computing power to the decentralized network. By running the Cocoon protocol stack on TEE-supported GPU servers, private and verifiable AI model execution is possible, with transparent $TON payments for processed requests.
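The client, proxy, and work-node flow described above can be sketched roughly as follows. This is an illustrative model only: the scoring formula for device model, load, and reputation, the fee split, and all names and numbers are assumptions based on the description, not Cocoon's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class WorkNode:
    name: str
    device_score: float  # relative capability of the GPU model
    load: float          # current utilization, 0.0 (idle) to 1.0 (saturated)
    reputation: float    # 0.0 to 1.0, built up from completed jobs
    earned_ton: float = 0.0

def run_inference(node: WorkNode, request: str) -> str:
    # Placeholder for the TEE-protected AI inference a node performs.
    return f"result({request})"

@dataclass
class Proxy:
    nodes: list
    proxy_share: float = 0.1  # hypothetical split; the real fee economics are not public
    earned_ton: float = 0.0

    def select_node(self) -> WorkNode:
        # Score candidates by device model, load, and reputation, as in the
        # description above; the weighting here is purely illustrative.
        return max(self.nodes,
                   key=lambda n: n.device_score * (1.0 - n.load) * n.reputation)

    def route(self, request: str, fee_ton: float) -> str:
        node = self.select_node()
        result = run_inference(node, request)
        # The proxy pays the node out of the client's fee and keeps a share.
        node.earned_ton += fee_ton * (1.0 - self.proxy_share)
        self.earned_ton += fee_ton * self.proxy_share
        return result

# A client submits one request and pays 0.5 TON on completion.
nodes = [WorkNode("a", 1.0, 0.2, 0.90), WorkNode("b", 0.8, 0.1, 0.95)]
proxy = Proxy(nodes)
output = proxy.route("translate message", fee_ton=0.5)
```

Under this toy scoring, node "a" wins the request (0.72 vs 0.684), earns 0.45 TON, and the proxy keeps 0.05 TON.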

Durov noted that some AI-related Telegram features, such as message translation, are already partially powered by Cocoon. He also cited voice-to-text and summarization, indicating that Cocoon has practical applications from the start. Telegram mini apps are expected to drive further demand for Cocoon's network.

Unlike previous Telegram NFT projects, Cocoon integrates blockchain and uses $TON as its payment method, focusing on decentralized AI computing without issuing a new token. This reflects growing privacy concerns around centralized AI companies; in the crypto space, privacy projects increasingly emphasize "data sovereignty protection."

Currently, Cocoon's TVL is approximately 4,487 TON, with 30 work nodes, 18 proxies, and 12 clients, indicating it is still at an early stage. Supported by Telegram's built-in functionality and TON incentives, Cocoon's future trajectory remains to be seen as it seeks to differentiate itself from other crypto computing projects.


#Telegram #Cocoon #DecentralizedPrivacy #AIComputing #Blockchain #PrivacyComputing #TON #TEE #Crypto #DataSovereignty #AIInference #TelegramFeatures #Decentralization #CryptoProjects
🚀 Gonka's AI Network Sees Significant Surge in Computing Power

According to PANews, the decentralized AI inference network Gonka has seen a substantial increase in computing power, with network capacity soaring nearly 20-fold to surpass 10,000 NVIDIA H100 equivalents, reaching 10,729 as of December 17. This scale is comparable to a large national AI computing center or a major cloud provider's core AI cluster, and can support training large-scale models and high-throughput inference for even larger ones.

Gonka achieves this by integrating global GPU resources in a decentralized manner, eliminating the need for centralized data centers and forming a scalable high-performance AI computing network. This milestone signifies Gonka's entry into the ranks of top-tier global AI infrastructure networks, with the capability to offer commercial-grade API services.

Currently, Gonka supports five mainstream AI inference models. Since its mainnet launch three months ago, total inference usage has grown to nearly 100 million tokens per epoch, with the popular model Qwen3-235B-Instruct alone accounting for approximately 30 million tokens daily, pointing to exponential growth potential. The network has attracted nearly 600 active nodes across more than 30 countries and regions, with over 2,000 daily users of its AI inference services. Data shows that Gonka's inference usage has grown faster than its node count and computing power, demonstrating strong market demand for its decentralized inference services and the viability of its business model.

Previously, Gonka, recognized as an efficient decentralized AI inference and training network, secured a $50 million investment from Bitfury, with backing from OpenAI investor Coatue and Solana investor Slow Ventures. It is regarded as one of the most promising emerging infrastructures in the AI × DePIN sector.


#Gonka #AIinference #computingpower #decentralizednetwork #GPUresources #AItraining #cloudcomputing #AIinfrastructure #NVIDIAH100 #Qwen3 #dePIN #Bitfury #OpenAI #investment
🚀 Baseten Secures $300 Million Funding, Valuation Reaches $5 Billion

Baseten, a startup specializing in artificial intelligence inference, has raised $300 million in funding, according to ChainCatcher. Citing informed sources, The Wall Street Journal reported that the company's valuation has now reached $5 billion, nearly double its previous valuation.

The funding round was led by venture capital firm IVP and Alphabet's independent growth fund, CapitalG. Chip giant Nvidia also participated in the investment, contributing $150 million as part of the deal. This transaction highlights Nvidia's strategic focus on startups within the AI inference sector.

As the industry's focus shifts from training models to large-scale operations and inference—where AI models generate outputs based on inputs—Nvidia is increasing its investments in related startups while continuing to support its AI chip clientele.


#Baseten #ArtificialIntelligence #Funding #VentureCapital #IVP #CapitalG #Nvidia #AIInference #Startup #Valuation #AI #Investment #Technology
🚀 SpaceX Plans to Launch 1 Million Satellites for Orbital Data Center Network

SpaceX has filed an application with the U.S. Federal Communications Commission (FCC) to launch up to one million satellites, aiming to create an orbital data center network around Earth. According to Jin10, the company submitted the documents on Friday evening, describing the project as a satellite constellation with "unprecedented computing capabilities" designed to support advanced AI models and their applications.

In the eight-page filing, SpaceX outlined its vision for the "orbital data center system": to deliver large-scale AI inference and data processing to billions of users worldwide, it aims to deploy up to one million satellites operating in multiple narrow orbital layers, each no thicker than 50 kilometers.

#SpaceX #satellites #orbitaldatacenter #AI #FCC #satelliteconstellation #computingcapabilities #AIinference #dataprocessing
🚀 AI TRENDS | Vitalik Buterin Explores Secure AI Personal Use Solutions

Ethereum co-founder Vitalik Buterin has detailed his exploration of localized, private, and secure AI personal use solutions. According to PANews, Buterin highlighted significant privacy and security concerns within the current AI landscape, including local open-source AI. He pointed out vulnerabilities such as OpenClaw agents modifying critical settings without human confirmation, malicious external inputs easily taking over user instances, and some agent skills even containing harmful instructions.

Buterin advocates prioritizing local inference and sandbox isolation for all large language models (LLMs) and documents. He ran tests on hardware including an NVIDIA 5090 laptop and an AMD Ryzen AI Max Pro, serving the Qwen3.5:35B model through llama-server on NixOS.
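A local setup of the kind described, a model served by llama-server and queried over its OpenAI-compatible HTTP API, could be sketched roughly as follows. The port, model name, and parameters are illustrative assumptions, not Buterin's actual configuration; only the `/v1/chat/completions` endpoint shape follows llama.cpp's server conventions.

```python
import json
import urllib.request

LLAMA_SERVER = "http://127.0.0.1:8080"  # assumed default llama-server port

def build_chat_request(prompt: str, model: str = "qwen3.5-35b") -> dict:
    # Payload in the OpenAI chat-completions format that llama-server accepts.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_llm(prompt: str) -> str:
    # All inference stays on the local machine: no prompt or document data
    # leaves the host, which is the privacy property being pursued here.
    req = urllib.request.Request(
        LLAMA_SERVER + "/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request("Summarize this message thread.")
```

Calling `ask_local_llm` requires a running llama-server instance loaded with a local model file; sandboxing that process (for example in a NixOS container) is a separate step.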


#AI #VitalikButerin #Ethereum #privacy #security #AItrends #localAI #opensourceAI #LLM #sandbox #NixOS #Qwen3 #NVIDIA5090 #AMD #AIinference #ETH