All about AI, Web 3.0, BCI
3.41K subscribers
735 photos
26 videos
161 files
3.2K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
The agentic commerce market map is here.

Get the lay of the land and see who the different players are.

Next up: who wins from agentic payments.
❀3πŸ”₯3πŸ‘2
Google released TimesFM, a Time Series Foundation Model: a 200M-parameter model that can forecast time-series data it has never seen before, with no additional fine-tuning required.

Time-series forecasting is required everywhere - retail, finance, healthcare, etc.
And for the longest time, this was the domain of traditional statistical methods. Then deep learning models came along and did better, but they involved long training and validation cycles before you could even test them on new data.

The architecture is decoder-only, the same idea as GPT. Instead of words, it works with "patches" - groups of contiguous time-points treated as tokens. The model predicts the next patch from all the ones before it.
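The patching idea is easy to see in miniature. Here's a sketch in numpy; the patch length is an assumption for illustration, not TimesFM's actual tokenizer settings:

```python
import numpy as np

def to_patches(series, patch_len=32):
    """Split a 1-D series into contiguous, non-overlapping patches.

    Each patch plays the role of a 'token' for the decoder-only model.
    patch_len here is hypothetical; TimesFM's real patch sizes differ.
    """
    n = len(series) // patch_len * patch_len   # drop the ragged tail
    return series[:n].reshape(-1, patch_len)

# Toy series: 128 points -> 4 patches of 32.
series = np.sin(np.linspace(0, 8 * np.pi, 128))
patches = to_patches(series)
print(patches.shape)  # (4, 32)

# Autoregressive forecasting then means: given patches[:t], predict
# patches[t] (a whole block of future points at once), exactly like
# next-token prediction over words.
context, target = patches[:-1], patches[-1]
print(context.shape, target.shape)  # (3, 32) (32,)
```

Predicting a whole patch per step is also why inference is fast: one forward pass emits many future time-points at once.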

The model was pre-trained on 100 billion real-world time-points, mostly from Google Trends and Wikipedia Pageviews - which naturally capture a huge variety of patterns across domains.

On benchmarks, zero-shot TimesFM matches PatchTST and DeepAR, models that were explicitly trained on those datasets, and even beats GPT-3.5 on forecasting tasks despite being far smaller.

The model is open on HuggingFace and GitHub.
πŸ”₯5πŸ‘3🀣3🍌1
Meta introduced PAHF: continual personalization, where agents learn from user feedback to stay in sync with user preferences.

PAHF is a 3-step loop:

1. Pre-Action: Ask when preferences are unclear
2. Action: Act using retrieved user memory
3. Post-Action: Use feedback to update/correct the memory
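The three steps above can be sketched as a loop. Everything here, the function names, the memory layout, the confidence threshold, is an assumption for illustration, not Meta's actual implementation:

```python
# Minimal sketch of a PAHF-style loop. All names and the confidence
# threshold are assumptions, not Meta's actual API.

memory = {}  # user preference memory: task -> preference

def preference_confidence(task):
    """How sure are we about the user's preference for this task?"""
    return 1.0 if task in memory else 0.0

def run_pahf_step(task, ask_user, do_task, get_feedback, threshold=0.5):
    # 1. Pre-Action: ask when preferences are unclear
    if preference_confidence(task) < threshold:
        memory[task] = ask_user(task)
    # 2. Action: act using retrieved user memory
    result = do_task(task, memory.get(task))
    # 3. Post-Action: use feedback to update/correct the memory
    feedback = get_feedback(result)
    if feedback is not None:
        memory[task] = feedback
    return result

# Toy run with stub callbacks.
out = run_pahf_step(
    "email_tone",
    ask_user=lambda t: "formal",
    do_task=lambda t, pref: f"drafted with {pref} tone",
    get_feedback=lambda r: "casual",   # user corrects the agent
)
print(out)                   # drafted with formal tone
print(memory["email_tone"])  # casual
```

The point of the design is that the memory is corrected after every action, so the next run starts from the updated preference.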
πŸ”₯2πŸ₯°2πŸ’―2
Google's Cybersecurity 2026 Forecast Report warns of a "Shadow Agent" crisis.

These AI agents, deployed by employees without corporate oversight, can create invisible pipelines for sensitive information, leading to data leaks, compliance violations, and IP theft.
πŸ‘3πŸ”₯3❀2
OpenAI added WebSocket support to their Responses API to make AI agents run much faster.

This update cuts waiting time by up to 40% when the AI repeatedly uses external tools.
πŸ‘3πŸ”₯3❀2
The largest real-world AI medical device trial was just published. The results are complicated.

Eko Health, a leader in AI-powered cardiac and pulmonary disease detection, announced the publication of the TRICORDER study in The Lancet.

205 NHS primary care practices. 1.5 million patients. Eko Health's AI-enabled stethoscope vs. standard care.

When clinicians actually used the AI stethoscope, detection rates jumped:

β€’ Heart failure: 2.3X
β€’ Atrial fibrillation: 3.5X
β€’ Valvular heart disease: 1.9X
The algorithm works. No question.

The problem:

The intention-to-treat analysis showed no significant difference between intervention and control groups.

Translation: on average, across all practices, patients weren't diagnosed any better.

Why? Implementation gaps.

The AI stethoscope improved detection dramatically when used. But adoption was inconsistent. Workflow integration failed. Some clinicians ignored the alerts. Some forgot the device. Some didn't trust it.

The algorithm was sound. The humans were the bottleneck.

The deeper lesson:

This is AI's dirty secret in healthcare. We obsess over model performance β€” AUC, sensitivity, specificity. But the real challenge isn't building the model. It's getting clinicians to use it.

An algorithm with 95% accuracy that sits in a drawer is worse than one with 80% accuracy that's actually deployed.
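That claim is just back-of-envelope arithmetic: expected catch rate is sensitivity times how often the tool actually gets used. The numbers below are made up for illustration, not from the TRICORDER study:

```python
# Illustrative numbers only -- not from the TRICORDER study.
# Expected catch rate = sensitivity x fraction of encounters
# where the tool is actually used.

shelf_model = 0.95 * 0.10   # 95% sensitivity, used 10% of the time
used_model  = 0.80 * 0.70   # 80% sensitivity, used 70% of the time

print(round(shelf_model, 3))  # 0.095 -- the "accurate" model in a drawer
print(round(used_model, 2))   # 0.56  -- the "worse" model that gets used

assert used_model > shelf_model
```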
❀1πŸ”₯1πŸ‘1
Anthropic introduced Cowork and plugin updates that help enterprises customize Claude for better collaboration with every team.

Admins can create private plugin marketplaces to distribute them across the org.

A unified "Customize" menu also gives you more control over plugins, skills, and connectors in one place.

Also added:

1. New connectors for Google Workspace, Docusign, Apollo, Clay, Outreach, Similarweb, MSCI, FactSet, WordPress, and Harvey, along with plugins from Slack by Salesforce, LSEG, S&P Global, Common Room, and Tribe AI.

2. Pre-built plugins across HR, design, engineering, ops, financial analysis, investment banking, equity research, private equity, and wealth management, to help users see what's possible and start building their own.

Now in research preview: Claude can work across Excel and PowerPoint end-to-end, running analysis in one and building the presentation in the other.

Available for all paid plans on both Mac and Windows.
❀2πŸ”₯2πŸ‘2
UMD researchers studied 2.6 million AI agents on Moltbook, the largest AI-only social network. Nearly 300,000 posts. 1.8 million comments. No humans in the loop.

The question: if you let enough agents interact freely, do real social dynamics emerge? Culture, consensus, influence hierarchies?

The answer should change how you think about multi-agent systems.

At the macro level, it looks like culture is forming. The platform's semantic signature stabilizes quickly, approaching 0.95 cosine similarity between daily centroids.

Zoom out and you'd think this society is converging, with agents developing shared norms. But zoom in, and the picture falls apart completely.
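The "daily centroid" metric is simple to reproduce in miniature: embed each day's posts, average them into one vector, and compare consecutive days. Toy random vectors below, not actual Moltbook data:

```python
import numpy as np

def centroid(embeddings):
    """Mean of one day's post embeddings."""
    return np.mean(embeddings, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
base = rng.normal(size=64)  # the platform's stable "topic" direction

# Two days of toy posts: a shared direction plus small per-post noise.
day1 = base + 0.1 * rng.normal(size=(200, 64))
day2 = base + 0.1 * rng.normal(size=(200, 64))

sim = cosine(centroid(day1), centroid(day2))
print(sim)  # close to 1.0 once the daily signature has stabilized
assert sim > 0.95
```

This is also why the macro view can mislead: averaging 200 posts washes out exactly the per-agent disagreement that appears when you zoom in.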
❀2πŸ”₯2πŸ‘2
AI chip startup MatX, founded by two Google alumni, has raised more than $500 million in a new round to compete with Nvidia.

They’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. Call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.
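For intuition on why systolic arrays are the workhorse of LLM matmuls, here's a cycle-by-cycle toy simulation of an output-stationary array. This is the generic textbook design, not MatX's splittable variant, and the skewed injection schedule is the standard one:

```python
import numpy as np

def systolic_matmul(A, B):
    """Output-stationary systolic array: each PE(i, j) owns C[i, j].
    A streams rightward through rows, B streams downward through
    columns, with the classic skewed injection schedule."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    a_reg = np.zeros((m, n))  # value each PE forwards to the right
    b_reg = np.zeros((m, n))  # value each PE forwards downward
    for t in range(m + n + k):            # enough cycles to drain
        new_a, new_b = np.zeros((m, n)), np.zeros((m, n))
        for i in range(m):
            for j in range(n):
                # Row i gets A[i, t - i] at its left edge (skewed);
                # column j gets B[t - j, j] at its top edge.
                a_in = a_reg[i, j - 1] if j > 0 else (
                    A[i, t - i] if 0 <= t - i < k else 0.0)
                b_in = b_reg[i - 1, j] if i > 0 else (
                    B[t - j, j] if 0 <= t - j < k else 0.0)
                C[i, j] += a_in * b_in    # one MAC per PE per cycle
                new_a[i, j], new_b[i, j] = a_in, b_in
        a_reg, b_reg = new_a, new_b
    return C

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Every PE does one multiply-accumulate per cycle with only nearest-neighbor data movement, which is where the energy and area efficiency comes from; the "splittable" part of MatX's claim is about keeping that utilization high on small, oddly shaped matrices.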
πŸ”₯2πŸ₯°2πŸ‘2❀1
Anthropic rolled out Remote Control for Claude Code, letting users control a session begun in the terminal from the Claude mobile app or the web.

Remote Control is now available in Research Preview for Max users, and coming soon to Pro users.

Run claude rc to get started.

The aha moment from Openclaw was moving your agent's control panel from the desktop to wherever you are (mobile: WhatsApp, Telegram, etc.), and people lost their minds.

Very impressive to see the speed at which Anthropic saw that, built Remote Control, and shipped it.
❀2πŸ”₯2πŸ₯°2
Berkeley and Princeton developed a new offline GCRL method based on multistep quasimetrics that can learn multistage tasks in the real world using the Bridge Dataset.

The tasks might seem simple, but they use exactly the same Bridge Dataset as older works, squeezing much more advanced multistage tasks from the same data!
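A quasimetric is just a distance that can be asymmetric, d(x, y) not equal to d(y, x), while still obeying the triangle inequality, which fits goal-reaching: getting from A to B may be much easier than getting back. A toy instance (illustrative only, not the learned multistep quasimetric from the paper):

```python
import itertools

# Toy quasimetric: cost to move from x to y on a line where moving
# right costs 1 per unit but moving left costs 3 per unit (think
# one-way shortcuts). Illustrative only.

def d(x, y):
    return (y - x) if y >= x else 3 * (x - y)

# Asymmetric:
assert d(0, 5) == 5
assert d(5, 0) == 15

# But still a quasimetric: d(x, x) == 0 and the triangle
# inequality d(x, z) <= d(x, y) + d(y, z) holds everywhere.
pts = range(-4, 5)
assert all(d(x, x) == 0 for x in pts)
assert all(d(x, z) <= d(x, y) + d(y, z)
           for x, y, z in itertools.product(pts, repeat=3))
print("quasimetric checks pass")
```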
This is insane. Next.js rebuilt on Vite, and it took only one week and $1,100 in tokens.

Code was always the cheap part, though; knowing what to build and why is the hard part.

Cloudflare didn't just throw tokens at Next.js. They had deep opinions about edge architecture and knew exactly where the framework needed to change. $1,100 in tokens + years of infra expertise.

The expertise is the expensive input nobody counts.
❀2πŸ”₯2πŸ‘2
Nvidia introduced synthetic data for terminal use.
πŸ”₯2πŸ₯°2πŸ‘2
Meet LUMI-lab (Large-scale Unsupervised Modeling followed by Iterative experiments): a self-driving laboratory that tightly closes the loop between an AI foundation model and automated robotics to accelerate lipid nanoparticle (LNP) discovery for mRNA delivery.

To tackle data scarcity in emerging mRNA delivery domains, the team pretrained the model on 28M+ molecular structures, then iteratively improved it with closed-loop experimental data.

In this work, across ten active-learning cycles, LUMI-lab synthesized and evaluated 1,700+ new LNPs and unexpectedly identified a new design feature for efficient delivery: brominated lipid tails.

These brominated-tail ionizable lipids delivered mRNA into human lung cells more efficiently than approved benchmarks, despite representing only a small fraction of the initial chemical space explored.
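The closed loop itself is standard active learning. A sketch of the ten-cycle structure, where the candidate space, the surrogate, the acquisition rule, and the "robot" oracle are all simplified stand-ins, not LUMI-lab's actual components:

```python
import random

random.seed(0)

CANDIDATES = list(range(1000))        # toy chemical space of LNP designs

def run_robot_assay(lnp):
    """Stand-in for automated synthesis + transfection readout."""
    return -abs(lnp - 737)            # hidden optimum at design #737

scores = {}                           # accumulated experimental data

for cycle in range(10):               # ten active-learning cycles
    pool = [l for l in CANDIDATES if l not in scores]
    if not scores:                    # cycle 1: random exploration
        batch = random.sample(pool, 20)
    else:                             # later: exploit around the best hit,
        best = max(scores, key=scores.get)        # plus some exploration
        near = sorted(pool, key=lambda l: abs(l - best))[:15]
        batch = near + random.sample([l for l in pool if l not in near], 5)
    for lnp in batch:                 # "synthesize and evaluate"
        scores[lnp] = run_robot_assay(lnp)
    if cycle == 0:
        initial_best = max(scores.values())

final_best = max(scores.values())
print(len(scores), initial_best, final_best)
assert len(scores) == 200
assert final_best >= initial_best     # the loop only improves on round one
```

The interesting part of the real result is exactly what this sketch can't fake: the loop wandered into brominated tails, a design feature nobody had prioritized in the initial chemical space.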

GitHub.
Check the video here.
πŸ”₯3❀2πŸ‘2
A good model of the world requires not just great graphics but spatial and world intelligence so that you can understand how objects move and respond, what actions cause what outcomes, and what the effects of interactions by players are.

Moonlake's world model delivers that.
❀2πŸ”₯2πŸ₯°2
Google introduced Nano Banana 2

It uses Gemini’s understanding of the world and is powered by real-time information and images from web search. That means it can better reflect real-world conditions in high-fidelity.

Check out "Window Seat," a demo using Nano Banana 2's world understanding to generate accurate views from any window in the world, pulling in live local weather info at 2K/4K resolution. The precision is mind-blowing.

Rolling out today as the new default in the Gemini app, Search (across 141 countries), and Flow + available in preview via Google AI Studio and Vertex AI. Also available in Google Antigravity.
πŸ”₯2πŸ₯°2πŸ‘2
New from DeepSeek: DualPath

Researchers from Peking University, Tsinghua University, and #DeepSeek unveiled DualPath to fix the storage bandwidth bottleneck, which may be the secret killer of LLM agent performance.

Instead of letting data get stuck in a single storage traffic jam, DualPath creates a second highway for data to travel.

It loads saved model memory into idle decoding engines and then zips it over to the processing engines using high-speed internal networks, ensuring no part of the system sits idle while waiting for data.

The results are massive: DualPath boosts offline throughput by up to 1.87x and nearly doubles online serving speeds without violating performance targets.
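The shape of that speedup is easy to sanity-check with a toy bandwidth model: if part of the saved state can arrive over a second path in parallel, load time is bounded by the slower of the two overlapped transfers instead of the single-path total. All numbers below are made up for illustration, not DualPath's hardware:

```python
# Toy model of single-path vs dual-path cache loading.
# All bandwidth figures are illustrative, not DualPath's hardware.

cache_gb = 40.0          # saved model memory to restore
storage_gbps = 10.0      # direct storage path
interconnect_gbps = 50.0 # second path via idle decode engines' memory...
frac_prestaged = 0.6     # ...but only for the fraction already staged there

single_path = cache_gb / storage_gbps

# Dual path: the pre-staged fraction streams over the fast interconnect
# while the remainder comes from storage, fully overlapped.
dual_path = max(
    (cache_gb * frac_prestaged) / interconnect_gbps,
    (cache_gb * (1 - frac_prestaged)) / storage_gbps,
)

print(round(single_path, 2), round(dual_path, 2))  # 4.0 1.6 seconds
print(round(single_path / dual_path, 2))           # 2.5x in this toy setting
assert dual_path < single_path
```

The real system's 1.87x offline number is in the same ballpark as what simple overlap buys you once the second path stops sitting idle.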
❀2πŸ”₯2πŸ₯°2
Sakana introduced Doc-to-LoRA and Text-to-LoRA, two related research projects exploring how to make LLM customization faster and more accessible.

By training a Hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.

Biological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.

To bypass these limitations, this work focuses on the concept of cost amortization. Researchers pay the meta-training cost once to train a hypernetwork capable of producing task- or document-specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.

In experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights.

Importantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates. This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs.
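Mechanically, the hypernetwork maps an embedding of the task description (or document) to the low-rank factors of a LoRA update. A numpy sketch of that single forward pass; the shapes are assumptions, and the "hypernetwork" here is just random linear maps standing in for Sakana's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, e = 512, 8, 64     # hidden size, LoRA rank, task-embedding size

# Frozen base weight of one projection in the LLM.
W = rng.normal(size=(d, d)) * 0.02

# "Hypernetwork": two linear maps from the task embedding to the
# flattened LoRA factors A (r x d) and B (d x r). The real
# Text-to-LoRA hypernetwork is a trained model, not random matrices.
H_A = rng.normal(size=(e, r * d)) * 0.01
H_B = rng.normal(size=(e, d * r)) * 0.01

def generate_lora(task_embedding):
    """One forward pass: task embedding -> LoRA adapter."""
    A = (task_embedding @ H_A).reshape(r, d)
    B = (task_embedding @ H_B).reshape(d, r)
    return A, B

task_emb = rng.normal(size=e)     # stand-in for an encoded task description
A, B = generate_lora(task_emb)

# Adapted weight: the standard LoRA update, no gradient steps needed.
W_adapted = W + B @ A

print(A.shape, B.shape, W_adapted.shape)  # (8, 512) (512, 8) (512, 512)
assert np.linalg.matrix_rank(B @ A) <= r  # the update is rank-8 at most
```

The sub-second latency claim follows from the shape of this computation: generating an adapter is one matrix multiply per factor, versus thousands of gradient steps for conventional fine-tuning.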

Doc-to-LoRA
Paper
Code

Text-to-LoRA
Paper
Code
πŸ”₯2πŸ₯°2πŸ‘2