Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @malev
UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition

The landscape of large language models (LLMs) has just been enhanced with the introduction of UniversalNER, a groundbreaking innovation using targeted distillation with mission-focused instruction tuning. The researchers managed to distill ChatGPT into more cost-efficient UniversalNER models without losing the quality of named entity recognition (NER). The study showcases how UniversalNER excels across an impressive array of 43 datasets in 9 diverse domains, outperforming other models like Alpaca and Vicuna by over 30 absolute F1 points on average.

What sets UniversalNER apart is its ability to acquire the capabilities of ChatGPT while having only a fraction of the parameters. It not only recognizes arbitrary entity types but even surpasses ChatGPT's NER accuracy by 7-9 absolute F1 points. Most remarkably, without any direct supervision, it manages to outclass even state-of-the-art multi-task systems like InstructUIE. This achievement is poised to be a game-changer in the field of NLP, offering a potent combination of efficiency and accuracy.
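
For those who want to poke at it, here is a minimal inference sketch, assuming the released UniNER-7B-type checkpoint on the Hugging Face Hub and the project's conversation-style prompt; check the project page for the exact template before relying on it.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Universal-NER/UniNER-7B-type"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = "Barack Obama was born in Honolulu, Hawaii."
entity_type = "location"
# The model is queried once per entity type, in a short dialogue format.
prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    f"USER: Text: {text}\n"
    "ASSISTANT: I've read this text.\n"
    f"USER: What describes {entity_type} in the text?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))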

Paper link: https://arxiv.org/abs/2308.03279
Project link: https://universal-ner.github.io/

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-universalner

#deeplearning #nlp #llm #ner
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Reinforcement Learning from Human Feedback (RLHF), the key method for fine-tuning large language models (LLMs), is placed under the microscope in this paper. While recognizing RLHF's central role in aligning AI systems with human goals, the authors boldly tackle the uncharted territory of its flaws and limitations. They not only dissect open problems and the core challenges but also map out pioneering techniques to augment RLHF. This insightful work culminates in proposing practical standards for societal oversight, marking a critical step towards a multi-dimensional and responsible approach to the future of safer AI systems.
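
For reference, the objective under discussion is the standard RLHF formulation (textbook notation, not this paper's): a policy π_θ is tuned to maximize a learned reward while staying close to a reference model:

max_θ  E_{x∼D, y∼π_θ(·|x)} [ r_φ(x, y) ]  −  β · D_KL( π_θ(·|x) ‖ π_ref(·|x) )

Here r_φ is a reward model fit to human preference comparisons, π_ref is the supervised fine-tuned policy, and β controls how far the tuned policy may drift. Many of the open problems the authors catalogue, such as reward hacking and misspecified preferences, live in the two terms of this objective.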

Paper link: https://arxiv.org/abs/2307.15217

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-rlhf-overview

#deeplearning #nlp #llm #rlhf
FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization

The fusion of transformer and convolutional architectures has ushered in a new era of enhanced model accuracy and efficiency, and FastViT is at the forefront of this revolution. This novel hybrid vision transformer architecture boasts an impressive latency-accuracy trade-off, setting new benchmarks in the field. Key to its success is the RepMixer, an innovative token mixing operator that utilizes structural reparameterization to slash memory access costs by doing away with traditional skip-connections.

In practical terms, FastViT's prowess is undeniable. Not only is it a staggering 3.5x faster than CMT on mobile devices at the same ImageNet top-1 accuracy, but it also leaves EfficientNet and ConvNeXt trailing in its wake, being 4.9x and 1.9x faster respectively. Additionally, when pitted against MobileOne at a similar latency, FastViT emerges triumphant with 4.2% higher Top-1 accuracy. Across a spectrum of tasks, from image classification and detection to segmentation and 3D mesh regression, FastViT consistently outshines its competitors, showcasing both remarkable speed and robustness against out-of-distribution samples and corruptions.
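
To make the reparameterization trick concrete, here is a toy RepVGG-style sketch of the idea RepMixer builds on, not Apple's actual code: the skip connection exists only at training time and is folded into a single depthwise conv for inference.

import torch
import torch.nn as nn

def fold_bn(conv, bn):
    # Fold BatchNorm statistics into the preceding conv's weight and bias.
    std = (bn.running_var + bn.eps).sqrt()
    w = conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1)
    b = bn.bias - bn.running_mean * bn.weight / std
    return w, b

class ReparamDWBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return x + self.bn(self.conv(x))  # train time: skip connection + conv branch

    @torch.no_grad()
    def reparameterize(self):
        # Return a single conv computing the same function, skip connection included.
        w, b = fold_bn(self.conv, self.bn)
        w[:, 0, 1, 1] += 1.0  # absorb the identity branch into the kernel center
        fused = nn.Conv2d(self.conv.in_channels, self.conv.out_channels, 3,
                          padding=1, groups=self.conv.in_channels)
        fused.weight.copy_(w)
        fused.bias.copy_(b)
        return fused

In eval mode, block(x) and block.reparameterize()(x) agree up to numerical noise, but the fused version has no branch to read from memory, which is where the latency win comes from.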

Paper link: https://huggingface.co/papers/2303.14189
Code link: https://github.com/apple/ml-fastvit

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-fastvit

#deeplearning #cv
LISA: Reasoning Segmentation via Large Language Model

The field of image segmentation has taken a leap forward with the introduction of LISA (Large Language Instructed Segmentation Assistant). This cutting-edge model excels at "reasoning segmentation," a novel task that generates segmentation masks from complex and implicit text queries. Building upon the capabilities of multi-modal Large Language Models, LISA expands its vocabulary with a <SEG> token and introduces an innovative "embedding-as-mask" paradigm to achieve this feat. Notably, the model is adept at intricate reasoning, utilizes world knowledge, offers explanatory answers, and can handle multi-turn conversations.

What's astonishing about LISA are its robust zero-shot abilities. Even when trained on datasets that lack reasoning-based tasks, LISA performs impressively well. Moreover, fine-tuning on just 239 reasoning-segmentation image-instruction pairs boosts its performance further.
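
Schematically, the embedding-as-mask idea looks like this; module and dimension names here are hypothetical (the real model couples a multi-modal LLM with a SAM-style mask decoder):

import torch
import torch.nn as nn

class EmbeddingAsMask(nn.Module):
    def __init__(self, llm_dim=4096, dec_dim=256):
        super().__init__()
        # MLP that projects the LLM's <SEG> hidden state into the decoder's space
        self.proj = nn.Sequential(nn.Linear(llm_dim, dec_dim), nn.ReLU(), nn.Linear(dec_dim, dec_dim))

    def forward(self, hidden_states, seg_positions, image_features):
        # hidden_states: (B, T, llm_dim) LLM outputs; seg_positions: index of <SEG> per sample
        seg_embed = hidden_states[torch.arange(hidden_states.size(0)), seg_positions]
        prompt = self.proj(seg_embed)  # (B, dec_dim)
        # image_features: (B, dec_dim, H, W); dot the prompt embedding with each
        # spatial location to get mask logits, standing in for the full decoder.
        return torch.einsum("bc,bchw->bhw", prompt, image_features)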

Paper link: https://arxiv.org/abs/2308.00692
Code link: https://github.com/dvlab-research/LISA

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-lisa

#deeplearning #cv #nlp #imagesegmentation #largelanguagemodel
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

The OBELICS dataset is a game-changer in the world of machine learning and AI! Unlike existing closed-source datasets, OBELICS is a vast, open-source, web-scale dataset specially curated for training large multimodal models. Boasting 141 million web pages from Common Crawl, 353 million high-quality images, and an impressive 115 billion text tokens, OBELICS sets a new standard in the richness and diversity of training data.

But it's not just about the numbers; it's about results. To prove its mettle, models with 9 and 80 billion parameters were trained on OBELICS, showcasing competitive performance across various multimodal benchmarks. Named IDEFICS, these models outperformed or matched their closed-source counterparts, proving that OBELICS isn't just a theoretical concept—it's a practical, high-impact alternative.
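
Trying it out is a one-liner, assuming the dataset id HuggingFaceM4/OBELICS on the Hub; streaming avoids pulling all 141 million pages at once:

from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)
for doc in ds.take(1):
    # Each document interleaves images with text; inspect the fields yourself,
    # as the exact schema is not quoted here.
    print(doc.keys())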

Paper link: https://huggingface.co/papers/2306.16527
Model card link: https://huggingface.co/HuggingFaceM4/idefics-80b-instruct
Blogpost link: https://huggingface.co/blog/idefics

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-obelisc

#deeplearning #cv #nlp #largelanguagemodel #opensource
Giraffe: Adventures in Expanding Context Lengths in LLMs

Modern Large Language Models (LLMs) have revolutionized our ability to process and understand vast amounts of textual data. Yet these models, like LLaMA and LLaMA2, often come with a caveat: they're constrained by fixed context lengths, which limits how much input they can handle at evaluation time. This paper tackles that constraint by investigating a variety of methods for "context length extrapolation," which essentially enables these models to understand and work with longer text sequences. Among the techniques explored, the paper introduces an innovative "truncated basis" strategy for altering positional encodings within the attention mechanism, promising a more scalable future for LLMs.

The researchers put their theories to the test with three brand-new evaluation tasks—FreeFormQA, AlteredNumericQA, and LongChat-Lines—providing a more nuanced measure of model performance than the traditionally used metric of perplexity. Their findings? Linear scaling came out on top as the most effective way to extend the context length, but the truncated basis method showed potential for future exploration. To propel the research community even further, the paper releases three game-changing long-context models, named Giraffe, with context lengths ranging from 4k to an astonishing 32k.
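
To see what "linear scaling" means in practice, here is a toy illustration of position interpolation for rotary embeddings, not the Giraffe codebase itself: positions are divided by a scale factor so that longer sequences map back into the range seen during training.

import torch

def rope_angles(seq_len, dim, base=10000.0, scale=1.0):
    # scale = target_context / trained_context, e.g. 32768 / 4096 = 8.0
    positions = torch.arange(seq_len, dtype=torch.float32) / scale
    inv_freq = 1.0 / base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    return torch.outer(positions, inv_freq)  # angles fed to cos/sin in attention

angles_4k = rope_angles(4096, 128)               # what the model was trained on
angles_32k = rope_angles(32768, 128, scale=8.0)  # 32k positions squeezed into the same range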

Paper link: https://arxiv.org/abs/2308.10882
Code link: https://github.com/abacusai/Long-Context

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-giraffe

#deeplearning #cv #nlp #largelanguagemodel #opensource #largecontext
CoTracker: It is Better to Track Together

The CoTracker paper proposes a groundbreaking approach that takes video motion prediction to the next level. Traditional methods have often been limited, either tracking the motion of all points in a frame collectively using optical flow, or tracking individual points through a video. These approaches tend to overlook the crucial interrelationships between multiple points, especially when they're part of the same physical object. CoTracker flips the script by employing a transformer-based architecture to jointly track multiple points throughout a video, effectively modeling the correlations between different points in time.

What really sets CoTracker apart is its versatility and adaptability. It's engineered to handle extremely long videos through a unique sliding-window mechanism, and iteratively updates estimates for multiple trajectories. The system even allows for the addition of new tracking points on-the-fly, offering unmatched flexibility. CoTracker outshines state-of-the-art methods in nearly all benchmark tests.
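
The sliding-window mechanism can be sketched like this (a schematic with a hypothetical tracker callable, not CoTracker's real API): each window is warm-started from the frames it shares with the previous one, so trajectories stay consistent across arbitrarily long videos.

def track_long_video(frames, tracker, window=16, overlap=8):
    step = window - overlap
    tracks = []      # per-frame point estimates, accumulated window by window
    prev_tail = None
    for start in range(0, len(frames), step):
        chunk = frames[start:start + window]
        if len(chunk) < 2:
            break
        est = tracker(chunk, init=prev_tail)         # hypothetical signature
        keep = est if start == 0 else est[overlap:]  # drop re-estimated overlap
        tracks.extend(keep)
        prev_tail = est[-overlap:]                   # warm-start for the next window
    return tracks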

Paper link: https://arxiv.org/abs/2307.07635
Code link: https://github.com/facebookresearch/co-tracker
Project link: https://co-tracker.github.io/

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-cotracker

#deeplearning #cv #objecttracking
RecMind: Large Language Model Powered Agent For Recommendation

Recent advancements have significantly improved the capabilities of Large Language Models (LLMs) in various tasks, yet their potential in the realm of personalized recommendations has been relatively unexplored. To address this gap, a new LLM-powered autonomous recommender agent called RecMind has been developed. RecMind is designed to provide highly personalized recommendations by leveraging planning algorithms, tapping into external data sources, and using individualized data.

One standout feature of RecMind is its novel "Self-Inspiring" algorithm, which enhances the model's planning abilities. During each step of planning, the algorithm encourages the model to consider all its past actions, thereby improving its understanding and use of historical data. The performance of RecMind has been evaluated across multiple recommendation tasks like rating prediction, sequential and direct recommendation, explanation generation, and review summarization. The results show that RecMind outperforms existing LLM-based methods in these tasks and is competitive with the specialized P5 model.
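
The Self-Inspiring loop can be sketched roughly as follows, where llm is a hypothetical callable and the real agent also queries tools and external data at each step. The key difference from plain chain-of-thought is that every expansion sees all previously explored steps, not just the current path.

def self_inspiring_plan(llm, task, max_steps=5):
    history = []                # every step explored so far, kept as "inspiration"
    path = [f"Task: {task}"]
    for _ in range(max_steps):
        prompt = (
            "Previously explored steps:\n" + "\n".join(history) +
            "\nCurrent plan:\n" + "\n".join(path) +
            "\nPropose the next step (or say DONE):"
        )
        step = llm(prompt)
        if "DONE" in step:
            break
        history.append(step)    # discarded alternatives stay visible to later steps
        path.append(step)
    return path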

Paper link: https://arxiv.org/abs/2308.14296

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-recmind

#deeplearning #nlp #llm #recommender
TSMixer: An All-MLP Architecture for Time Series Forecasting

Time-series datasets in real-world scenarios are inherently multivariate and riddled with intricate dynamics. While recurrent or attention-based deep learning models have been the go-to solution for these complexities, recent work has shown that even basic univariate linear models can surpass them on standard academic benchmarks. Building on that finding, the paper introduces the Time-Series Mixer (TSMixer). This design, built by stacking multi-layer perceptrons, hinges on mixing operations across both the time and feature axes, efficiently extracting information along each.

Upon application, TSMixer has shown promising results. Not only does it hold its ground against specialized state-of-the-art models on well-known benchmarks, but it also beats leading alternatives on the challenging M5 benchmark, a dataset that mirrors the intricacies of real-world retail demand. The paper's outcomes emphasize the pivotal role of cross-variate and auxiliary data in refining time series forecasting.
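
The core block is simple enough to fit in a few lines. A minimal sketch following the paper's description, with normalization and other details omitted: one MLP mixes along the time axis, another along the feature axis, each wrapped in a residual connection.

import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, seq_len, n_features, hidden=64):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(seq_len, seq_len), nn.ReLU())
        self.feat_mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_features)
        )

    def forward(self, x):  # x: (batch, seq_len, n_features)
        x = x + self.time_mlp(x.transpose(1, 2)).transpose(1, 2)  # time mixing
        x = x + self.feat_mlp(x)                                  # feature mixing
        return x

block = MixerBlock(seq_len=96, n_features=7)
out = block(torch.randn(32, 96, 7))  # a final linear head would produce the forecast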

Paper link: https://arxiv.org/abs/2303.06053
Code link: https://github.com/google-research/google-research/tree/master/tsmixer

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-tsmixer

#paperreview #deeplearning #timeseries #mlp
Forwarded from Machinelearning
🎙️ NVIDIA has released Canary-1B v2, an open model for speech recognition and speech translation that works with 25 European languages.

What it can do:
- 📝 Accurate ASR (speech recognition) and AST (speech translation) between English and 24 other languages.
- Automatic punctuation, capitalization, and precise word-level timestamps.
- Support for Russian, French, German, Spanish, and many other languages.

Why it stands out:
- Up to 10× faster inference than models 3× its size.
- Already shows state-of-the-art accuracy among open models on Hugging Face.
- CC-BY-4.0 license, so it can be freely used in projects.

Under the hood:
- Architecture: FastConformer encoder + Transformer decoder (~978M parameters).
- Formats: .wav and .flac, mono 16 kHz.
- Integrates easily via NVIDIA NeMo or directly from Hugging Face (see the sketch below).

Where it is useful:
🟢 voice assistants
🟢 subtitles and video translation
🟢 chatbots with voice input
🟢 real-time speech analysis

At ~978M parameters in total, it is lighter, faster, and cheaper to run than its larger competitors.
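
A minimal loading sketch, assuming Canary v2 goes through NeMo's multi-task ASR class the way earlier Canary releases did; the argument names are assumptions, so check the model card for the exact API.

from nemo.collections.asr.models import EncDecMultiTaskModel

model = EncDecMultiTaskModel.from_pretrained("nvidia/canary-1b-v2")
# ASR: 16 kHz mono .wav/.flac in, text out
english = model.transcribe(["speech.wav"], source_lang="en", target_lang="en")
# AST: same call with a different target language
german = model.transcribe(["speech.wav"], source_lang="en", target_lang="de")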

🟠 Try it here: https://huggingface.co/nvidia/canary-1b-v2
🟠 Dataset: https://huggingface.co/datasets/nvidia/Granary
🟠 Parakeet: https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3

@ai_machinelearning_big_data


#AI #NVIDIA #SpeechRecognition #ASR #AST #Multilingual #MachineLearning #DeepLearning
Forwarded from Machinelearning
🚀 Release: Qwen3-Next-80B-A3B, an efficient model built for working with very long contexts!

🔹 80B parameters, but only 3B are activated per token → training and inference are 10x cheaper and faster than Qwen3-32B (especially at 32K+ context).
🔹 Hybrid architecture: Gated DeltaNet + Gated Attention → combines speed and accuracy.
🔹 Ultra-sparse MoE: 512 experts, with 10 routed per token plus 1 shared.
🔹 Multi-Token Prediction → faster speculative decoding.
🔹 Outperforms Qwen3-32B and approaches Qwen3-235B on reasoning and long-context tasks.

🟢 Qwen3-Next-80B-A3B-Instruct performs almost on par with the 235B flagship.
🟢 Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking.
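
A minimal usage sketch, assuming the Instruct checkpoint loads through a recent Hugging Face transformers release; this is not an official example.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Only ~3B parameters are active per token, but all 80B must fit in memory.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize the report below. ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))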

Try it: https://chat.qwen.ai
Announcement: https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list
HuggingFace: https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d
ModelScope: https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a
Kaggle: https://kaggle.com/models/qwen-lm/qwen3-next-80b
Alibaba Cloud API: https://alibabacloud.com/help/en/model-studio/models#c5414da58bjgj

@ai_machinelearning_big_data

#AI #LLM #Qwen #DeepLearning #MoE #EfficientModels #LongContext #Reasoning
Forwarded from Machinelearning
⚡️ Glyph: scaling context through visual-text compression

The model rests on a simple idea: instead of feeding a model miles of text, Glyph renders it as an image and processes it with a vision-language model.

An LLM-guided genetic algorithm searches for the best parameters for rendering the text visually (font, density, layout), balancing compression against accuracy.

This radically cuts compute costs while preserving the semantic structure of the text.

Accuracy barely drops: on long-context tasks, Glyph performs on par with modern models such as Qwen3-8B.

Under extreme compression, a VLM with a 128K context can effectively handle tasks equivalent to 1M+ tokens in a traditional LLM.

In effect, long context becomes a multimodal problem rather than a purely textual one.
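
The rendering side of the idea is easy to sketch. This toy uses PIL with arbitrary layout values; font, size, and density are exactly the knobs Glyph's genetic search tunes, and this is not the paper's actual pipeline.

from PIL import Image, ImageDraw, ImageFont
import textwrap

def render_text_to_pages(text, page_size=(1024, 1024), line_height=14, chars_per_line=160):
    # Pack a long document onto dense page images for a vision-language model.
    font = ImageFont.load_default()  # a real setup would pick a compact TTF
    lines = textwrap.wrap(text, width=chars_per_line)
    per_page = page_size[1] // line_height
    pages = []
    for start in range(0, len(lines), per_page):
        page = Image.new("RGB", page_size, "white")
        draw = ImageDraw.Draw(page)
        for i, line in enumerate(lines[start:start + per_page]):
            draw.text((8, i * line_height), line, fill="black", font=font)
        pages.append(page)
    return pages  # these images replace raw text tokens as VLM input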

📄 Details: arxiv.org/abs/2510.17800

🧩 Weights: huggingface.co/zai-org/Glyph

👉 Repository: github.com/thu-coai/Glyph

@ai_machinelearning_big_data


#AI #LLM #Multimodal #Research #DeepLearning