Data Science by ODS.ai 🦜
44.8K subscribers
776 photos
85 videos
7 files
1.85K links
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @malev
Forwarded from Machinelearning
📹 DEVA: Tracking Anything with Decoupled Video Segmentation

Decoupled video segmentation approach (DEVA), composed of task-specific image-level segmentation and class/task-agnostic bi-directional temporal propagation.

A new video segmentation model for "tracking anything" that requires no video-level training for any individual task.
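
The decoupling can be illustrated with a toy sketch: an image-level model proposes masks on each frame, while masks propagated from earlier frames are kept or refreshed by IoU association. The function names and the simple fusion rule below are illustrative assumptions, not DEVA's actual code (which uses learned bi-directional propagation and in-clip consensus):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def fuse(propagated, detected, thresh=0.5):
    """Fuse temporally propagated masks with fresh image-level detections.

    A propagated mask that overlaps a detection strongly is refreshed by
    the detector output; otherwise temporal propagation is trusted.
    Unmatched detections start new object tracks.
    """
    fused, used = [], set()
    for p in propagated:
        best, best_iou = None, thresh
        for i, d in enumerate(detected):
            if i in used:
                continue
            v = iou(p, d)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            used.add(best)
            fused.append(detected[best])  # refresh with detector output
        else:
            fused.append(p)               # keep the propagated mask
    fused.extend(d for i, d in enumerate(detected) if i not in used)
    return fused
```

Because the detector is purely image-level, the same loop works for any segmentation task, which is the point of the decoupled design.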

🖥 Github: https://github.com/hkchengrex/Tracking-Anything-with-DEVA

🖥 Colab: https://colab.research.google.com/drive/1OsyNVoV_7ETD1zIE8UWxL3NXxu12m_YZ?usp=sharing

Project: https://hkchengrex.github.io/Tracking-Anything-with-DEVA/

📕 Paper: https://arxiv.org/abs/2309.03903v1

⭐️ Dataset: https://paperswithcode.com/dataset/burst

@ai_machinelearning_big_data
TSMixer: An All-MLP Architecture for Time Series Forecasting

Real-world time-series datasets are inherently multivariate, with intricate dynamics. While recurrent and attention-based deep learning models have been the go-to solutions for these complexities, recent work has shown that even simple univariate linear models can outperform them on standard academic benchmarks. Building on that finding, the paper introduces the Time-Series Mixer (TSMixer). The design, built by stacking multi-layer perceptrons, relies on mixing operations along both the time and feature axes to extract information from the data efficiently.

In practice, TSMixer shows promising results. Not only does it hold its ground against specialized state-of-the-art models on well-known benchmarks, it also beats leading alternatives on the challenging M5 benchmark, a dataset that mirrors the intricacies of real retail demand. The results highlight the pivotal role of cross-variate and auxiliary information in improving time series forecasting.
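
As a concrete illustration of mixing along both axes, here is a minimal single TSMixer-style block in plain NumPy. This is a simplified sketch: layer normalization, dropout, and the final temporal projection head are omitted, and the weight shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # two-layer MLP with ReLU, applied along the last axis
    return np.maximum(x @ w1, 0.0) @ w2

def tsmixer_block(x, w_time, w_feat):
    """One TSMixer-style block for an input x of shape (time, features).

    Time-mixing: transpose so the MLP mixes along the time axis,
    with weights shared across features. Feature-mixing: MLP along
    the feature axis, with weights shared across time steps.
    Residual connections wrap both sub-blocks.
    """
    x = x + mlp(x.T, *w_time).T   # time mixing
    x = x + mlp(x, *w_feat)       # feature mixing
    return x

T, F, H = 16, 4, 32  # sequence length, number of series, hidden width
x = rng.normal(size=(T, F))
w_time = (rng.normal(size=(T, H)) * 0.1, rng.normal(size=(H, T)) * 0.1)
w_feat = (rng.normal(size=(F, H)) * 0.1, rng.normal(size=(H, F)) * 0.1)
y = tsmixer_block(x, w_time, w_feat)  # shape preserved: (T, F)
```

The alternation of the two cheap MLPs is what lets the model capture both temporal patterns and cross-variate interactions without recurrence or attention.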

Paper link: https://arxiv.org/abs/2303.06053
Code link: https://github.com/google-research/google-research/tree/master/tsmixer

A detailed unofficial overview of the paper:
https://andlukyane.com/blog/paper-review-tsmixer

#paperreview #deeplearning #timeseries #mlp
Forwarded from Machinelearning
🔥 Introducing Würstchen: Fast Diffusion for Image Generation

Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images.

Why does this matter? Compression reduces the computational cost of both training and inference by orders of magnitude.

Training on 1024×1024 images is far more expensive than on 32×32. Other models typically use relatively modest spatial compression, in the 4x-8x range.

Thanks to its novel architecture, Würstchen achieves 42x spatial compression!
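
A back-of-envelope calculation shows why the compression factor matters: the number of spatial positions the diffusion backbone must process shrinks quadratically with it. The helper below is just illustrative arithmetic, not part of the model:

```python
def latent_positions(side, factor):
    """Spatial positions in the latent grid for a square image
    of width `side` under `factor`x spatial compression."""
    return (side // factor) ** 2

full = latent_positions(1024, 1)   # 1,048,576 pixel positions
vae8 = latent_positions(1024, 8)   # 16,384 positions at 8x compression
w42 = latent_positions(1024, 42)   # 576 positions at 42x compression
ratio = vae8 / w42                 # roughly 28x fewer positions than an 8x VAE
```

Since attention cost grows quadratically in the number of positions, the savings at train and inference time are even larger than this ratio suggests.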

🤗 HF: https://huggingface.co/blog/wuertschen

📝 Paper: https://arxiv.org/abs/2306.00637

📕 Docs: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen

🚀 Demo: https://huggingface.co/spaces/warp-ai/Wuerstchen

@ai_machinelearning_big_data
Hey, please boost our channel to allow us to post stories.

We solemnly swear to post only memes there.

https://xn--r1a.website/opendatascience?boost
Well, AI can learn that humans may deceive.

Upd: as our readers noted, the post was originally written by Denis here.
But then Yudkowsky retweeted it and it spread on X.
LLMs are in their childhood years

Source.
Forwarded from ilia.eth | ØxPlasma
Position: Analyst/Researcher for AI Team at Cyber.fund

About Cyber.fund:
Cyber.fund is a pioneering $100mm research-driven fund specializing in the realm of web3, decentralized AI, autonomous agents, and self-sovereign identity. Our legacy is built upon being the architects behind monumental projects such as Lido, p2p.org, =nil; foundation, Neutron, NEON, and early investments in groundbreaking technologies like Solana, Ethereum, EigenLayer among 150+ others. We are committed to advancing the frontiers of Fully Homomorphic Encryption (FHE) for Machine Learning, privacy-first ML (Large Language Models), AI aggregations, and routing platforms alongside decentralized AI solutions.

Who Are We Looking For?
A dynamic individual who straddles the worlds of business acumen and academic rigor with:
- A robust theoretical foundation in Computer Science and a must-have specialization in Machine Learning.
- An educational background from a technical university, preferably with a PhD from a prestigious institution such as MIT or MIPT.
- A track record of publications in the Machine Learning domain, ideally at the level of NeurIPS.
- Experience working in startups or major tech companies, ideally coupled with a background in angel investing.
- A profound understanding of algorithms, techniques, and models in ML, with an exceptional ability to translate these into innovative products.
- Fluent English, intellectual curiosity, and a fervent passion for keeping abreast of the latest developments in AI/ML.

Responsibilities:
1) Investment Due Diligence: Conduct technical, product, and business analysis of potential AI/ML investments. This includes market analysis, engaging with founders and technical teams, and evaluating the scalability, reliability, risks, and limitations of products.

2) Portcos Support: Provide strategic and technical support to portfolio companies in AI/ML. Assist in crafting technological strategies, hiring, industry networking, identifying potential project challenges, and devising solutions.

3) Market and Technology Research: Stay at the forefront of ML/DL/AI trends (e.g., synthetic data, flash attention, 1bit LLM, FHE for ML, JEPA, etc.). Write publications, whitepapers, and potentially host X spaces/streams/podcasts on these subjects (in English). Identify promising companies and projects for investment opportunities.

How to Apply?
If you find yourself aligning with our requirements and are excited by the opportunity to contribute to our vision, please send your CV to sg@cyber.fund. Including a cover letter, links to publications, open-source contributions, and other achievements will be advantageous.

Location:
Location is flexible, but the candidate should be within the time zones ranging from EET to EST (Eastern Europe to the East Coast of the USA).

This is not just a job opportunity; it's a call to be part of a visionary journey reshaping the landscape of AI and decentralized technology. Join us at Cyber.fund and be at the forefront of the technological revolution.
Let’s get back to posting 😌
Forwarded from Machinelearning
⚡️ Awesome CVPR 2024 Papers, Workshops, Challenges, and Tutorials!

The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 paper submissions, of which only 2,719 were accepted, an acceptance rate of roughly 23.6%.

Below is a list of the best papers, tutorials, articles, workshops, and datasets from CVPR 2024.

Github

@ai_machinelearning_big_data
⚡️ PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models

PiSSA significantly improves fine-tuning performance simply by changing the initialization of LoRA's A and B matrices from Gaussian/zero to the principal components of the pretrained weight matrix.

On GSM8K, Mistral-7B fine-tuned with PiSSA achieves 72.86% accuracy, outperforming LoRA's 67.7% by 5.16 percentage points.
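
The idea can be sketched in a few lines of NumPy: take the top-r singular triplets of the pretrained weight matrix to initialize the adapter factors, and freeze the residual. This is a simplified illustration of the initialization, not the official implementation:

```python
import numpy as np

def pissa_init(W, r):
    """PiSSA-style initialization (sketch): split a pretrained weight
    matrix W into a rank-r principal part, used to initialize the
    trainable adapter factors A and B, and a frozen residual W_res.

    W : (out_dim, in_dim) pretrained weights
    r : adapter rank
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_S = np.sqrt(S[:r])
    A = U[:, :r] * sqrt_S          # (out_dim, r), carries top singular vectors
    B = sqrt_S[:, None] * Vt[:r]   # (r, in_dim)
    W_res = W - A @ B              # frozen residual of minor components
    return A, B, W_res

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))
A, B, W_res = pissa_init(W, r=2)
# the forward pass uses W_res + A @ B, which reproduces W exactly at init
assert np.allclose(W_res + A @ B, W)
```

Unlike standard LoRA, the adapter starts aligned with the most significant directions of W rather than at zero, which is what the paper credits for the faster, better fine-tuning.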

Github: https://github.com/GraphPKU/PiSSA
Paper: https://arxiv.org/abs/2404.02948

@opendatascience
🥔 YaART: Yet Another ART Rendering Technology

💚 This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences using Reinforcement Learning from Human Feedback (RLHF).

💜 During the development of YaART, Yandex focused especially on the choice of model and training dataset sizes, aspects that had not been systematically investigated for text-to-image cascaded diffusion models before.

💖 In particular, the researchers comprehensively analyze how these choices affect both the efficiency of the training process and the quality of the generated images, both of which matter greatly in practice.

Paper page - https://ya.ru/ai/art/paper-yaart-v1
Arxiv - https://arxiv.org/abs/2404.05666
Habr - https://habr.com/ru/companies/yandex/articles/805745/

@opendatascience