Data Science by ODS.ai 🦜
The first Telegram Data Science channel, covering technical and popular topics related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and their applications. To reach the editors, contact @malev
​​Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes

#3DRCAN for denoising, super resolution and expansion microscopy.

GitHub: https://github.com/AiviaCommunity/3D-RCAN
bioRxiv: https://www.biorxiv.org/content/10.1101/2020.08.27.270439v1

#biolearning #cv #dl
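The core building block of RCAN-style networks is residual channel attention: globally pool each channel to a scalar, pass the pooled vector through a small bottleneck MLP, and rescale each channel by the resulting weight. Below is a toy pure-Python sketch of that squeeze-and-excite step, not the paper's actual 3D implementation; `w1`/`w2` are illustrative placeholder weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(features, w1, w2):
    """Toy residual channel attention over per-channel feature maps.

    features: list of C flat lists (one per channel).
    w1: reduction weights (r x C); w2: expansion weights (C x r).
    Returns each channel rescaled by a learned weight, plus a residual
    connection (a simplified variant of the RCAN block, for illustration).
    """
    # Squeeze: global average pool each channel to one scalar.
    s = [sum(ch) / len(ch) for ch in features]
    # Excite: bottleneck MLP (ReLU then sigmoid) produces channel weights.
    hidden = [max(0.0, sum(wij * sj for wij, sj in zip(row, s))) for row in w1]
    w = [sigmoid(sum(wij * hj for wij, hj in zip(row, hidden))) for row in w2]
    # Rescale each channel and add the residual input back.
    return [[x + wc * x for x in ch] for wc, ch in zip(w, features)]
```

With zero weights the sigmoid gives 0.5, so every channel comes out as 1.5x its input, which makes the residual path easy to sanity-check.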
​​DeepMind significantly (+100%) improved protein folding modelling

Why this is important: protein folding = protein structure = protein function = how a protein works in a living specimen and what it does.
What this means: better vaccines, better meds, more curable diseases, and more calamities eased by medication or by better understanding.

Dataset: ~170,000 available protein structures from PDB
Hardware: 128 TPUv3 cores (roughly equivalent to ~100-200 GPUs)

Link: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology

#DL #NLU #proteinmodelling #bio #biolearning #insilico #deepmind #AlphaFold
​​Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

This technology makes it possible to slightly move the camera in any video, slow down time, or do both. A great application for video producers and motion designers.

Website: http://www.cs.cornell.edu/~zl548/NSFF/
ArXiV: https://arxiv.org/abs/2011.13084
YouTube: https://youtu.be/qsMIH7gYRCc

#Nerf #videointerpolation #DL
πŸ‘©β€πŸŽ“Online lectures on Special Topics in AI: Deep Learning

Fresh, free, and open playlist on special topics in #DL from the University of Wisconsin-Madison. Topics cover reliable deep learning, generalization, learning with less supervision, lifelong learning, deep generative models, and more.

Overview Lecture: https://www.youtube.com/watch?v=6LSErxKe634&list=PLKvO2FVLnI9SYLe1umkXsOfIWmEez04Ii
YouTube Playlist: https://www.youtube.com/playlist?list=PLKvO2FVLnI9SYLe1umkXsOfIWmEez04Ii
Syllabus: http://pages.cs.wisc.edu/~sharonli/courses/cs839_fall2020/schedule.html

#wheretostart #lectures #YouTube
πŸ‘1
​​MPG: A Multi-ingredient Pizza Image Generator with Conditional StyleGANs

Work on conditional image generation

ArXiV: https://arxiv.org/abs/2012.02821

#GAN #DL #food2vec
​​🔥New breakthrough on text2image generation by #OpenAI

DALL·E: Creating Images from Text

This architecture is capable of understanding style descriptions as well as complex relationships between objects in context.

That opens a whole new perspective for digital agencies, potentially threatens stock photo sites, and creates new opportunities for regulation and for lawyers to work on.

Interesting times!

Website: https://openai.com/blog/dall-e/

#GAN #GPT3 #openai #dalle #DL
​​Characterising Bias in Compressed Models

Popular compression techniques turned out to amplify bias in deep neural networks.

ArXiV: https://arxiv.org/abs/2010.03058

#NN #DL #bias
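To make the claim concrete: "compression amplifies bias" is typically measured by pruning a model and comparing per-subgroup accuracy before and after. A minimal pure-Python sketch of the two ingredients, global magnitude pruning and a subgroup accuracy gap; the helper names are hypothetical, not the paper's code.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (global pruning)."""
    k = int(len(weights) * sparsity)          # number of weights to remove
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

def subgroup_accuracy_gap(preds, labels, groups):
    """Max difference in accuracy across subgroups -- the quantity that
    grows after compression if pruning amplifies bias."""
    accs = {}
    for p, y, g in zip(preds, labels, groups):
        correct, total = accs.get(g, (0, 0))
        accs[g] = (correct + (p == y), total + 1)
    rates = [c / t for c, t in accs.values()]
    return max(rates) - min(rates)
```

Running the gap metric on a pruned and an unpruned model over the same evaluation set is the basic protocol for detecting this effect.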
Towards Causal Representation Learning

Work on how neural networks derive causal variables from low-level observations.

Link: https://arxiv.org/abs/2102.11107

#causallearning #bengio #nn #DL
SEER: The start of a more powerful, flexible, and accessible era for computer vision

#SEER stands for SElf-supERvised architecture, which follows Yann LeCun's vision that a real breakthrough in model quality is possible only with #selfsupervised learning.

And here it is: a model trained on an enormous amount of data achieves 84.2 percent top-1 accuracy on ImageNet.

Params: 1.3B
Dataset: 1B random images
Hardware: 512 GPUs (unspecified)

Blogpost: https://ai.facebook.com/blog/seer-the-start-of-a-more-powerful-flexible-and-accessible-era-for-computer-vision
ArXiV: https://arxiv.org/pdf/2103.01988.pdf

#facebook #fair #cv #dl
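For intuition on the self-supervised idea (two augmented views of the same image should embed closer together than views of different images), here is a toy InfoNCE-style loss in pure Python. This is only illustrative: SEER itself trains a RegNet with a SwAV-style clustering objective, not this exact loss.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, temp=0.1):
    """Toy contrastive loss: low when the anchor embedding is closer to its
    positive (another view of the same image) than to any negative
    (views of other images). Inputs are normalized embedding vectors."""
    logits = [dot(anchor, positive) / temp]
    logits += [dot(anchor, n) / temp for n in negatives]
    m = max(logits)                                    # for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]                           # -log softmax(positive)
```

No labels appear anywhere in the objective, which is what lets such models train on a billion uncurated images.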
πŸ₯Self-supervised Learning for Medical images

Thanks to standard imaging procedures, medical images (X-rays, CT scans, etc.) are usually well aligned.
This paper exploits that alignment to automatically pair similar images for training.

GitHub: https://github.com/fhaghighi/TransVW
ArXiV: https://arxiv.org/abs/2102.10680

#biolearning #medical #dl #pytorch #keras
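The alignment trick can be made concrete: because scans are acquired under standard protocols, cropping the same spatial window from two different patients' scans tends to yield patches of the same anatomy, i.e. free positive pairs for self-supervised training. A minimal sketch with a hypothetical helper, not the TransVW code:

```python
def aligned_patch_pair(scan_a, scan_b, top, left, size):
    """Crop the same spatial window from two pre-aligned scans.

    Because standard imaging protocols align medical images, the same
    (row, col) window in two patients' scans tends to show the same
    anatomy -- a 'positive pair' that needs no manual labels.
    scan_a, scan_b: 2D lists (rows of pixel values) of equal shape.
    """
    crop = lambda scan: [row[left:left + size] for row in scan[top:top + size]]
    return crop(scan_a), crop(scan_b)
```

A training loop would sample many such windows and ask the network to embed the two crops of each pair close together.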
πŸ‘2
GAN Prior Embedded Network for Blind Face Restoration in the Wild

The proposed method allows the authors to improve the quality of old photos.

ArXiV: https://arxiv.org/abs/2105.06070
Github: https://github.com/yangxy/GPEN

#GAN #GPEN #blind_face_restoration #CV #DL
πŸ‘2
Color2Style: Real-Time Exemplar-Based Image Colorization with Self-Reference Learning and Deep Feature Modulation

ArXiV: https://arxiv.org/pdf/2106.08017.pdf

#colorization #dl
Mava: a scalable, research framework for multi-agent reinforcement learning

The framework integrates with popular MARL environments such as PettingZoo, SMAC, RoboCup, OpenSpiel and Flatland, as well as a few custom environments.

Mava includes distributed implementations of multi-agent versions of DDPG, D4PG, DQN and PPO, as well as DIAL, VDN and QMIX.

ArXiV: https://arxiv.org/pdf/2107.01460.pdf
GitHub: https://github.com/instadeepai/Mava

#MARL #RL #dl
🎓Online Berkeley Deep Learning Lectures 2021

UC Berkeley released its fresh course lectures online for everyone to watch. Welcome, Berkeley CS182/282 Deep Learning 2021!

YouTube: https://www.youtube.com/playlist?list=PLuv1FSpHurUevSXe_k0S7Onh6ruL-_NNh

#MOOC #wheretostart #Berkeley #dl
​​Program Synthesis with Large Language Models

The paper compares models used for program synthesis in general-purpose programming languages against two new benchmarks, MBPP (Mostly Basic Programming Problems) and MathQA-Python, in both the few-shot and fine-tuning regimes.

MBPP contains 974 programming tasks designed to be solvable by entry-level programmers. MathQA-Python contains 23,914 problems that evaluate the models' ability to synthesize code from more complex text.

The largest fine-tuned model achieves 83.8 percent accuracy on the latter benchmark.

Why this is interesting: better models for code and problem understanding mean improved search over coding tasks and better coding-assistant projects like #TabNine or #Copilot

ArXiV: https://arxiv.org/abs/2108.07732

#DL #NLU #codewritingcode #benchmark
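Benchmarks like MBPP score models by functional correctness: a sampled program counts as solved only if it passes the task's hidden asserts. A bare-bones sketch of such a checker (real harnesses sandbox and time-limit the `exec`; omitted here):

```python
def passes_tests(candidate_source, test_asserts):
    """MBPP-style functional-correctness check: a candidate program is
    correct iff all of the task's assert statements pass when run
    against it. WARNING: exec of untrusted code needs sandboxing."""
    env = {}
    try:
        exec(candidate_source, env)          # define the candidate function
        for t in test_asserts:
            exec(t, env)                     # raises AssertionError on failure
        return True
    except Exception:
        return False
```

Accuracy on the benchmark is then simply the fraction of tasks for which some sampled program passes all of its asserts.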
πŸ‘1