Gradient Dude
TL;DR for DL/CV/ML/AI papers from an author of publications at top-tier AI conferences (CVPR, NeurIPS, ICCV, ECCV).

Most ML feeds go for fluff; we go for the real meat.

YouTube: youtube.com/c/gradientdude
IG: instagram.com/gradientdude
VQGAN: Taming Transformers for High-Resolution Image Synthesis
from my lab at Heidelberg University

Paper explained on my YouTube channel!

The authors introduce VQGAN, which combines the efficiency of convolutional approaches with the expressivity of transformers. VQGAN is essentially a GAN that learns a codebook of context-rich visual parts and uses it to quantize the bottleneck representation at every forward pass.
A self-attention model is then used to learn a prior distribution over the codewords.
Sampling from this model produces plausible constellations of codewords, which are fed through the decoder to generate realistic images at arbitrary resolution.
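
As a rough illustration of the quantization step, here is a minimal VQ-VAE-style codebook lookup sketch (not the authors' code; all names and shapes are illustrative):

```python
import torch

def quantize(z, codebook):
    # z: encoder output, shape (B, D, H, W)
    # codebook: learned embeddings, shape (K, D)
    B, D, H, W = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, D)   # (B*H*W, D)
    dists = torch.cdist(flat, codebook)           # distance of each vector to all K codes
    idx = dists.argmin(dim=1)                     # nearest codeword per spatial position
    z_q = codebook[idx].reshape(B, H, W, D).permute(0, 3, 1, 2)
    # straight-through estimator: the argmin is not differentiable,
    # so gradients flow from z_q back to the encoder output z
    z_q = z + (z_q - z).detach()
    return z_q, idx
```

The resulting sequence of indices `idx` is exactly what the transformer models autoregressively in the second stage.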

📝 Paper
⚙️ Code (with pretrained models)
📓 Colab notebook
📓 Colab notebook to compare the first stage models in VQGAN and in DALL-E

💪🏻🦾🤙🏼
▶️ YouTube video explanation
Visual results. Bellissimo! 👌🏻
Facebook open-sourced a library for state-of-the-art self-supervised learning: VISSL.

+ It contains reproducible reference implementations of SOTA self-supervision approaches (SimCLR, MoCo, PIRL, SwAV, etc.) and their components, which can be reused. It also supports supervised training.
+ Easy to train models on a single GPU, multi-GPU, and multi-node setups. Seamless scaling to large data and model sizes with FP16, LARC, etc.

Finally, somebody has unified all the recent work in one modular framework. I don't know about you, but I'm very happy 😌!

VISSL: https://vissl.ai/
Blogpost: https://ai.facebook.com/blog/seer-the-start-of-a-more-powerful-flexible-and-accessible-era-for-computer-vision
Tutorials in Google Colab: https://vissl.ai/tutorials/
Self-supervised Pretraining of Visual Features in the Wild

Facebook also published its ultimate SElf-supERvised (SEER) model.

- They pretrained it on 1B random, unlabeled, and uncurated Instagram images 👀.
- SEER outperformed SOTA self-supervised systems, reaching 84.2% top-1 accuracy on ImageNet.
- SEER also outperformed SOTA supervised models on downstream tasks, including low-shot learning, object detection, segmentation, and image classification.
- When trained with just 10% of ImageNet, SEER still achieved 77.9% top-1 accuracy on the full data set. When trained with just 1% of the annotated ImageNet examples, it achieved 60.5% top-1 accuracy.
- SEER is based on the recent RegNet architecture. Under comparable training settings and FLOPs, RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.

📝 Paper
📖 Blogpost
⚙️ I guess the source code will be published as a part of VISSL soon.
84.2% top-1 accuracy on ImageNet! 👀
A synthesized StyleGAN2 portrait was edited to match a textual description using the CLIP encoder. A man was transformed into a vampire by navigating the latent space with the query "an image of a man resembling a vampire, with the face of Count Dracula". Video attached.
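
A rough sketch of how such CLIP-guided latent navigation can be done, assuming a pretrained StyleGAN2 generator G that maps a latent code w to an RGB image (G and w_init are hypothetical placeholders; the CLIP calls follow OpenAI's clip package):

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

query = "an image of a man resembling a vampire, with the face of Count Dracula"
text_features = model.encode_text(clip.tokenize([query]).to(device)).detach()

w = w_init.clone().requires_grad_(True)   # hypothetical starting latent code
opt = torch.optim.Adam([w], lr=0.01)

for _ in range(200):
    img = G(w)                                             # hypothetical generator call
    img = torch.nn.functional.interpolate(img, size=224)   # CLIP's input resolution
    image_features = model.encode_image(img)
    # push the image embedding towards the text embedding
    loss = -torch.cosine_similarity(image_features, text_features).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice one would also normalize the image with CLIP's preprocessing statistics and add regularizers to keep w close to its starting point.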

To me, this looks like sorcery ✨.

➖ Link to the source tweet
📓 Colab notebook
Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Yann LeCun, FAIR

New self-supervised learning loss: compute the cross-correlation matrix between the features of two distorted versions of a sample and push it as close to the identity matrix as possible (see the sketch below).

+ This naturally avoids representation collapse and causes the representation vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
+ It is also robust to the training batch size.
+ Comparable to SOTA self-supervised methods (similar results to BYOL), but conceptually simpler.
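
A minimal sketch of the loss as the paper describes it (normalize each embedding dimension over the batch, build the cross-correlation matrix, pull the diagonal to 1 and the off-diagonal to 0; the paper uses λ ≈ 5e-3):

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    # z1, z2: embeddings of two distorted views of the same batch, shape (N, D)
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # per-dimension normalization over the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    N = z1.shape[0]
    c = (z1.T @ z2) / N                  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy-reduction term
    return on_diag + lambd * off_diag
```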

⚙️ My favorite part, training resources: 32x V100 GPUs, approx. 124 hours

📝 Paper
🛠 Code (will be released soon)
Self-supervised learning: The dark matter of intelligence

Blog post by Yann LeCun and Ishan Misra, well-known experts in self-supervised learning at FAIR.

They talk about:
- Self-supervised learning as a paradigm in general
- Self-supervised learning as predictive learning
- Self-supervised learning for language versus vision
- Modeling the uncertainty in prediction
- A unified view of self-supervised methods
- Self-supervised learning at Facebook

Some excerpts:

As babies, we learn how the world works largely by observation. We form generalized predictive models about objects in the world by learning concepts such as object permanence and gravity. Later in life, we observe the world, act on it, observe again, and build hypotheses to explain how our actions change our environment by trial and error.

We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.

📎 Read more here.
"Long term, progress in AI will come from programs that just watch videos all day and learn like a baby. ... Children learn by watching the spectacle of the world.
But when the spectacle of the world is captured by a camera, it's a video."
- Yann LeCun

I can only add that AI might also learn from interacting with its environment (at least a simulated one).

Blogpost with a high-level reflection on self-supervised learning at wired.com.
Visualising Neurons in Artificial Neural Networks

What a surprise: OpenAI discovered yet again that neurons can be interpretable 😂. This time they showed neurons for their recently hyped CLIP model.

https://openai.com/blog/multimodal-neurons/
But to be honest, this time they look much better and crisper.
However, CLIP tends to over-abstract images in many ways, and this leads to a new type of attack on such neural networks: the typographic attack.

Yes, nowadays it is that easy to trick this powerful AI.
Regarding the typographic attack in the previous post: apparently, it can be avoided if you give a proper query text string. For example, "wait a second, this is just an apple with a label saying iPod" will get higher confidence than "iPod".

This was discovered by Yannic.
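
A quick way to reproduce the trick with OpenAI's clip package (the image file name is a hypothetical placeholder; the point is simply to include the more literal description among the candidate prompts):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# hypothetical photo of an apple with a handwritten "iPod" label
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
prompts = [
    "an iPod",
    "an apple",
    "wait a second, this is just an apple with a label saying iPod",
]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)[0]

print(dict(zip(prompts, probs.tolist())))  # the literal prompt should win
```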
It is Sunday, pancake time 👌🏻. So I could not resist sharing this spectacular deepfake with you.
Neural Funk: AI generates endless breakbeats

Enthusiasts from Skoltech trained a WaveGAN on 7,500 vintage drum loops and then used the resulting model to generate thousands of new ones.
I have attached my favorite 6-minute sample (147 bpm). Love it!

The result was obtained by slowly moving a point along a random trajectory in the model's latent space. Each point in the latent space corresponds to an existing or non-existing break, and linear movement between two points produces a smooth transition between the corresponding breaks.
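
The walk itself is simple; here is a minimal sketch, assuming a trained WaveGAN generator G (hypothetical) that decodes a latent vector into an audio clip:

```python
import numpy as np

def latent_walk(z_a, z_b, steps=64):
    # linear interpolation between two latent points; decoding each
    # intermediate point morphs one break smoothly into the other
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

# hypothetical usage; WaveGAN's latent vectors are typically
# 100-dimensional uniform noise
# z_a, z_b = np.random.uniform(-1, 1, (2, 100))
# audio_clips = [G(z) for z in latent_walk(z_a, z_b)]
```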

The pace of progress in synthetic audio and image generation is mind-blowing. Will we be able to generate infinite movies? Imagine an endless Harry Potter story or an endless New Year's speech by Putin 😅


▶️ A 6-hour Neural Funk on YouTube
🎧 A 6-hour sequence in wav format

📓 Colab notebook with pretrained models
Interview with Natalia Neverova, Research Lead at Facebook AI Research

Natalia Neverova was one of my research advisors during my internship at Facebook AI Research. In this interview, she talks about research at FAIR, which students they prefer to hire, and 3D reconstruction of people and animals (3D animals 🐒 was exactly my research project at FAIR).

🌐 Link to the interview (unfortunately, only in Russian)
China trains a 10-billion-parameter multimodal network… using NVIDIA's code:

A hybrid team of researchers from Alibaba and Tsinghua University has built M6, a "Multi-Modality to Multi-Modality Multitask Mega-transformer". M6 is a multi-modal model trained on a huge corpus of text and image data, including image-text pairs (similar to recent systems like OpenAI's CLIP). M6 has a broad capability surface: because of how it was trained, you can use it to search for images with text (or vice versa), generate media in different modalities, match images together, write poems, answer questions, and so on.

📦 Data: ~60 million images (with accompanying text pairs) totalling 1.9TB (almost twice the raw size of ImageNet), plus 292GB of text.