Gradient Dude
TL;DR for DL/CV/ML/AI papers from an author of publications at top-tier AI conferences (CVPR, NeurIPS, ICCV, ECCV).

Most ML feeds go for fluff; we go for the real meat.

YouTube: youtube.com/c/gradientdude
IG instagram.com/gradientdude
Google open-sourced its AutoML framework for model architecture search at scale.
It helps to find the right model architecture for any classification problem (e.g., a CNN with different types of layers).
Now you can just write fit(); predict() and call it a day! Of course, only if you have enough GPUs 🙊😅

You can define your own model building blocks to use for search as well.
The framework uses Bayesian optimization to find proper hyperparameters and can build an ensemble of the models.
Works for both tabular and image data.

https://github.com/google/model_search
How does Bayesian optimization help to find the proper hyperparameters for a machine learning model?

Bayesian optimization works by constructing a posterior distribution over the objective function (a Gaussian process) and using it to select the most promising hyperparameters to evaluate.
As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in the parameter space are worth exploring, and which are not.

Good blog posts to learn about Bayesian optimization: [at towardsdatascience] [at research.fb.com]
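A minimal sketch of that loop, with a tiny NumPy Gaussian process and an upper-confidence-bound acquisition over a 1-D "hyperparameter" (the toy objective and all constants here are invented for illustration):

```python
import numpy as np

def objective(x):
    # Stand-in for "validation score as a function of a hyperparameter".
    return np.sin(3 * x) + 0.5 * x

def rbf_kernel(a, b, length=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Posterior mean/std of a GP conditioned on the observations so far.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_obs)
    cov = rbf_kernel(x_query, x_query) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

grid = np.linspace(0.0, 2.0, 200)      # candidate hyperparameter values
x_obs = np.array([0.1, 1.9])           # two initial evaluations
y_obs = objective(x_obs)

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, grid)
    ucb = mean + 2.0 * std             # optimism: favor uncertain regions
    x_next = grid[np.argmax(ucb)]      # most promising point to try next
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = grid[np.argmax(gp_posterior(x_obs, y_obs, grid)[0])]
```

As observations accumulate, the posterior std shrinks near evaluated points, so the acquisition automatically shifts from exploration to exploitation.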
A talk on Theoretical Foundations of Graph Neural Networks by Petar Veličković from DeepMind.

In this talk, Petar derives GNNs from first principles, motivates their use in the sciences, and explains how they emerged along several research lines.
Should be very interesting for those who wanted to learn about GNNs but could not find a good starting point.

Video: https://youtu.be/uF53xsT7mjc
Slides: https://petar-v.com/talks/GNN-Wednesday.pdf
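For readers who want a concrete anchor before watching, the basic message-passing operation (here a GCN-style layer, one of the simplest GNN instances; this is my sketch, not code from the talk) fits in a few lines:

```python
import numpy as np

def gnn_layer(A, X, W):
    """One GCN-style message-passing layer: each node averages its
    neighbours' (and its own) features with symmetric normalization,
    then applies a shared linear map and a ReLU.
    A: (n, n) adjacency, X: (n, d) node features, W: (d, k) weights."""
    A_hat = A + np.eye(len(A))                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2}
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)          # aggregate, transform, ReLU
```

The key property the talk derives from first principles is permutation equivariance: relabelling the nodes simply permutes the output rows the same way.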
Guys from RunwayML created an awesome user-friendly demo for our approach "Adaptive Style Transfer".

You can play around with it and easily stylize your own photos. One important thing: the larger the input image, the crisper the stylization.

The demo provides models for 8 different artists, including Picasso and Van Gogh.

Method source code on Github: https://github.com/CompVis/adaptive-style-transfer
Stable View Synthesis (by Vladlen Koltun from Intel)

Given a set of source images depicting a scene from arbitrary viewpoints, it synthesizes new views of the scene.

The method operates on a geometric scaffold computed via structure-from-motion and multi-view stereo. Each point on this 3D scaffold is associated with view rays and corresponding feature vectors that encode the appearance of this point in the input images. The core of SVS is view-dependent on-surface feature aggregation, in which directional feature vectors at each 3D point are processed to produce a new feature vector for a ray that maps this point into the new target view. The target view is then rendered by a convolutional network from a tensor of features synthesized in this way for all pixels. The method is trained end-to-end.

The results are magnificent!

Source code
Paper
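To make the aggregation step concrete, here is a deliberately simplified stand-in. The real SVS aggregation is a learned network; only the interface (per-point source features plus ray directions in, one target-ray feature out) is shown, and all names here are invented:

```python
import numpy as np

def aggregate_point_feature(src_dirs, src_feats, tgt_dir):
    """Toy view-dependent aggregation at one 3D scaffold point.
    src_dirs: (m, 3) unit rays from the source views,
    src_feats: (m, d) features of this point in those views,
    tgt_dir: (3,) unit ray toward the new target view.
    SVS learns this mapping; here we just softmax over ray alignment."""
    align = src_dirs @ tgt_dir                 # cosine similarity per view
    w = np.exp(align) / np.exp(align).sum()    # softmax weights over views
    return w @ src_feats                       # (d,) feature for target ray
```

Doing this for every pixel's scaffold point yields the feature tensor that the convolutional renderer turns into the target image.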
VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation

I continue discussing deep learning approaches for self-driving cars.
Future motion prediction is of paramount importance for autonomous driving: for a self-driving car to operate safely, it must be able to anticipate the actions of other agents on the road.

In this video, I explain VectorNet, one of the methods for future motion prediction based on a vectorized representation of the scene instead of RGB images.

▶️ YouTube Video
📝 Paper
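The core representational trick is easy to show in code: every map or trajectory polyline becomes a small set of vectors instead of pixels. A simplified version (real VectorNet vectors also carry attribute features such as lane type and timestamps):

```python
import numpy as np

def polyline_to_vectors(points, poly_id):
    """Encode a polyline (lane boundary, crosswalk, or agent trajectory)
    as vectors [start_xy, end_xy, polyline_id] between consecutive points.
    points: (n, 2) ordered 2D points; returns (n - 1, 5)."""
    starts, ends = points[:-1], points[1:]
    ids = np.full((len(starts), 1), float(poly_id))
    return np.hstack([starts, ends, ids])
```

In the full model, each polyline's vectors are processed by a local subgraph network and a global graph then attends across polylines; here we only build the input.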
GANsformer or StyleGAN2 on steroids
Facebook AI Research

GANsformer is a novel and efficient type of transformer, explored for the task of image generation. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, so it can readily scale to high-resolution synthesis.

The model iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and to encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it uses multiplicative integration that allows flexible region-based modulation, and it can thus be seen as a generalization of the successful StyleGAN network.

The authors promise to release the pretrained models soon.

📓 arxiv.org/abs/2103.01209
⚒ github.com/dorarad/gansformer
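A rough NumPy sketch of the bipartite idea (untrained, with no learned projections and without the multiplicative modulation the paper actually uses, so treat every detail as an assumption): k latents and n image features attend to each other, costing O(nk) rather than the O(n^2) of full image self-attention.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bipartite_step(latents, feats):
    """One simplified bipartite attention round: latents gather from
    image features, then image features are updated from the latents.
    latents: (k, d), feats: (n, d); cost is O(n * k), linear in n."""
    latents = latents + softmax(latents @ feats.T) @ feats   # image -> latents
    feats = feats + softmax(feats @ latents.T) @ latents     # latents -> image
    return latents, feats
```

Iterating this step is what lets a small set of latents coordinate object-level structure across the whole image.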
Hey guys!

I've brought some good news for software engineers without an academic background in AI! 🦾
Yesterday Facebook AI launched a new program to draw more diverse talent into AI: Rotational AI Science & Engineering (RAISE).

RAISE is a 24-month full-time research engineering position at Facebook AI. You will work on 4 different AI teams (6 months at each), and toward the program's end you will select a permanent team in Facebook AI.

Application deadline: May 10th, 2021, 5pm PT.
https://ai.facebook.com/blog/facebook-ais-raise-program-an-innovative-new-career-pathway-into-ai
Neural 3D Video Synthesis
Facebook Reality Labs

These guys created a novel time-conditioned Neural Radiance Field. The results are impressive. When it gets faster, it will enable mind-blowing applications!

It is essentially an extension of the NeRF model to videos. The model learns to generate video frames conditioned on position, view direction, and a time-variant latent code.
Temporal latent codes are optimized jointly with the other network parameters.
The NeRF model is notoriously slow and requires long training: training a separate NeRF model for every frame would take ~15K GPU hours, while the proposed model needs only ~1.3K GPU hours.

๐Ÿ“ Paper: https://arxiv.org/abs/2103.02597
๐ŸŒ Project page: https://neural-3d-video.github.io
โš™๏ธ Model architecture:
z_t is a time-variant, learnable 1024-dimensional latent code. The rest is almost the same as in NeRF.
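In other words, the per-frame change is just one extra conditioning input. A sketch of the query interface (dimensions taken from the post; the MLP itself is omitted and the initialization scale is my assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

num_frames, latent_dim = 30, 1024
# One learnable latent code per frame, optimized jointly with the MLP.
z = 0.01 * rng.normal(size=(num_frames, latent_dim))

def nerf_input(position, view_dir, t):
    """Build the time-conditioned input: instead of NeRF's f(x, d),
    the MLP receives f(x, d, z_t) for frame t."""
    return np.concatenate([position, view_dir, z[t]])   # (3 + 3 + 1024,)
```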

🔪 Limitations:
- Training requires time-synchronized input videos from multiple static cameras with known intrinsic and extrinsic parameters.
- Training on a single 10-second video is still too slow for real-life applications: it takes about a week with 8 x V100 GPUs (~1300 GPU hours).
- Blur appears in moving regions of highly dynamic scenes with large and fast motions.
- There are apparent artifacts when rendering from viewpoints beyond the bounds of the training views (the baseline NeRF model has the same problem).
VQGAN: Taming Transformers for High-Resolution Image Synthesis
from my lab at Heidelberg University

Paper explained on my YouTube channel!

Authors introduce VQGAN which combines the efficiency of convolutional approaches with the expressivity of transformers. VQGAN is essentially a GAN that learns a codebook of context-rich visual parts and uses it to quantize the bottleneck representation at every forward pass.
A self-attention model (a transformer) is used to learn a prior distribution over the codewords.
Sampling from this model then produces plausible constellations of codewords, which are fed through a decoder to generate realistic images at arbitrary resolution.
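The quantization step itself is simple. Here is a NumPy sketch of the codebook lookup at the bottleneck (training additionally needs a straight-through gradient estimator and codebook losses, omitted here):

```python
import numpy as np

def quantize(z, codebook):
    """Replace each continuous feature vector z[i] (z: (n, d)) with its
    nearest entry of the learned codebook (codebook: (K, d))."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)      # codeword index per feature vector
    return codebook[idx], idx       # quantized features + index sequence
```

The transformer prior is then trained on these index sequences; sampling new indices and decoding them yields new images.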

๐Ÿ“ Paper
โš™๏ธ Code (with pretrained models)
๐Ÿ““ Colab notebook
๐Ÿ““ Colab notebook to compare the first stage models in VQGAN and in DALL-E

๐Ÿ’ช๐Ÿป๐Ÿฆพ๐Ÿค™๐Ÿผ
โ–ถ๏ธ YouTube Video explanation
Visual results. Bellissimo! 👌🏻
Facebook open-sourced a library for state-of-the-art self-supervised learning: VISSL.

+ It contains reproducible reference implementations of SOTA self-supervised approaches (SimCLR, MoCo, PIRL, SwAV, etc.) and their reusable components. It also supports supervised training.
+ It is easy to train models on single-GPU, multi-GPU, and multi-node setups, with seamless scaling to large data and model sizes via FP16, LARC, etc.
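As a taste of what the implemented methods share under the hood, here is a NumPy sketch of SimCLR's NT-Xent contrastive loss (a simplified illustration, not VISSL code):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style loss: z1[i] and z2[i] embed two augmentations of the
    same image; each must identify its partner among all 2n - 1 others."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

The loss drops as matched pairs move together and mismatched pairs move apart, which is exactly what the learned representations are optimized for.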

Finally, somebody has unified all the recent work in one modular framework. I don't know about you, but I'm very happy 😌!

VISSL: https://vissl.ai/
Blogpost: https://ai.facebook.com/blog/seer-the-start-of-a-more-powerful-flexible-and-accessible-era-for-computer-vision
Tutorials in Google Colab: https://vissl.ai/tutorials/