Stable View Synthesis (by Vladlen Koltun from Intel)
Given a set of source images depicting a scene from arbitrary viewpoints, it synthesizes new views of the scene.
The method operates on a geometric scaffold computed via structure-from-motion and multi-view stereo. Each point on this 3D scaffold is associated with view rays and corresponding feature vectors that encode the appearance of this point in the input images. The core of SVS is view-dependent on-surface feature aggregation, in which directional feature vectors at each 3D point are processed to produce a new feature vector for a ray that maps this point into the new target view. The target view is then rendered by a convolutional network from a tensor of features synthesized in this way for all pixels. The method is trained end-to-end.
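As a toy illustration of the view-dependent on-surface aggregation step (not the authors' learned network, which is trained end-to-end), one can weight each source view's feature by how well its ray direction agrees with the target ray. All names, the softmax weighting, and the temperature below are illustrative assumptions:

```python
import numpy as np

def aggregate_on_surface_features(feature_vecs, source_dirs, target_dir, temp=0.1):
    """Toy stand-in for SVS's view-dependent feature aggregation.

    feature_vecs: (K, C) features of one surface point seen from K source views.
    source_dirs:  (K, 3) unit view-ray directions from the point to each source camera.
    target_dir:   (3,)   unit direction from the point to the target camera.

    Weights each source feature by how well its viewing direction agrees with
    the target ray (softmax over cosine similarity). SVS learns this mapping
    with a network; the softmax here is only an illustration.
    """
    sims = source_dirs @ target_dir              # (K,) cosine similarities
    w = np.exp(sims / temp)
    w /= w.sum()
    return w @ feature_vecs                      # (C,) aggregated feature

# One surface point seen from three views.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
dirs = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
target = np.array([1.0, 0, 0])                   # target ray aligns with view 0
out = aggregate_on_surface_features(feats, dirs, target)
# The aggregated feature is dominated by view 0's feature.
```

In the real method, a tensor of such aggregated features (one per target-view pixel) is then rendered into an image by a convolutional network.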
The results are magnificent!
Source code
Paper
VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation
I continue discussing deep learning approaches for self-driving cars.
Future motion prediction is a task of paramount importance for autonomous driving. For a self-driving car to operate safely, it is crucial to anticipate the actions of other agents on the road.
In this video, I explain VectorNet, a method for future motion prediction based on a vectorized representation of the scene instead of RGB images.
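To make the vectorized representation concrete, here is a toy sketch (all names and shapes are my own assumptions, not the VectorNet code): a polyline, whether an agent trajectory or a map element, becomes a set of vectors holding start point, end point, and a polyline id, and a permutation-invariant encoder pools them into one polyline feature:

```python
import numpy as np

def polyline_to_vectors(points, polyline_id):
    """Toy version of VectorNet's vectorized representation: a polyline
    (trajectory or map element) becomes a set of vectors, each holding its
    start point, end point, and an integer polyline id. Attribute fields
    (e.g. lane type, timestamps) are omitted for brevity."""
    starts, ends = points[:-1], points[1:]
    ids = np.full((len(starts), 1), polyline_id, dtype=float)
    return np.hstack([starts, ends, ids])        # (N-1, 5)

def encode_polyline(vectors, weight):
    """Minimal stand-in for a VectorNet subgraph layer: a shared linear map
    applied to every vector, followed by permutation-invariant max-pooling."""
    h = np.maximum(vectors @ weight, 0.0)        # shared one-layer ReLU "MLP"
    return h.max(axis=0)                         # pooled polyline feature

traj = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [3.0, 3.0]])  # agent track
vecs = polyline_to_vectors(traj, polyline_id=7)
rng = np.random.default_rng(0)
feat = encode_polyline(vecs, rng.normal(size=(5, 8)))
# vecs has shape (3, 5): three segments; feat is one 8-d polyline embedding.
```

In the full model, the pooled polyline features then interact through a global graph network to produce the motion forecast.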
YouTube Video
Paper
Forwarded from: Технологии | Нейросети | …
GANsformer or StyleGAN2 on steroids
Facebook AI Research
GANsformer is a novel and efficient type of transformer, explored for the task of image generation. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, so it can readily scale to high-resolution synthesis. The model iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and to encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and it can thus be seen as a generalization of the successful StyleGAN network.
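A minimal numpy sketch of the bipartite idea (heavily simplified: projections, multi-head logic, and the multiplicative modulation are omitted, and all names are assumptions). The point is the cost: n image positions attend to only m latents, so attention is O(n·m) instead of the O(n²) of full self-attention over the image:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bipartite_attention(latents, features):
    """Toy bipartite attention in the spirit of GANsformer: n image features
    attend to m latent variables, so cost grows linearly in image size.
    Query/key/value projections and the style-like multiplicative
    modulation of the real model are omitted."""
    attn = softmax(features @ latents.T)         # (n, m) feature->latent weights
    return features + attn @ latents             # residual update of features

n, m, c = 64, 4, 16                              # 64 "pixels", 4 latents
rng = np.random.default_rng(1)
feats = bipartite_attention(rng.normal(size=(m, c)), rng.normal(size=(n, c)))
# feats keeps shape (64, 16); each pixel was updated from only 4 latents.
```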
The authors promise to release the pretrained models soon.
Paper: arxiv.org/abs/2103.01209
Code: github.com/dorarad/gansformer
Hey guys!
I've brought some good news for software engineers without an academic background in AI!
Yesterday Facebook AI launched a new program to draw more diverse talent into AI - Rotational AI Science & Engineering (RAISE).
RAISE is a 24-month, full-time research engineering position at Facebook AI. You will work on 4 different AI teams (6 months on each), and toward the end of the program you will select a permanent team within Facebook AI.
Application deadline: May 10th, 2021, 5pm PT.
https://ai.facebook.com/blog/facebook-ais-raise-program-an-innovative-new-career-pathway-into-ai
Neural 3D Video Synthesis
Facebook Reality Labs
These guys created a novel time-conditioned Neural Radiance Field (NeRF). The results are impressive, and once it gets faster, it will enable mind-blowing applications!
It is essentially an extension of the NeRF model to videos. The model learns to generate video frames conditioned on position, view direction, and a time-variant latent code.
The temporal latent codes are optimized jointly with the other network parameters.
The NeRF model is notoriously slow and requires a long training time. Training a separate NeRF model for every frame would require ~15K GPU hours, while the proposed model needs only ~1.3K GPU hours.
Paper: https://arxiv.org/abs/2103.02597
Project page: https://neural-3d-video.github.io
Model architecture: z_t is a time-variant, learnable 1024-dimensional latent code. The rest is almost the same as in NeRF.
Limitations:
- Training requires time-synchronized input videos from multiple static cameras with known intrinsic and extrinsic parameters.
- Training on a single 10-second video is still too slow for real-life applications: it takes about a week on 8x V100 GPUs (~1,300 GPU hours).
- Blur appears in the moving regions of highly dynamic scenes with large and fast motions.
- Apparent artifacts occur when rendering from viewpoints beyond the bounds of the training views (the baseline NeRF model has the same problem).
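A toy sketch of the time-conditioning described above (the 1024-dim latent size is from the paper; every other name and number is an assumption, and positional encodings and volume rendering are omitted): the MLP input is simply position and view direction concatenated with the per-frame latent code z_t.

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 1024                                # per-frame latent size from the paper
W1 = rng.normal(size=(3 + 3 + LATENT_DIM, 64)) * 0.02
W2 = rng.normal(size=(64, 4)) * 0.02
# z_t: one learnable code per frame, optimized jointly with W1/W2 in training.
frame_latents = rng.normal(size=(30, LATENT_DIM)) * 0.01

def dynamic_radiance_field(xyz, view_dir, t):
    """Toy time-conditioned radiance field: concatenate 3D position, view
    direction, and the frame's latent code, run a tiny MLP, and output
    RGB color plus volume density."""
    x = np.concatenate([xyz, view_dir, frame_latents[t]])
    h = np.maximum(x @ W1, 0.0)                  # hidden layer (ReLU)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))         # colors squashed into [0, 1]
    sigma = np.maximum(out[3], 0.0)              # non-negative density
    return rgb, sigma

rgb, sigma = dynamic_radiance_field(np.zeros(3), np.array([0.0, 0.0, 1.0]), t=5)
# The same point and direction can yield different outputs at another time t,
# which is what lets a single model represent a whole video.
```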
VQGAN: Taming Transformers for High-Resolution Image Synthesis
from my lab at Heidelberg University
Paper explained on my YouTube channel!
The authors introduce VQGAN, which combines the efficiency of convolutional approaches with the expressivity of transformers. VQGAN is essentially a GAN that learns a codebook of context-rich visual parts and uses it to quantize the bottleneck representation at every forward pass.
A self-attention model is then used to learn a prior distribution over the codewords.
Sampling from this model produces plausible constellations of codewords, which are fed through a decoder to generate realistic images at arbitrary resolution.
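The codebook quantization at the heart of VQGAN can be sketched in a few lines (a toy numpy version; the straight-through gradient trick and the learned encoder/decoder are omitted, and all names are assumptions):

```python
import numpy as np

def quantize(z, codebook):
    """Toy version of the VQGAN bottleneck quantization: each spatial
    feature vector is snapped to its nearest codebook entry. In training,
    gradients flow through this step via a straight-through estimator."""
    # (P, K) squared distances from P positions to K codewords
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)                       # nearest codeword per position
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))              # K=16 codewords, C=8 dims
z = rng.normal(size=(4, 8))                      # 4 spatial positions
z_q, idx = quantize(z, codebook)
# z_q rows are exact codebook entries; idx are the discrete tokens the
# transformer prior is trained on.
```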
Paper
Code (with pretrained models)
Colab notebook
Colab notebook to compare the first-stage models in VQGAN and in DALL-E
YouTube video explanation
Facebook open-sourced a library for state-of-the-art self-supervised learning: VISSL.
+ It contains reproducible reference implementations of SOTA self-supervision approaches (SimCLR, MoCo, PIRL, SwAV, etc.) and reusable components. It also supports supervised training.
+ It is easy to train models on single-GPU, multi-GPU, and multi-node setups, with seamless scaling to large data and model sizes via FP16, LARC, etc.
Finally, somebody unified all the recent works in one modular framework. I don't know about you, but I'm very happy!
VISSL: https://vissl.ai/
Blogpost: https://ai.facebook.com/blog/seer-the-start-of-a-more-powerful-flexible-and-accessible-era-for-computer-vision
Tutorials in Google Colab: https://vissl.ai/tutorials/
Self-supervised Pretraining of Visual Features in the Wild
Facebook also published its ultimate SElf-supERvised (SEER) model.
- They pretrained it on 1B random, unlabeled, and uncurated Instagram images.
- SEER outperformed SOTA self-supervised systems, reaching 84.2% top-1 accuracy on ImageNet.
- SEER also outperformed SOTA supervised models on downstream tasks, including low-shot learning, object detection, segmentation, and image classification.
- When trained on just 10% of ImageNet, SEER still achieved 77.9% top-1 accuracy on the full data set. When trained on just 1% of the annotated ImageNet examples, SEER achieved 60.5% top-1 accuracy.
- SEER is based on the recent RegNet architecture. Under comparable training settings and FLOPs, RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
Paper
Blogpost
I guess the source code will be published as part of VISSL soon.
A synthesized StyleGAN2 portrait was edited to match a textual description using the CLIP encoder. A man was transformed into a vampire by navigating the latent space with the query "an image of a man resembling a vampire, with the face of Count Dracula". Video attached.
To me, this looks like sorcery.
Link to the source tweet
Colab notebook
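The underlying idea, CLIP-guided latent optimization, can be sketched as a toy numpy example. Here a fixed random linear map stands in for the generator plus CLIP image encoder, and a random vector stands in for the text embedding; the real pipeline backpropagates through both networks, and all names and numbers below are assumptions:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def optimize_latent(w, text_emb, embed, steps=200, lr=0.1):
    """Toy stand-in for CLIP-guided StyleGAN editing: gradient-ascend the
    latent w so the "image embedding" embed @ w aligns with text_emb."""
    for _ in range(steps):
        img_emb = embed @ w
        # Closed-form gradient of cosine similarity w.r.t. img_emb.
        g_img = (text_emb / (np.linalg.norm(img_emb) * np.linalg.norm(text_emb))
                 - cosine(img_emb, text_emb) * img_emb / (img_emb @ img_emb))
        w = w + lr * embed.T @ g_img             # chain rule through the linear map
    return w

rng = np.random.default_rng(3)
embed = rng.normal(size=(32, 16))                # stand-in for generator + CLIP
text_emb = rng.normal(size=32)                   # stand-in for CLIP("...vampire...")
w0 = rng.normal(size=16)
w1 = optimize_latent(w0, text_emb, embed)
# cosine similarity between embed @ w and text_emb increases from w0 to w1
```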
Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Yann LeCun, FAIR
New self-supervised learning loss: compute the cross-correlation matrix between the features of two distorted versions of a sample and push it as close to the identity matrix as possible.
+ This naturally avoids representation collapse and makes the representation vectors of distorted versions of a sample similar, while minimizing the redundancy between the components of these vectors.
+ It is also robust to the training batch size.
+ Comparable to SOTA self-supervised methods (similar results to BYOL), but conceptually simpler.
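The loss itself is simple enough to sketch in a few lines of numpy (batch-normalization details are simplified, and the trade-off weight value here is an assumption): diagonal terms of the cross-correlation matrix enforce invariance, off-diagonal terms enforce redundancy reduction.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective: push the cross-correlation matrix between
    batch-normalized embeddings of two distorted views toward the identity.
    lam trades off the invariance term against redundancy reduction."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / z1.std(0)           # normalize along the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n                            # (D, D) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()    # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 32))
noise = 0.1 * rng.normal(size=(256, 32))
loss_similar = barlow_twins_loss(z, z + noise)   # two mildly distorted "views"
loss_random = barlow_twins_loss(z, rng.normal(size=(256, 32)))  # unrelated inputs
# Views that agree up to noise give a much smaller loss than unrelated ones.
```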
My favorite part, training resources: 32x V100 GPUs, approx. 124 hours
Paper
Code (will be released soon)
Self-supervised learning: The dark matter of intelligence
Blog post by Yann LeCun and Ishan Misra - well-known experts in self-supervised learning at FAIR.
They talk about:
- Self-supervised learning as a paradigm in general
- Self-supervised learning as predictive learning
- Self-supervised learning for language versus vision
- Modeling the uncertainty in prediction
- A unified view of self-supervised methods
- Self-supervised learning at Facebook
Some excerpts:
As babies, we learn how the world works largely by observation. We form generalized predictive models about objects in the world by learning concepts such as object permanence and gravity. Later in life, we observe the world, act on it, observe again, and build hypotheses to explain how our actions change our environment by trial and error.
We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.
Read more here.
"Long term, progress in AI will come from programs that just watch videos all day and learn like a baby. ... Children learn by watching the spectacle of the world. But when the spectacle of the world is captured by a camera, it's a video." - Yann LeCun
I can only add that AI might also learn from interacting with its environment (at least a simulated one).
Blogpost with a high-level reflection on self-supervised learning at wired.com.
Wired
Facebook's New AI Teaches Itself to See With Less Human Help
Most image recognition algorithms require lots of labeled pictures. This new approach eliminates the need for most of the labeling.