⚙️ Model architecture:
z_t is a time-variant learnable 1024-dimensional latent code. The rest is almost the same as in NeRF (a rough sketch follows the limitations list below).

💪 Limitations:
- Training requires time-synchronized input videos from multiple static cameras with known intrinsic and extrinsic parameters.
- Training on a single 10-second video is still too slow for any real-life application: it takes about a week on 8x V100 GPUs (~1300 GPU hours).
- Blur in the moving regions of highly dynamic scenes with large and fast motions.
- Noticeable artifacts when rendering from viewpoints beyond the bounds of the training views (the baseline NeRF model has the same problem).
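To make the time-conditioned architecture concrete, here is a minimal PyTorch sketch of the idea as I understand it (not the authors' code; all names and layer sizes are my assumptions). The only change relative to a plain NeRF MLP is the learnable per-frame latent code z_t concatenated to the positionally encoded input:

```python
import torch
import torch.nn as nn

class DynamicNeRF(nn.Module):
    """Toy time-conditioned NeRF MLP: usual NeRF inputs + a per-frame latent code."""
    def __init__(self, num_frames, pos_enc_dim=63, latent_dim=1024, hidden=256):
        super().__init__()
        # one learnable 1024-d latent code per video frame (time step)
        self.latent_codes = nn.Embedding(num_frames, latent_dim)
        self.mlp = nn.Sequential(
            nn.Linear(pos_enc_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB color + volume density, as in NeRF
        )

    def forward(self, encoded_xyz, frame_idx):
        z_t = self.latent_codes(frame_idx)          # (B, 1024), time-variant
        x = torch.cat([encoded_xyz, z_t], dim=-1)   # condition NeRF on time
        return self.mlp(x)
```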
VQGAN: Taming Transformers for High-Resolution Image Synthesis
from my lab at Heidelberg University
Paper explained on my YouTube channel!
The authors introduce VQGAN, which combines the efficiency of convolutional approaches with the expressivity of transformers. VQGAN is essentially a GAN that learns a codebook of context-rich visual parts and uses it to quantize the bottleneck representation at every forward pass.
A self-attention model (a transformer) is then trained to learn a prior distribution over the codewords.
Sampling from this prior produces plausible constellations of codewords, which are then fed through the decoder to generate realistic images at arbitrary resolutions.
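For intuition, here is a minimal sketch of the vector-quantization step at the bottleneck (my illustration under assumed shapes, not the paper's implementation; the codebook and commitment losses are omitted for brevity). Each spatial feature vector is snapped to its nearest codebook entry, and a straight-through estimator lets gradients pass through the discrete lookup:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=1024, code_dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):
        # z: (B, N, code_dim) flattened spatial features from the encoder
        dists = torch.cdist(z, self.codebook.weight[None])  # (B, N, num_codes)
        idx = dists.argmin(dim=-1)                          # nearest codeword ids
        z_q = self.codebook(idx)                            # quantized features
        # straight-through estimator: identity gradient w.r.t. the encoder
        z_q = z + (z_q - z).detach()
        return z_q, idx
```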
📝 Paper
⚙️ Code (with pretrained models)
📓 Colab notebook
📓 Colab notebook to compare the first-stage models in VQGAN and in DALL-E
💪🏻🦾🤙🏼
▶️ YouTube video explanation
Facebook open-sourced a library for state-of-the-art self-supervised learning: VISSL.
+ It contains reproducible reference implementations of SOTA self-supervised approaches (SimCLR, MoCo, PIRL, SwAV, etc.) and their reusable components. Supervised training is also supported.
+ Easy to train models on single-GPU, multi-GPU, and multi-node setups. Seamless scaling to large datasets and model sizes with FP16, LARC, etc.
Finally, somebody has unified all the recent works in one modular framework. I don't know about you, but I'm very happy!
VISSL: https://vissl.ai/
Blogpost: https://ai.facebook.com/blog/seer-the-start-of-a-more-powerful-flexible-and-accessible-era-for-computer-vision
Tutorials in Google Colab: https://vissl.ai/tutorials/
Self-supervised Pretraining of Visual Features in the Wild
Facebook also published its ultimate SElf-supERvised (SEER) model.
- They pretrained it on 1B random, unlabeled, and uncurated Instagram images.
- SEER outperformed SOTA self-supervised systems, reaching 84.2% top-1 accuracy on ImageNet.
- SEER also outperformed SOTA supervised models on downstream tasks, including low-shot learning, object detection, segmentation, and image classification.
- When trained with just 10% of ImageNet, SEER still achieved 77.9% top-1 accuracy on the full dataset. When trained with just 1% of the annotated ImageNet examples, SEER achieved 60.5% top-1 accuracy.
- SEER is based on the recent RegNet architecture. Under comparable training settings and FLOPs, RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
📝 Paper
📝 Blogpost
⚙️ I guess the source code will be published as a part of VISSL soon.
A synthesized StyleGAN2 portrait was edited via a textual description using the CLIP encoder. A man was transformed into a vampire by navigating the latent space with the query "an image of a man resembling a vampire, with the face of Count Dracula". Video attached.
To me this looks like sorcery ✨.
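I don't have the exact code from the tweet, but the general recipe is simple enough to sketch: freeze CLIP, and optimize the StyleGAN2 latent so the rendered image matches the text prompt. In the sketch below, `generator` and `w_init` are hypothetical (a pretrained StyleGAN2 synthesis network and the portrait's starting latent), and proper CLIP image preprocessing/normalization is omitted for brevity:

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda"
model, _ = clip.load("ViT-B/32", device=device)
prompt = "an image of a man resembling a vampire, with the face of Count Dracula"
text_feat = model.encode_text(clip.tokenize([prompt]).to(device)).detach()

w = w_init.clone().requires_grad_(True)     # hypothetical starting latent
opt = torch.optim.Adam([w], lr=0.01)
for _ in range(200):
    img = generator(w)                      # hypothetical StyleGAN2 synthesis
    img = F.interpolate(img, size=224)      # CLIP's expected input resolution
    # maximize cosine similarity between image and text embeddings
    loss = -F.cosine_similarity(model.encode_image(img), text_feat).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```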
Link to the source tweet
📓 Colab notebook
Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Yann LeCun, FAIR
New self-supervised learning loss: compute the cross-correlation matrix between the embeddings of two distorted versions of a batch of samples and make it as close to the identity matrix as possible (a sketch follows the list below).
+ This naturally avoids representation collapse and causes the representation vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
+ It is also robust to the training batch size.
+ Comparable to SOTA self-supervised methods (similar results to BYOL), but the method is conceptually simpler.
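The description above maps almost line-for-line onto code; here is a minimal PyTorch sketch of the loss (dimensions and the lambda value are illustrative, following the idea rather than the exact paper implementation):

```python
import torch

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    # z_a, z_b: (N, D) embeddings of two distorted views of the same N samples
    N, D = z_a.shape
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)  # normalize along the batch dim
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = (z_a.T @ z_b) / N                   # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()              # pull diagonal to 1
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # push the rest to 0
    return on_diag + lambd * off_diag       # i.e., make c close to the identity
```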
⚙️ My favorite part, training resources: 32x V100 GPUs, approx. 124 hours
📝 Paper
⚙️ Code (will be released soon)
Self-supervised learning: The dark matter of intelligence
Blog post by Yann LeCun and Ishan Misra - well-known experts in self-supervised learning at FAIR.
They talk about:
- Self-supervised learning as a paradigm in general
- Self-supervised learning as predictive learning
- Self-supervised learning for language versus vision
- Modeling the uncertainty in prediction
- A unified view of self-supervised methods
- Self-supervised learning at Facebook
Some excerpts:
As babies, we learn how the world works largely by observation. We form generalized predictive models about objects in the world by learning concepts such as object permanence and gravity. Later in life, we observe the world, act on it, observe again, and build hypotheses to explain how our actions change our environment by trial and error.
We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.
📝 Read more here.
"Long term, progress in AI will come from programs that just watch videos all day and learn like a baby. ... Children learn by watching the spectacle of the world. But when the spectacle of the world is captured by a camera, it's a video." - Yann LeCun
I can only add that AI might also learn from interacting with its environment (at least a simulated one).
Blog post with a high-level reflection on self-supervised learning at wired.com.
Visualising Neurons in Artificial Neural Networks
What a surprise: OpenAI discovered yet again that neurons can be interpretable. This time they showed such neurons in their recently hyped CLIP model.
https://openai.com/blog/multimodal-neurons/
Regarding the typographic attack from the previous post: apparently, it can be avoided if you give a proper query text string. For example, "wait a second, this is just an apple with a label saying iPod" gets a higher confidence than plain "iPod" (see the sketch below).
This was discovered by Yannic.
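This is easy to try with OpenAI's released CLIP model; a quick sketch (the image filename is a placeholder, the prompts follow the post):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
prompts = [
    "an apple",
    "an iPod",
    "wait a second, this is just an apple with a label saying iPod",
]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)[0]  # confidence over the prompts
for p, prob in zip(prompts, probs.tolist()):
    print(f"{prob:.3f}  {p}")
```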
It is Sunday, pancake time. So I could not resist sharing this spectacular deepfake with you.
Neural Funk: AI generates endless breakbeats
Enthusiasts from Skoltech have trained a WaveGAN on 7500 vintage drum loops, then used the resulting model to generate thousands of new drum loops.
I have attached my favorite 6-minute sample (147 bpm). Love it!
The result was obtained by moving a point slowly along a random trajectory in the model's latent space. Each point in the latent space corresponds to either an existing or a non-existing break. Linear movement between two points results in a smooth transition between the two corresponding breaks.
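The latent walk itself is generic GAN machinery; here is a minimal sketch of it (where `wavegan_generator` is a hypothetical stand-in for the pretrained model from the Colab, and the latent size is an assumption):

```python
import torch

latent_dim, steps, num_segments = 100, 64, 10
# random waypoints in latent space; each corresponds to some
# (possibly non-existing) breakbeat
points = [torch.randn(1, latent_dim) for _ in range(num_segments + 1)]

clips = []
for z0, z1 in zip(points[:-1], points[1:]):
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1 - alpha) * z0 + alpha * z1        # linear move between two breaks
        clips.append(wavegan_generator(z))       # hypothetical generator call
audio = torch.cat(clips, dim=-1)                 # one smoothly evolving drum track
```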
The pace of progress in synthetic audio and image generation is mind-blowing. Will we be able to generate infinite movies? Imagine an infinite Harry Potter story or an endless New Year's speech by Putin.
▶️ A 6-hour Neural Funk on YouTube
🎧 A 6-hour sequence in WAV format
📓 Colab notebook with pretrained models
Interview with Natalia Neverova - Research Lead at Facebook AI Research
Natalia Neverova was one of my research advisors during my internship at Facebook AI Research. In this interview, she talks about the research at FAIR, which students they prefer to hire, and the 3D reconstruction of people and animals (3D animals was exactly my research project at FAIR).
📝 Link to the interview (unfortunately, only in Russian)
Transferring Dense Pose to Proximal Animal Classes (CVPR 2020)
Frame-by-frame results produced by our model after self-training.
Project URL: https://asanakoy.github.io/densepose-evolution/