Gradient Dude
TL;DR for DL/CV/ML/AI papers from an author of publications at top-tier AI conferences (CVPR, NIPS, ICCV, ECCV).

Most ML feeds go for fluff, we go for the real meat.

YouTube: youtube.com/c/gradientdude
IG instagram.com/gradientdude
Boston Dynamics unveiled a new robot for warehouse work!

Watch Stretch, the new case-handling robot, move, groove, and unload trucks.
Forwarded from Self Supervised Boy
Interactive Weak Supervision paper from ICLR 2021.

In contrast to classical active learning, where experts are queried to assess individual samples, the idea of this paper is to have experts assess automatically generated labeling heuristics. The authors argue that since experts are good at writing such heuristics from scratch, they should also be able to judge auto-generated ones. To rank the heuristics that have not yet been assessed, the authors propose training an ensemble of models that predicts the assessor's mark for a heuristic. As input, these models take the heuristic's fingerprint: its concatenated predictions on some subset of the data.
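
A minimal sketch of this ranking loop, assuming binary labeling heuristics and a small probe set; the names and the random-forest ensemble are illustrative simplifications, not the authors' code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fingerprint(heuristic, probe_set):
    # A heuristic's "fingerprint": its predictions concatenated over a fixed probe subset.
    return np.array([heuristic(x) for x in probe_set])

def rank_unassessed(assessed, marks, unassessed, probe_set, n_models=5):
    # assessed / unassessed: lists of labeling heuristics (callables);
    # marks: expert judgements for the assessed ones (1 = useful, 0 = not).
    X = np.stack([fingerprint(h, probe_set) for h in assessed])
    ensemble = [RandomForestClassifier().fit(X, marks) for _ in range(n_models)]
    X_new = np.stack([fingerprint(h, probe_set) for h in unassessed])
    # Average predicted usefulness over the ensemble -> acquisition score.
    scores = np.mean([m.predict_proba(X_new)[:, 1] for m in ensemble], axis=0)
    return np.argsort(-scores)  # show the top-ranked heuristics to the expert next
```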

There are no very fancy results, there are some concerns raised by the reviewers, and there is some strange notation in this paper. Yet the idea looks interesting to me.

A somewhat deeper description (and one unanswered question) is here.
The source (and rebuttal comments with important links) is there.
CvT: Introducing Convolutions to Vision Transformers

Another improvement for Vision Transformers! Inject the inductive biases of CNNs (i.e. shift, scale, and distortion invariance) into the ViT architecture while maintaining the flexibility of Transformers.

How?
Main architectural novelties:
- Hierarchical architecture
- New convolutional token embedding
- Convolutional projections before self-attention instead of the linear projections used in ViT. This is where the convolutions come into play (a rough sketch follows below).
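
A rough PyTorch sketch of the last point, assuming tokens are reshaped back into a 2D map before a depthwise convolution; hyperparameters and names are illustrative, not the official code:

```python
import torch
import torch.nn as nn

class ConvProjection(nn.Module):
    """Depthwise conv over the 2D token map before self-attention,
    a simplified take on CvT's convolutional projection."""
    def __init__(self, dim, kernel_size=3, stride=1):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, stride,
                      padding=kernel_size // 2, groups=dim, bias=False),  # depthwise
            nn.BatchNorm2d(dim),
        )

    def forward(self, x, h, w):
        # x: (B, N, C) token sequence; reshape to a 2D map, convolve, flatten back.
        b, n, c = x.shape
        x = x.transpose(1, 2).reshape(b, c, h, w)
        x = self.proj(x)
        return x.flatten(2).transpose(1, 2)  # (B, N', C), then projected to Q/K/V
```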

Results:
Almost SOTA on ImageNet-1k and ImageNet-22k: 83.3% and 87.7%.
Almost, because Swin Transformers with local-window self-attention layers and downsampling layers are a bit stronger (see the image with results) and perhaps faster.

🤔 Looks like it is a trend now to incorporate useful structural properties of CNNs into Transformers. I'm pretty sure we will see more papers like this in the next few months.

📝 Paper arxiv.org/abs/2103.15808
New aggregator of trending papers🗞
It uses the number of tweets as a paper's hotness score.

You can also create your own reading lists there, add notes about papers and follow other users.

https://42papers.com/

Here is my profile https://42papers.com/u/gradient-dude-933
​​StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery 🔥
Adobe Research

Uses Contrastive Language-Image Pretraining (CLIP) models to guide image editing with text queries.
1. Take pretrained CLIP, pretrained StyleGAN, and pretrained ArcFace network for face recognition.
2. Project an input image into the StyleGAN latent space to obtain a latent vector w_s.
3. Now, given a source latent code w_s ∈ W+ and a directive in natural language (a text prompt t), we iteratively minimize the sum of three losses by changing the latent code w:
a) CLIP distance between the image generated by StyleGAN and the text query;
b) Regularization loss penalizing large deviations from the source vector w_s;
c) Identity loss, which makes sure that the identity of the generated face is the same as the original one. This is done by minimizing the distance between the two images in the embedding space of the ArcFace face recognition network.

Such an image editing process requires iterative optimization of the latent code w (usually 200-300 iterations) and takes several minutes per image. To make it faster, the authors also propose a feed-forward method: instead of optimization, another neural network predicts the residuals that are added to the latent code w to produce the desired image alterations.
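
A minimal sketch of the iterative latent-optimization variant, assuming a pretrained StyleGAN generator G, an ArcFace embedder arcface, a source code w_s, and a text query prompt are already loaded; names and loss weights are illustrative, not the authors' implementation:

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI's CLIP package

device = "cuda"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.float()  # keep everything in fp32 to simplify mixing with StyleGAN outputs

with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    src_img = G.synthesis(w_s)  # reference image for the identity loss

def clip_image_feat(img):
    # StyleGAN outputs live in [-1, 1]; rescale and resize to CLIP's input resolution.
    return clip_model.encode_image(F.interpolate((img + 1) / 2, size=224, mode="bilinear"))

w = w_s.clone().requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.1)
lam_l2, lam_id = 0.008, 0.005  # illustrative loss weights

for _ in range(300):  # ~200-300 iterations per edit
    img = G.synthesis(w)
    clip_loss = 1 - torch.cosine_similarity(clip_image_feat(img), text_feat).mean()
    l2_loss = ((w - w_s) ** 2).sum()                                   # stay near w_s
    id_loss = 1 - torch.cosine_similarity(arcface(img), arcface(src_img)).mean()
    loss = clip_loss + lam_l2 * l2_loss + lam_id * id_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

The feed-forward variant would replace this loop with a mapper network trained to output the residual added to w_s in a single pass.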

The overall idea of the paper is not super novel and has been around for some time already. But this is the first formal paper on such an image editing approach using CLIP and StyleGAN.
Other related papers, recently discussed in this channel:
▪️Paint by Word
▪️Using latent space regression to analyze and leverage compositionality in GANs


📝 StyleCLIP Paper
⚙️ StyleCLIP code
Cool results👌🏻
​​HaloNet: Scaling Local Self-Attention for Parameter Efficient Visual Backbones
Google research

Novel computer vision backbone - HaloNet. Yes, yet another one! 🤷🏼‍♂️

Authors develop a new family of parameter-efficient local self-attention models, HaloNets, that outperform EfficientNet in the parameter-accuracy tradeoff on ImageNet.
HaloNets show strong results on ImageNet-1k, and promising improvements (up to 4.4x inference speedups) over strong baselines when pretrained on ImageNet-21k with comparable settings.
The ideas are similar to Swin Transformers and CvT: local self-attention, attention-based downsampling layers, a mix of regular convolutions, and self-attention blocks.

In their previous work, the authors used pixel-centered windows, similar to convolutions. Here, they develop a block-centered formulation for better efficiency on matrix accelerators (GPU, TPU). They also introduce an attention-based downsampling layer.
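
A rough sketch of the block-centered idea (single head, no relative position embeddings; shapes and defaults are illustrative, not the paper's implementation): queries are grouped into non-overlapping blocks, while keys and values come from the same blocks grown by a halo of neighbouring pixels.

```python
import torch
import torch.nn.functional as F

def halo_attention(q, k, v, block=8, halo=3):
    # q, k, v: (B, C, H, W), with H and W divisible by `block`.
    B, C, H, W = q.shape
    scale = C ** -0.5
    # Queries: non-overlapping block x block tiles.
    qb = F.unfold(q, block, stride=block)                     # (B, C*block^2, nBlocks)
    qb = qb.transpose(1, 2).reshape(B, -1, C, block * block).transpose(2, 3)
    # Keys/values: each tile grown by `halo` pixels on every side.
    win = block + 2 * halo
    kb = F.unfold(k, win, stride=block, padding=halo)
    kb = kb.transpose(1, 2).reshape(B, -1, C, win * win).transpose(2, 3)
    vb = F.unfold(v, win, stride=block, padding=halo)
    vb = vb.transpose(1, 2).reshape(B, -1, C, win * win).transpose(2, 3)
    # Attention of every block's queries over its haloed neighbourhood.
    attn = torch.softmax(qb @ kb.transpose(-1, -2) * scale, dim=-1)
    out = attn @ vb                                           # (B, nBlocks, block^2, C)
    out = out.transpose(2, 3).reshape(B, -1, C * block * block).transpose(1, 2)
    return F.fold(out, (H, W), block, stride=block)           # back to (B, C, H, W)
```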

When applied to detection and instance segmentation, the proposed local self-attention improves on top of strong convolutional baselines. Interestingly, local self-attention with 14x14 receptive fields performs nearly as well as with 35x35.

📝 Paper
No code yet!
Bored? Here is yet another Fast&Furious backbone for you!

New day - new SOTA on ImageNet🤯
​​EfficientNetV2: Smaller Models and Faster Training

A new paper from Google Brain with a new SOTA architecture called EfficientNetV2. The authors develop a new family of CNN models that are optimized both for accuracy and training speed. The main improvements are:

- an improved training-aware neural architecture search with new building blocks and ideas to jointly optimize training speed and parameter efficiency;
- a new approach to progressive learning that adjusts regularization along with the image size (see the toy sketch after this list);
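
A toy sketch of the progressive-learning idea: image size and regularization strength grow together over training stages. The stage count, sizes, and regularization values below are illustrative, not the paper's settings.

```python
# Illustrative schedule: small images + weak regularization early,
# large images + strong regularization late.
def progressive_schedule(stage, n_stages=4,
                         size_range=(128, 300),
                         dropout_range=(0.1, 0.3),
                         randaug_range=(5, 15)):
    t = stage / (n_stages - 1)                  # 0.0 -> 1.0 across stages
    lerp = lambda lo, hi: lo + t * (hi - lo)
    return {
        "image_size": int(lerp(*size_range)),
        "dropout": lerp(*dropout_range),
        "randaug_magnitude": lerp(*randaug_range),
    }

for stage in range(4):
    cfg = progressive_schedule(stage)
    # ...rebuild the data pipeline with cfg["image_size"] and train for a few epochs
    # with dropout=cfg["dropout"] and RandAugment magnitude=cfg["randaug_magnitude"].
```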

As a result, the new models reach SOTA results while training up to 11x faster and being up to 6.8x smaller.

Paper: https://arxiv.org/abs/2104.00298

Code will be available here:
https://github.com/google/automl/efficientnetv2

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-effnetv2

#cv #sota #nas #deeplearning