Self Supervised Boy
Reading papers on self/semi/weak supervised DL methods. Papers here: https://www.notion.so/Self-Supervised-Boy-papers-reading-751aa85ffca948d28feacc45dc3cb0c0
contact me @martolod
PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering
Paper from CVPR'21

One more or less classical approach to deep unsupervised segmentation goes like this: cluster your embeddings and use the clusters as pseudo-labels; add tricks, repeat multiple times. In this paper the authors take it one step further and unite it with self-supervision. They design a loss function that enforces invariance of these clustered representations to color augmentations and equivariance to spatial augmentations.

The algorithm of the loss calculation (a code sketch follows the list) is:
1. Get two representations of the same image, each disturbed with a different color augmentation but the same spatial augmentation. In the first case the image is disturbed before going through the network; in the second, the output of the network is disturbed. So, ideally, both outputs should be identical, demonstrating invariance to color augmentations and equivariance to spatial ones. I will name these representations z1 and z2.
2. For each of those outputs, run k-means clustering of the embeddings. I will name the obtained centroids µ1 and µ2.
3. The next step finally mixes those two spaces. Let us say that L(z, µ) is a loss that, for each vector in z, brings it closer to the nearest vector of µ (prototype learning waves). Then:
3.1. We enforce clustering within each representation with L(z1, µ1) + L(z2, µ2).
3.2. We enforce that this clustering holds across the representations with L(z1, µ2) + L(z2, µ1).
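Here is a minimal PyTorch sketch of this loss, assuming precomputed pixel embeddings and centroids. Function names are mine; the paper works with cosine distances, so the L2 variant below is a simplification:

```python
import torch
import torch.nn.functional as F

def cluster_loss(z, mu):
    # z: (N, D) pixel embeddings; mu: (K, D) cluster centroids.
    # Pull each embedding towards its nearest centroid via a
    # cross-entropy over negative distances (prototype learning).
    dists = torch.cdist(z, mu)    # (N, K) pairwise L2 distances
    labels = dists.argmin(dim=1)  # nearest-centroid pseudo-labels
    return F.cross_entropy(-dists, labels)

def picie_loss(z1, z2, mu1, mu2):
    within = cluster_loss(z1, mu1) + cluster_loss(z2, mu2)  # step 3.1
    cross = cluster_loss(z1, mu2) + cluster_loss(z2, mu1)   # step 3.2
    return within + cross
```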

And that's it. Training with this approach achieves SoTA on unsupervised segmentation and produces qualitatively good object masks. The most improved part is "thing" (foreground object) segmentation, which is systematically problematic for unsupervised learning because of the huge imbalance in class sizes.

More here.
Source here.
MERLIN: another example of how important it is to know your data.

Imaging with SAR (Synthetic Aperture Radar) introduces a very specific type of noise called speckle. The problem with training a denoising model for this case is obtaining very specific data: pairs that carry the same information but have decorrelated noise, which can cost a lot in the areas where SAR is applied, e.g. satellite imagery.

The authors propose to exploit the structure of the image received by SAR. These images are obtained as a pair of values per pixel: amplitude and phase. Typically the phase is considered unimportant for imaging, since the amplitude is what is of interest.
The authors demonstrate that, using the statistical model of the speckle noise, they can extract two noisy images from this signal that carry the same information but have independent noise. This way they can apply the Noise2Noise framework and train a NN to predict the true, noise-free amplitude.
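A rough sketch of the resulting Noise2Noise-style training step, assuming the complex image has already been decomposed into two components with independent noise (the paper derives the proper decomposition and a likelihood-based loss from the speckle model; the MSE below is a placeholder):

```python
import torch
import torch.nn.functional as F

def merlin_like_step(model, slc):
    # slc: complex-valued single-look SAR image, shape (B, 1, H, W).
    # Under the speckle model, the two components share the underlying
    # reflectivity but have independent noise realizations.
    real, imag = slc.real, slc.imag
    pred = model(real)             # estimate the clean signal from one part
    return F.mse_loss(pred, imag)  # supervise with the other noisy part
```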

This allows training a neural network for each detector specifically, without the need to obtain expensive training data or to construct artificial data.

Source: here
How Useful is Self-Supervised Pretraining for Visual Tasks?

A relatively old paper (CVPR 2020) by the standards of our fast-moving field. Nevertheless, it has a couple of practical takeaways.

The authors created a synthetic dataset with several degrees of freedom to vary difficulty, from almost monochrome objects to randomized textures and positioning in the image.

The goal was to compare how well different self-supervised approaches help when tuning for different downstream tasks, from classification to depth estimation.

Two practical takeaways are:
1. The utility of a self-supervised method depends wildly on the task, the amount of labeled data, and even the complexity of the data.
2. The linear evaluation score, so popular in papers, has almost no correlation with actual fine-tuning results.

The authors found that self-supervised pre-training brings no improvement when lots of labeled data are present (which has become fairly well known since then). Based on this, they hypothesize that the improvement from SSL pre-training is a kind of regularization rather than optimization: SSL pre-training helps find a wider optimum, not a better one. Though to claim this, some kind of loss-landscape investigation would be more convincing.

Source: here
Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration
from NeurIPS2021.

It has already been noted that the quality of contrastive learning may suffer from overly intense augmentations. In this paper, the authors go one step further and try to understand the source of this effect.

The main hypothesis: if augmentations are too intense, the assumption that the image's information is invariant to augmentation simply breaks. That is, we augment images so hard that it is no longer meaningful to ask a model to predict close embeddings for such different inputs.

To mitigate this, the authors propose to model the distribution of embeddings of views (positive samples, i.e. different augmentations of the same image) as a normal distribution with a shared covariance matrix (experiments show the shared covariance matrix is surprisingly effective). Each component of the loss is then weighted by a normalized distance between the two views pulled together in that component, where the distance is the Mahalanobis distance defined by the fitted distribution.

To put it simpler: if two positive samples are too far away from each other, maybe they are not so positive after all?
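A small sketch of the weighting idea under stated assumptions: `cov_inv` is the inverse of the fitted shared covariance, and the exponential down-weighting is one plausible reading of "weight by normalized distance"; the exact estimation and normalization in the paper differ.

```python
import torch

def pair_weights(z1, z2, cov_inv, tau=1.0):
    # z1, z2: (N, D) embeddings of two views of the same N images.
    # Squared Mahalanobis distance between each positive pair.
    diff = z1 - z2
    d2 = torch.einsum('nd,de,ne->n', diff, cov_inv, diff)
    # Down-weight pairs that are unusually far apart: after heavy
    # augmentation they may no longer be true positives.
    return torch.exp(-d2 / tau)
```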

This makes contrastive methods rely less on the assumption of invariance to augmentations, and also makes them more aware of what happens in the embedding space itself.

The authors demonstrate consistent improvements across different contrastive losses.
Source: here
👍1
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
from ICML 2020.

It was previously noted that if one swaps the contrastive loss for a tighter bound on mutual information, the downstream quality decreases. The authors therefore propose to move from the InfoMax intuition to two simpler concepts: alignment and uniformity. The former enforces that positive pairs stay as close as possible, while the latter enforces that all samples are distributed as evenly as possible on the hypersphere.

These components turn out to be empirically important for downstream performance. Furthermore, their direct optimization may outperform training with the classical contrastive loss.
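The two objectives are simple enough to state directly; a PyTorch sketch following the paper's formulation, where x and y are L2-normalized embeddings of positive pairs:

```python
import torch

def align_loss(x, y, alpha=2):
    # x, y: (N, D) L2-normalized embeddings of positive pairs.
    # Mean distance between positives: smaller means better alignment.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Log of the mean pairwise Gaussian potential over the batch:
    # minimized when points spread uniformly over the hypersphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```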

With images and a bit longer: here
Source: here
Well, it has been more than three years since the last post here. In these three years a lot has changed: I finished my PhD at Heidelberg University and moved on to JetBrains to lead a team working on AI agents. With all this on my hands, I will have even less time to write the reviews I'd like to read. But on the other hand, I'd still like to share the papers I read.

So instead, I will post links to the papers that I read here. You can view this experiment as copycatting @j_links, but with a bias towards LLMs and probably agents.