Self Supervised Boy
161 subscribers
9 photos
46 links
Reading papers on self/semi/weak supervised DL methods. Papers here: https://www.notion.so/Self-Supervised-Boy-papers-reading-751aa85ffca948d28feacc45dc3cb0c0
contact me @martolod
Channel created
Who are you? I'm a PhD student doing DL research, mostly on weak/self-supervision, and sometimes fully unsupervised methods as well.
What happens? I write reviews of the papers I read here.
Why the hell? Because it lets me practice writing and understand the papers I read more deeply.
So what? I'll be happy if it's interesting to someone else too. Anyway, here's my archive: https://www.notion.so/Self-Supervised-Boy-papers-reading-751aa85ffca948d28feacc45dc3cb0c0.
Self-training über alles. Another self-training paper from Quoc Le's group.
They compare self-training with supervised and self-supervised pre-training across different tasks. Self-training seemingly works better, while pre-training can even hurt final quality when enough labeled data is available or strong augmentation is applied.
The main practical takeaway: self-training adds quality even on top of pre-training, so it may be worth self-training your baseline models for a better start.
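For readers unfamiliar with the recipe: self-training means a teacher trained on labeled data pseudo-labels unlabeled data, and a student retrains on the union. A minimal toy sketch of that loop (my own illustration with scikit-learn classifiers and a made-up confidence threshold, not the paper's RetinaNet/ImageNet pipeline):

```python
# Toy self-training sketch: teacher pseudo-labels unlabeled data,
# student retrains on labeled + confident pseudo-labeled examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9):
    # 1. Train the teacher on the labeled set only.
    teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    # 2. Pseudo-label unlabeled points the teacher is confident about.
    probs = teacher.predict_proba(X_unlab)
    keep = probs.max(axis=1) >= threshold
    pseudo_y = teacher.classes_[probs.argmax(axis=1)][keep]
    # 3. Train the student on labeled + pseudo-labeled data together.
    X_all = np.vstack([X_lab, X_unlab[keep]])
    y_all = np.concatenate([y_lab, pseudo_y])
    return LogisticRegression(max_iter=1000).fit(X_all, y_all)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:100], y[:100]   # small labeled set
X_unlab = X[100:]                 # the rest is treated as unlabeled
student = self_train(X_lab, y_lab, X_unlab)
```

In the paper the same idea is applied at scale, with the pseudo-labeled loss balanced against the supervised loss and strong augmentation applied to the student.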
More detailed with tables here: https://www.notion.so/Rethinking-Pre-training-and-Self-training-e00596e346fa4261af68db7409fbbde6
Source here: https://arxiv.org/pdf/2006.06882.pdf