Less than a week after Mark Zuckerberg promised face tracking in Oculus devices, HTC announced the VIVE Facial Tracker, which tracks 38 facial movements across the lips, jaw, teeth, tongue, chin, and cheeks.
Amazing how this seemingly simple technology significantly improves the virtual experience.
With VR becoming more profitable, companies like Valve and Facebook continue to invest in the technology. And now rumors are swirling that Apple is working on a mixed-reality headset as well.
This is my approximate interpretation of the Russian post from @ai_newz
TechRadar
HTC Vive has a new VR trick – full facial tracking
It'll require a new accessory though, with improved body tracking incoming too via a new add-on.
Example of HTC VIVE Face tracking in action.
Some psychedelic neural art. The first one is pretty awesome and indeed worth printing on a t-shirt. Thanks @krasniy_doshik.
MIT 6.S192: Deep Learning for Art, Aesthetics, and Creativity
Hi guys!
As you may have noticed, I'm fond of neural art and artistic style transfer and have even published some papers on the topic (ECCV18, CVPR19, ICCV19). That's why today I'm very happy to share an awesome mini-course from MIT on Neural Art and Creativity. The course has a lineup of great invited speakers such as Phillip Isola (MIT), Alyosha Efros (UC Berkeley), and Jeff Clune (OpenAI). The video lectures are free and available online.
Course website: http://deepcreativity.csail.mit.edu
Transformers Comprise the Fourth Pillar of Deep Learning
ARK Invest is one of the biggest asset-management companies, focused on disruptive technologies. They are convinced that Transformers are the next big thing, and since recent language models with billions of parameters are very computationally demanding, ARK Invest is betting heavily on the growth of the AI chip market.
According to their research, Deep Learning had added a mind-blowing $1 trillion in equity market capitalization to companies like Alphabet, Amazon, Nvidia, and TSMC as of year-end 2019, and perhaps another $250-500 billion in 2020. They predict that AI will contribute roughly $30 trillion to global equity market cap creation over the next 20 years.
Source post
Google and Facebook datacenter AI workloads as of 2018 (before the rise of Transformers). Multi-layer perceptrons (MLPs) here are responsible for ranking and recommendations for search and content feeds like Instagram, Netflix, and YouTube.
Have you seen any recent stats on this? It would be very interesting to compare.
Self-training Improves Pre-training for Natural Language Understanding
Facebook AI & Stanford
Most semi-supervised NLP approaches specifically require in-domain unlabeled data: for the best results, the unlabeled portion of the data used for semi-supervised training must come from the same domain as the annotated dataset.
This paper proposes SentAugment, a method that constructs task-specific in-domain unannotated datasets on the fly from a large external bank of sentences. So for any new NLP task where we only have a small dataset, we no longer need to bother collecting a closely matching unannotated dataset in order to use semi-supervised training.
Now we can sort of cheat to improve the performance of an NLP model on almost any downstream task using Self-training (which is also called Teacher-Student training):
1. We retrieve the most relevant sentences (a few million of them) for the current downstream task from the external bank. For retrieval, we use the embedding space of a sentence encoder: a Transformer pre-trained with masked language modeling and finetuned to maximize the cosine similarity between similar sentences.
2. We train the teacher model: a RoBERTa-Large model finetuned on the downstream task.
3. Then we use the teacher model to annotate the retrieved unlabeled in-domain sentences. We perform additional filtering by keeping only those with high-confidence predictions.
4. As our student model, we then finetune a new RoBERTa-Large on this synthetic data using a KL-divergence loss, treating the teacher's post-softmax class probabilities as labels (i.e., not just the most confident class but the entire class distribution is used as the label for every sentence); a minimal sketch of this step follows the list.
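Here is a minimal sketch of the step-4 distillation loss, assuming the teacher's soft labels are already computed; the tensor shapes and names are illustrative, not the authors' actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs):
    """KL divergence between the student's predicted distribution and the
    teacher's post-softmax class probabilities (soft labels)."""
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Toy batch: 2 synthetic sentences, 3 classes.
student_logits = torch.randn(2, 3, requires_grad=True)
teacher_probs = torch.tensor([[0.7, 0.2, 0.1],
                              [0.1, 0.1, 0.8]])
loss = distillation_loss(student_logits, teacher_probs)
loss.backward()  # in practice, backprop into the student RoBERTa parameters
```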
Such a self-training procedure significantly boosts performance compared to the baseline, and the positive effect is larger when fewer ground-truth annotated sentences are available.
As a large-scale external bank of unannotated sentences, the authors use CommonCrawl; in particular, a corpus of 5 billion sentences (100B words). Because of its scale and diversity, the sentence bank contains data from various domains and in different styles, which makes it possible to retrieve relevant data for many downstream tasks. To retrieve the most relevant sentences for a specific downstream task, we need to obtain an embedding for the task. Several options exist: (1) average the embeddings of all sentences in the training set; (2) average the embeddings per class; (3) keep the original per-sentence embeddings.
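And a minimal sketch of the retrieval step using option (1), assuming sentence embeddings from the encoder are already computed; the array shapes and random data are stand-ins, not the paper's code.

```python
import numpy as np

def task_embedding(train_sentence_embs: np.ndarray) -> np.ndarray:
    """Option (1): represent the task by the mean of its training-sentence embeddings."""
    return train_sentence_embs.mean(axis=0)

def retrieve_top_k(bank_embs: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k bank sentences with the highest cosine similarity to the query."""
    bank_norm = bank_embs / np.linalg.norm(bank_embs, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = bank_norm @ query_norm
    return np.argsort(-scores)[:k]

# Toy usage with random vectors standing in for real encoder outputs.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 256))    # embeddings of the labeled training set
bank_embs = rng.normal(size=(10_000, 256))  # embeddings of the external sentence bank
top_idx = retrieve_top_k(bank_embs, task_embedding(train_embs), k=1000)
```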
Paper
Code
#paper_explained #nlp
What happens if you augment your training dataset with a load of stylized images as well?
Someone trained StyleGAN2-ADA on images augmented with style transfer and synced the output with audio.
YouTube
StyleGAN2-ada-pytorch audio reactive weirdness
So what happens if you augment your dataset with a load of style-transfer images as well? Well, I guess it sort of seems to work. Now I think I need to up my dataset size from 3000 to over 9000! I should probably test with 256x256 images first, right? Think…
Designing, Visualizing and Understanding Deep Neural Networks, CS182
Sergey Levine has released new lectures for his deep learning class, CS182! This is an introductory deep learning course (advanced undergraduate + graduate) covering a broad range of deep learning topics. Prof. Levine is an Assistant Professor at UC Berkeley and head of the Robotic Artificial Intelligence and Learning Lab; I posted about him a few months ago.
Course website
Lectures playlist
Neural Corgi
StyleGAN2-ADA trained on cute Corgi images. Looks amazing!
1. Scrape 350k Corgi images from Instagram.
2. Detect dogs using YOLOv3.
3. Remove small detections and dogs not facing the camera (a rough filtering sketch follows this list).
4. Remove duplicates and crop the images. Around 130k 1024x1024 images were obtained at this step.
5. Upsample crops to 1024 x 1024.
6. Train StyleGAN2-ADA for 5 million iterations for 18 days on a Tesla V100.
7. Profit?!
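A rough sketch of the filtering in step 3, assuming detections from a COCO-trained detector (e.g. YOLOv3) are already available as (class_name, confidence, box) tuples; the thresholds and names are placeholders, not the author's actual pipeline.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def keep_dog_crops(detections: List[Tuple[str, float, Box]],
                   min_side: int = 256,
                   min_conf: float = 0.5) -> List[Box]:
    """Keep confident 'dog' detections that are large enough to be worth cropping."""
    crops = []
    for cls_name, conf, (x1, y1, x2, y2) in detections:
        if cls_name != "dog" or conf < min_conf:
            continue
        width, height = x2 - x1, y2 - y1
        if min(width, height) < min_side:  # drop small detections
            continue
        crops.append((x1, y1, x2, y2))
    return crops

# Example: only the large, confident dog detection survives.
dets = [("dog", 0.92, (0.0, 0.0, 600.0, 620.0)),
        ("dog", 0.40, (10.0, 10.0, 100.0, 90.0))]
print(keep_dog_crops(dets))
```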
Colab
Code and dataset
MacaquePose: A Novel "In the Wild" Macaque Monkey Pose Dataset
Recently, computer vision for animals has been gaining traction. Several works on this topic have already been discussed in this channel: post [1], post [2], post [3].
Why?
Pose estimation is fundamental for analyzing the relationship between an animal's behavior and its brain functions and malfunctions. And macaque monkeys are excellent non-human primate models, especially in neuroscience.
Another possible application is Instagram / Snapchat masks and effects for your cute quadruped friends.
Dataset
This dataset provides keypoints for macaques in naturalistic scenes; it consists of 13k images and 16k monkey instances.
- 17 keypoints and instance segmentation for each monkey in COCO format (a loading sketch follows below).
- Annotations are of high quality: the crowd-sourced labels were curated and refined by 8 researchers working specifically with macaques.
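A minimal sketch of reading COCO-format keypoint annotations with pycocotools; the annotation file name here is a placeholder, so check the dataset release for the real layout.

```python
from pycocotools.coco import COCO

# Hypothetical file name, standing in for the dataset's actual annotation file.
coco = COCO("macaquepose_annotations.json")

img_ids = coco.getImgIds()
print(f"{len(img_ids)} images")

# Inspect the keypoints of every monkey instance in the first image.
ann_ids = coco.getAnnIds(imgIds=img_ids[0])
for ann in coco.loadAnns(ann_ids):
    kps = ann["keypoints"]              # flat list: [x1, y1, v1, x2, y2, v2, ...]
    xs, ys, vis = kps[0::3], kps[1::3], kps[2::3]
    print(len(xs), "keypoints,", sum(v > 0 for v in vis), "labeled as visible")
```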
Interesting findings
The most challenging part for both human annotators and the DeepLabCut baseline is predicting the positions of the shoulders and hips. Another failure mode for neural networks is self-occlusion.
Paper
Dataset
Pretrained models in DeepLabCut Model Zoo
Colab
I totally need glasses that move with my eyebrows. (c) Yann LeCun
The video quality is poor because of the pesky Twitter compression.
CLIP + StyleGAN. Searching StyleGAN's latent space using a text description embedded with CLIP.
Queries: "A pony that looks like Beyonce", "... like Billie Eilish", ".. like Rihanna"
The basic idea
Generate an image with StyleGAN and pass it to CLIP, computing a loss against the CLIP embedding of the text query. You then backprop through both networks and optimize the latent vector in StyleGAN's latent space.
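A minimal sketch of that loop, assuming OpenAI's `clip` package is installed; the generator here is a dummy stand-in (a real pretrained StyleGAN2 generator would replace it), and real projects add CLIP's input normalization, augmented crops, and latent regularizers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep everything in fp32 for simple backprop

# Dummy stand-in so the sketch runs end to end; swap in a real pretrained
# StyleGAN generator (latent z -> RGB image in [-1, 1]) for actual results.
class DummyGenerator(nn.Module):
    def __init__(self, z_dim=512, size=64):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(z_dim, 3 * size * size)

    def forward(self, z):
        return torch.tanh(self.fc(z)).view(-1, 3, self.size, self.size)

generator = DummyGenerator().to(device)

text = clip.tokenize(["A pony that looks like Beyonce"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(clip_model.encode_text(text), dim=-1)

z = torch.randn(1, 512, device=device, requires_grad=True)  # latent being optimized
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    img = (generator(z) + 1) / 2                                   # roughly in [0, 1]
    img = F.interpolate(img, size=224, mode="bilinear", align_corners=False)  # CLIP input size
    img_feat = F.normalize(clip_model.encode_image(img), dim=-1)
    loss = 1 - (img_feat * text_feat).sum()                        # maximize cosine similarity
    opt.zero_grad()
    loss.backward()   # gradients flow through CLIP and the generator into z
    opt.step()
```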
Drawbacks: 1) it only works on text CLIP knows; 2) it needs some cherry-picking, only about 1/5 of the results are really good.
Source tweet.
Queries: "A pony that looks like Beyonce", "... like Billie Eilish", ".. like Rihanna"
π The basic idea
Generate an image with StyleGAN and pass the image to CLIP for the loss against a CLIP text query representation. You then backprop through both networks and optimize a latent space in StyleGAN.
π€¬ Drawbacks 1) it only works on text it knows 2) needs some cherry picking, only about 1/5 are really good.
Source twitt.