Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and applications of the former. To reach the editors, contact: @malev
In the Walls
by Matt Bierner

An app that recreates a horror-movie scene where a face comes out of a wall, like in the first season of "Stranger Things".
Built with ARKit, using AR and face tracking from the front camera.


On the App Store only: https://apps.apple.com/ru/app/in-the-walls/id1522257130?l=en

#arkit #ar #app
Caption: ArXiv paper, in Experiment, in Production

Source: https://vk.com/wall-166490678_545
Forwarded from Graph Machine Learning
Simple scalable graph neural networks

Michael Bronstein continues his marathon of great blog posts on GML. In the new post he describes their recent work on scaling GNNs to large networks. There is a good introduction to sampling-based methods (e.g. SAGE, GraphSAINT, ClusterGCN), which sample a subgraph of a large graph and then train the GNN only on that subgraph.

Then he describes how it can be beneficial to simply precompute the r-hop matrices A^r X and train an MLP on these features. This way you still use the topology of your graph, while getting straightforward mini-batch training of a plain MLP.

What's cool is that the algorithm is already available in pytorch-geometric as a transform, implemented via SparseTensor matrix multiplication.
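To make the precompute-then-MLP idea concrete, here is a minimal sketch (my own toy code, not the paper's implementation or the pytorch-geometric transform): the hop matrices A^r X are computed once with sparse matrix products, and then an ordinary MLP is trained on the stacked features. The graph, features and hyperparameters below are made up for illustration.

```python
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn

def precompute_hops(A, X, r=3):
    """Stack [X, AX, A^2 X, ..., A^r X] column-wise after row-normalising A."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    A_norm = sp.diags(1.0 / np.maximum(deg, 1)) @ A
    feats, cur = [X], X
    for _ in range(r):
        cur = A_norm @ cur  # sparse-dense product, done once before training
        feats.append(cur)
    return np.hstack(feats)

# Toy data: 100 nodes, random edges, 16-dim features, 4 classes.
A = sp.random(100, 100, density=0.05, format="csr")
X = np.random.randn(100, 16).astype(np.float32)
y = torch.randint(0, 4, (100,))

H = torch.from_numpy(precompute_hops(A, X, r=3).astype(np.float32))
mlp = nn.Sequential(nn.Linear(H.shape[1], 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(5):  # full-batch here for brevity; mini-batching H is trivial
    opt.zero_grad()
    loss = nn.functional.cross_entropy(mlp(H), y)
    loss.backward()
    opt.step()
```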
NeuralCam Live released on #ProductHunt

The app turns an iPhone into a better camera for Zoom calls, with automatic blurring of unwanted gestures.
It was clear that the global pandemic and the pressure towards remote culture would become a foundation for new ideas and solutions such as this one.
There is nothing groundbreaking about the technology, but execution and market are what matter. Apple or Google might even buy this startup instead of simply copying the features and making default cameras smarter.


ProductHunt: https://www.producthunt.com/posts/neuralcam-live

#aiproduct #dataproduct #camera #aicamera #cv #DL
Don't hesitate to click the «Comment» button and share your ideas or links to other pandemic solutions
Anonymous Poll
49%
Will share
51%
Not today
๐Ÿ“ Post "Google AI Blog: On-device Supermarket Productโ€ฆ" published, discuss!
NLP Newsletter 14: NLP beyond English, Big Bird, monitoring ML models, breaking into NLP, arXiv dataset, …
by Elvis Saravia @dair.ai

From our point of view, the most interesting links in this issue are:
* demos and applications of GPT-3
* monitoring ML models
* Big Bird: Transformers for Longer Sequences, which reduces the complexity of the attention mechanism to linear in the number of tokens
* the Contradictory, My Dear Watson competition: detecting contradiction and entailment in multilingual text using TPUs
* the Hate Speech and Offensive Content Identification in Indo-European Languages competition
* Why You Should Do NLP Beyond English by Sebastian Ruder
* CoVoST V2: expanding the largest, most diverse multilingual speech-to-text translation dataset
* a panel discussion about the future of conversational AI systems
* …


blog post: https://dair.ai/NLP_Newsletter_14-en/

#nlp #news
REALM: Integrating Retrieval into Language Representation Models
by Google Research

A new paper from Google with a novel approach to language model pre-training, which augments a language representation model with a knowledge retriever.

The idea is the following: we take a sentence or a piece of text and augment it with additional retrieved knowledge (the original text and the additional texts are passed to the model together).

An example:
The masked text is:

We paid twenty __ at the Buckingham Palace gift shop.


The knowledge retriever could add the following information to it:
Buckingham Palace is the London residence of the British monarchy.
The official currency of the United Kingdom is the Pound.
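As a toy illustration of this retrieve-then-read idea (not REALM itself), the sketch below uses a trivial TF-IDF retriever in place of REALM's learned dense retriever and an off-the-shelf masked language model as the reader; the knowledge snippets are the ones from the example above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

knowledge = [
    "Buckingham Palace is the London residence of the British monarchy.",
    "The official currency of the United Kingdom is the Pound.",
]
query = "We paid twenty [MASK] at the Buckingham Palace gift shop."

# "Retriever": a simple TF-IDF similarity stands in for REALM's learned dense retriever.
vec = TfidfVectorizer().fit(knowledge + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(knowledge))[0]
context = knowledge[sims.argmax()]

# "Reader": an off-the-shelf MLM conditioned on the retrieved context.
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill(f"{context} {query}")[0]["token_str"])
```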


blog post: https://ai.googleblog.com/2020/08/realm-integrating-retrieval-into.html
paper: https://arxiv.org/abs/2002.08909
github: https://github.com/google-research/language/tree/master/language/realm

#nlp #languagemodel #knowledgeretriever #icml2020
A utility tool powered by fzf for using git interactively.

This tool is designed to help you use git more efficiently. It's lightweight and easy to use.

Also integrates with: diff-so-fancy, delta, bat, emoji-cli.

https://github.com/wfxr/forgit

#shell #git
Forwarded from Graph Machine Learning
The Quantum Graph Recurrent Neural Network

This demonstration by PennyLane investigates quantum graph recurrent neural networks (QGRNN), the quantum analogue of a classical graph recurrent neural network and a subclass of the more general quantum graph neural network ansatz. Both the QGNN and QGRNN were introduced in this paper (2019) by Google X.
minGPT – a minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training
by Karpathy

Small, clean, interpretable and educational, as most of the currently available implementations are a bit sprawling. This one is approximately 300 lines of code, including boilerplate and a totally unnecessary custom causal self-attention module. All that's going on is that a sequence of indices goes into a sequence of transformer blocks, and a probability distribution over the next index comes out.
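As a rough sketch of that description (my own toy code, not minGPT itself), here is a tiny GPT-style model built from stock PyTorch modules: token indices go in, a stack of transformer blocks with a causal mask processes them, and a distribution over the next index comes out.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGPT(nn.Module):
    def __init__(self, vocab_size, block_size, n_layer=2, n_head=4, n_embd=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)
        self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))
        layer = nn.TransformerEncoderLayer(n_embd, n_head, 4 * n_embd, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(n_embd, vocab_size)

    def forward(self, idx):
        b, t = idx.shape
        x = self.tok_emb(idx) + self.pos_emb[:, :t]
        # Causal mask: position i may only attend to positions <= i.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # (batch, time, vocab) logits for the next index

model = TinyGPT(vocab_size=100, block_size=16)
logits = model(torch.randint(0, 100, (2, 16)))
probs = F.softmax(logits[:, -1], dim=-1)  # distribution over the next token
```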

With a BPE encoder, distributed training and maybe FP16, this implementation may be able to reproduce GPT-1/GPT-2 results, though he hasn't tried ($$$). GPT-3 is likely out of reach, as his understanding is that it does not fit into GPU memory and requires a more careful model-parallel treatment.


https://twitter.com/karpathy/status/1295410274095095810?s=20

#nlp #karpathy #gpt #torch
Language-agnostic BERT Sentence Embedding

The authors adapt multilingual BERT to produce language-agnostic sentence embeddings for 109 languages.
The model combines masked language model (MLM) and translation language model (TLM) pretraining with a translation ranking task using bidirectional dual encoders.
The resulting multilingual sentence embeddings improve average bi-text retrieval accuracy over 112 languages to 83.7% on Tatoeba (the previous state of the art was 65.5%).
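A hedged usage sketch: the official release is the TF Hub model linked below, but assuming the sentence-transformers port of LaBSE is available, getting cross-lingual embeddings and a retrieval-style similarity score looks roughly like this (the sentences are made up).

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
sentences = [
    "The cat sits on the mat.",        # English
    "Die Katze sitzt auf der Matte.",  # German translation of the first sentence
    "I like pizza.",                   # unrelated sentence
]
emb = model.encode(sentences, convert_to_tensor=True, normalize_embeddings=True)
# With normalised embeddings the dot product equals cosine similarity:
# the English/German pair should score much higher than the unrelated pair.
print((emb[0] @ emb[1]).item(), (emb[0] @ emb[2]).item())
```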

blogpost: https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html
paper: https://arxiv.org/abs/2007.01852
model on TF Hub: https://tfhub.dev/google/LaBSE/1

#deeplearning #transformers #nlp #tensorflow #sentenceembeddings
Philosopher AI — website to generate text with #GPT3

A tool to generate text on different topics. Sensitive topics such as sex, religion or even nationality are blocked.

A great way to spread awareness of #ai and to show non-technical friends that #Skynet is not a problem to be concerned about yet.

Website: https://philosopherai.com/philosopher/humanity-on-mars-73ac00

#nlu #nlp
A terminal-based presentation tool with colors and effects.

Present your stuff without leaving your terminal!

Personal opinion: this might be a really cool thing for live-coding sessions for people using vim/emacs. The context switch would be minimal.

https://github.com/vinayak-mehta/present

#python
Most of the Scots NLP models that used Wikipedia for training are flawed

One person, who had made 200,000 edits and written 20,000 articles on the Scots Wikipedia, was not actually using the Scots language but rather faking it. Since Wikipedia texts are often used as a training dataset for #NLU / #NLP / #NMT neural nets, the models that used it as input inherited the flaw.

Reddit thread: https://www.reddit.com/r/Scotland/comments/ig9jia/ive_discovered_that_almost_every_single_article/

#datasets #translation #scots #wikipedia
Forwarded from PDP-11🚀
The latest paper by David Patterson & the Google TPU team reveals details of the world's most efficient and one of the most powerful supercomputers for DNN acceleration: TPU v3, the one that was used to train BERT.
We definitely recommend reading the full text, but here are the key insights and TL;DR highlights.

Key Insight:
The co-design of an ML-specific programming system (TensorFlow), compiler (XLA), architecture (TPU), floating-point arithmetic (Brain float16), interconnect (ICI), and chip (TPUv2/v3) lets production ML applications scale at 96%–99% of perfect linear speedup and achieve 10x gains in performance/Watt over the most efficient general-purpose supercomputers.

More highlights:

๐Ÿฃ๐Ÿค๐Ÿ” Three generations
There are 3 generations of TPU now released, TPU v1 used fixpoint arithmetic and was used for inference only. TPU v2 and v3 operate in floating-point and used for training. TPU v4 results were presented in MLPerf summer release, but there is no public information available. The TPU architecture differs from CPU with
โ–ช๏ธ Two Dimensional array processing units (instead of 1D vector SIMDs in CPU)
โ–ช๏ธNarrower data (8-16 bits)
โ–ช๏ธ Drop complex CPU features - caches and branch prediction

๐Ÿฎ๐Ÿคœ๐Ÿค Fewer cores per chip (two oxen vs 1024 chickens)
NVidia put thousands of CUDA cores inside their chip. TPU v3 has only 2 TensorCores per chip. It's way easier to generate a program for 2 beefier cores than to swarm of wimpier cores.
Each TensorCore includes the following units:-
โ–ช๏ธICI(Inter Core Interconnects) - connect core across different chips-
โ–ช๏ธHBM, stacked DRAM on the same interposes substrate-
โ–ช๏ธCore Sequencer - manages instructions and performs scalar operations-
โ–ช๏ธVector Processing Unit, performs vectors operation for 1D and 2D vectors-
โ–ช๏ธMatrix Multiply Unit (MXU)

๐Ÿฑ๐Ÿถโ“ From inference to training chip
Key challenges on the way from inference chip V1 to training hardware V2
โ–ช๏ธ Harder parallelization
โ–ช๏ธ More computation
โ–ช๏ธ More memory
โ–ช๏ธ More programmability
โ–ช๏ธ Wider dynamic range of data

✂️🧮✂️ Brain Float
IEEE FP32 and FP16 use (1+8+23) and (1+5+10) bits for the sign, exponent, and mantissa respectively. In practice DNNs don't need the mantissa precision of FP32, but the dynamic range of FP16 is not enough; using FP16 also requires loss scaling.
The compromise, bf16, keeps the same 8 exponent bits as FP32 but a reduced mantissa: only 7 bits instead of 23.
BF16 reduces storage and power consumption, and no loss scaling in software is required.
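A tiny illustration of the trade-off described above (my own sketch, not how the TPU implements it): bf16 can be emulated by dropping the low 16 mantissa bits of an fp32 value, which preserves fp32's exponent range while reducing precision.

```python
import numpy as np

def truncate_to_bf16(x):
    """Emulate bf16 by zeroing the low 16 bits of an fp32 value (real hardware rounds)."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

print(truncate_to_bf16(3.14159265))             # mantissa precision drops to ~2-3 decimal digits
print(truncate_to_bf16(1e38), np.float16(1e5))  # bf16 keeps fp32's range; fp16 overflows to inf
```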

๐Ÿฉ๐Ÿงฌโšก๏ธ Torus topology and ICI
TPU v1 was an accelerator card for CPU 'based computer. TPUv2 and v3 are building blocks of the supercomputer. Chips connected with ICI interface, each running at ~500Gbits/s. ICU enables direct connection between chips, so no need of any extra interfaces. GPU/CPU based supercomputers have to apply NVLink and PCI-E inside computer chase and InfiniBand network and switches to connect them.
Chips in TPUv2 and v3 clusters are connected in 2D Torus topology (doughnut ) and achieve an unbelievable linear scale of performance growth with increasing of chips number.
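A toy sketch of the topology (illustrative grid size, my own code): on a 2D torus every chip has exactly four direct neighbours and the links wrap around the edges, which keeps hop counts low without any switches in between.

```python
def torus_neighbours(x, y, X=32, Y=32):
    """Neighbours of chip (x, y) on an X-by-Y 2D torus."""
    return [((x - 1) % X, y), ((x + 1) % X, y), (x, (y - 1) % Y), (x, (y + 1) % Y)]

print(torus_neighbours(0, 0))  # corner chips wrap around: [(31, 0), (1, 0), (0, 31), (0, 1)]
```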


🛠⚙️🖥 XLA compiler (to orchestrate them all)
TF programs are graphs of operations, where tensor arrays are first-class citizens. The XLA compiler front end transforms the TF graph into an intermediate representation, which is then efficiently mapped onto the selected TPU (or CPU/GPU) architecture. XLA maps TF graph parallelism across hundreds of chips, the TensorCores within a chip, and the multiple units within a core, and provides precise reasoning about memory use at every point in the program.
The young XLA compiler has more room to improve than the more mature CUDA stack.
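You can get a feel for XLA on a CPU/GPU today; a minimal sketch, assuming a recent TensorFlow where jit_compile=True routes a traced function through XLA:

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # ask TF to lower this function through XLA
def dense_layer(x, w, b):
    # XLA can fuse the matmul, bias add and activation into a single compiled kernel.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal((8, 128))
w = tf.random.normal((128, 64))
b = tf.zeros((64,))
print(dense_layer(x, w, b).shape)  # (8, 64)
```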


🌲🐰🦊 Green Power (forest animals approve)
The TPU v3 supercomputer has already climbed to the 4th row of the TOP500 ranking, but what is remarkable is that it demonstrates an overwhelming 146.3 GFLOPS/Watt. The nearest competitor's figure is about 10 times lower.

Original paper:
A Domain-Specific Supercomputer for Training Deep Neural Networks