Forwarded from Находки в опенсорсе (Finds in Open Source)
A utility tool powered by fzf for using git interactively.
This tool is designed to help you use git more efficiently. It's lightweight and easy to use.
Also integrates with: diff-so-fancy, delta, bat, emoji-cli.
https://github.com/wfxr/forgit
#shell #git
Forwarded from Graph Machine Learning
The Quantum Graph Recurrent Neural Network
This demonstration by PennyLane investigates quantum graph recurrent neural networks (QGRNNs), the quantum analogue of a classical graph recurrent neural network and a subclass of the more general quantum graph neural network (QGNN) ansatz. Both the QGNN and QGRNN were introduced in this paper (2019) by Google X.
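The core of the QGRNN ansatz is Trotterized time evolution under a parametrized Ising-type Hamiltonian defined on a graph. A rough PennyLane sketch of one such layer (toy graph, made-up weights, not the demo's actual code):

```python
import pennylane as qml
from pennylane import numpy as np

# Hypothetical toy interaction graph; purely illustrative.
edges = [(0, 1), (1, 2), (2, 0)]
n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qgrnn_layer(zz_weights, x_weights, t=0.1):
    # One Trotter step of exp(-iHt) for an Ising-type Hamiltonian on the graph:
    # ZZ couplings on the edges plus local X fields.
    for (i, j), w in zip(edges, zz_weights):
        qml.CNOT(wires=[i, j])
        qml.RZ(2 * w * t, wires=j)   # exp(-i w t Z_i Z_j) via CNOT-RZ-CNOT
        qml.CNOT(wires=[i, j])
    for q, w in enumerate(x_weights):
        qml.RX(2 * w * t, wires=q)   # exp(-i w t X_q)
    return [qml.expval(qml.PauliZ(q)) for q in range(n_qubits)]

print(qgrnn_layer(np.array([0.5, 0.3, 0.2]), np.array([0.1, 0.1, 0.1])))
```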
Clothing Dataset: Call for Action
Help to collect a public-domain dataset with images of clothes
Medium post: https://medium.com/data-science-insider/clothing-dataset-call-for-action-3cad023246c1
#dataset #clothing #cv #calltoarms
GANs used to create photorealistic images of Roman Emperors
Project post: https://voshart.com/ROMAN-EMPEROR-PROJECT
Medium: https://medium.com/@voshart/photoreal-roman-emperor-project-236be7f06c8f
#GAN #Art #history #DL
mingpt – a minimal pytorch re-implementation of the openai generative pretrained transformer (gpt) training
by karpathy
small, clean, interpretable and educational, as most of the currently available ones are a bit sprawling. this implementation is appropriately about 300 lines of code, including boilerplate and a totally unnecessary custom causal self-attention module. all that's going on is that a sequence of indices goes into a sequence of transformer blocks, and a probability distribution of the next index comes out.
with a bpe encoder, distributed training and maybe fp16 this implementation may be able to reproduce gpt-1/gpt-2 results, though he hasn't tried $$$. gpt-3 is likely out of reach, as his understanding is that it does not fit into gpu memory and requires a more careful model-parallel treatment.
https://twitter.com/karpathy/status/1295410274095095810?s=20
#nlp #karpathy #gpt #torch
Twitter
Andrej Karpathy
I wrote a minimal/educational GPT training library in PyTorch, am calling it minGPT as it is only around ~300 lines of code: https://t.co/79S9lShJRN +demos for addition and character-level language model. (quick weekend project, may contain sharp edges)
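that one-sentence summary (indices in, next-index distribution out) really is a few lines of pytorch. a hedged sketch in the spirit of mingpt – using the stock nn.TransformerEncoder instead of karpathy's hand-rolled blocks, with made-up sizes:

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    """Illustrative sketch: token indices in -> next-index distribution out."""
    def __init__(self, vocab_size=50257, block_size=128, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, block_size, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):                       # idx: (batch, seq) of token indices
        b, t = idx.shape
        x = self.tok_emb(idx) + self.pos_emb[:, :t]
        # additive causal mask: -inf above the diagonal blocks attention to the future
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)                       # logits over the next index at each position

model = TinyGPT()
logits = model(torch.randint(0, 50257, (2, 16)))  # (2, 16, 50257)
probs = torch.softmax(logits[:, -1], dim=-1)      # distribution over the next token
```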
Language-agnostic BERT Sentence Embedding
The authors adapt multilingual BERT to produce language-agnostic sentence embeddings for 109 languages.
The model combines masked language model (MLM) and translation language model (TLM) pretraining with a translation ranking task using bidirectional dual encoders.
The resulting multilingual sentence embeddings improve average bi-text retrieval accuracy over 112 languages to 83.7% on Tatoeba (the previous state of the art was 65.5%).
blogpost: https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html
paper: https://arxiv.org/abs/2007.01852
model on TF Hub: https://tfhub.dev/google/LaBSE/1
#deeplearning #transformers #nlp #tensorflow #sentenceembeddings
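A quick way to play with the embeddings without touching TF Hub is the sentence-transformers port of LaBSE (assuming that community model is available; the post itself links the original TF Hub module):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed community port of LaBSE on the Hugging Face hub.
model = SentenceTransformer("sentence-transformers/LaBSE")

sentences = ["dog", "Hund", "собака"]           # the same concept in three languages
emb = model.encode(sentences, normalize_embeddings=True)

# Language-agnostic: translations should land close together in embedding space.
print(np.round(emb @ emb.T, 3))                  # cosine similarity matrix
```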
Philosopher AI – a website to generate text with #GPT3
A tool to generate text on different topics. Sensitive topics such as sex, religion, or even nationality are blocked.
A great way to spread awareness of #ai and to show non-technical friends that #Skynet is not a problem to be concerned about yet.
Website: https://philosopherai.com/philosopher/humanity-on-mars-73ac00
#nlu #nlp
Forwarded from Находки в опенсорсе (Finds in Open Source)
A terminal-based presentation tool with colors and effects.
Present your stuff without leaving your terminal!
Personal opinion: this might be a really cool thing for live-coding sessions for people using vim/emacs. The context switch would be minimal.
https://github.com/vinayak-mehta/present
#python
Most of the Scots NLP models that used Wikipedia for training are flawed
One person, who had made 200,000 edits and written 20,000 articles on the Scots Wikipedia, was not using the Scots language but rather faking it. Since Wikipedia texts are often used as a dataset for training #NLU / #NLP / #NMT neural nets, the models that used it as input inherited the flaw.
Reddit thread: https://www.reddit.com/r/Scotland/comments/ig9jia/ive_discovered_that_almost_every_single_article/
#datasets #translation #scots #wikipedia
Reddit
From the Scotland community on Reddit: I've discovered that almost every single article on the Scots version of Wikipedia is written…
Forwarded from PDP-11
The latest paper by David Patterson & the Google TPU team reveals details of the world's most efficient and one of the most powerful supercomputers for DNN acceleration – TPU v3, the one which was used to train BERT.
We definitely recommend reading the full text, but here are the key insights and TL;DR highlights.
Key Insight:
The co-design of an ML-specific programming system (TensorFlow), compiler (XLA), architecture (TPU), floating-point arithmetic (Brain float16), interconnect (ICI), and chip (TPUv2/v3) lets production ML applications scale at 96–99% of perfect linear speedup, with 10x gains in performance/Watt over the most efficient general-purpose supercomputers.
More highlights:
Three generations
Three generations of TPU have been released so far. TPU v1 used fixed-point arithmetic and was used for inference only; TPU v2 and v3 operate in floating point and are used for training. TPU v4 results were presented in the MLPerf summer release, but there is no public information available yet. The TPU architecture differs from a CPU in:
- Two-dimensional array processing units (instead of the 1D vector SIMD units in CPUs)
- Narrower data (8–16 bits)
- Dropping complex CPU features such as caches and branch prediction
Fewer cores per chip (two oxen vs. 1024 chickens)
NVidia puts thousands of CUDA cores inside each chip; TPU v3 has only 2 TensorCores per chip. It's way easier to generate a program for 2 beefier cores than for a swarm of wimpier cores.
Each TensorCore includes the following units:
- ICI (Inter-Core Interconnect) – connects cores across different chips
- HBM – stacked DRAM on the same interposer substrate
- Core Sequencer – manages instructions and performs scalar operations
- Vector Processing Unit – performs vector operations on 1D and 2D vectors
- Matrix Multiply Unit (MXU)
From inference to training chip
Key challenges on the way from the inference chip (v1) to training hardware (v2):
- Harder parallelization
- More computation
- More memory
- More programmability
- Wider dynamic range of data
Brain Float
IEEE FP32 and FP16 use 1+8+23 and 1+5+10 bits for the sign, exponent, and mantissa respectively. In practice, DNNs don't need the mantissa precision of FP32, but the dynamic range of FP16 is not enough, and using FP16 also requires loss scaling. The compromise, bf16, keeps the same 8 exponent bits as FP32 but a reduced mantissa – only 7 bits instead of 23. BF16 reduces memory usage and power consumption, with no loss scaling required in software.
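Since bf16 is essentially fp32 with the low 16 mantissa bits dropped, the conversion can be sketched in a few NumPy bit operations (a simplified round-to-nearest-even; real hardware also handles NaN/overflow cases more carefully):

```python
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Round float32 values to bfloat16, returned as uint16 bit patterns."""
    bits = x.astype(np.float32).view(np.uint32)
    # round-to-nearest-even on the 16 bits we are about to drop
    rounding_bias = ((bits >> 16) & 1) + 0x7FFF
    return ((bits + rounding_bias) >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    # bf16 is the top half of fp32, so shifting back up restores a valid float32
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.141592653589793, 1e-3, 65504.0], dtype=np.float32)
x_bf16 = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(x, "->", x_bf16)   # same dynamic range, ~3 decimal digits of precision
```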
Torus topology and ICI
TPU v1 was an accelerator card for a CPU-based computer; TPU v2 and v3 are building blocks of a supercomputer. Chips are connected through the ICI interface, each link running at ~500 Gbit/s. ICI enables direct connections between chips, so no extra interfaces are needed, while GPU/CPU-based supercomputers have to use NVLink and PCIe inside the chassis plus an InfiniBand network and switches to connect the nodes.
Chips in TPU v2 and v3 clusters are connected in a 2D torus topology (a doughnut) and achieve an almost unbelievable linear performance scaling as the number of chips grows.
XLA compiler (to orchestrate them all)
TF programs are graphs of operations where tensor arrays are first-class citizens. The XLA compiler front end transforms the TF graph into an intermediate representation, which is then efficiently mapped onto the selected TPU (or CPU/GPU) architecture. XLA maps TF graph parallelism across hundreds of chips, the TensorCores within each chip, and the multiple units within each core, and it provides precise reasoning about memory use at every point in the program.
The young XLA compiler has more room to improve than the more mature CUDA stack.
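On the user side, handing a chunk of TF to XLA is a one-flag change. A minimal sketch (jit_compile needs a recent TF 2.x; older releases used experimental_compile):

```python
import tensorflow as tf

# A toy "graph of ops" - matmul + bias + relu - the kind of block XLA can fuse.
@tf.function(jit_compile=True)        # hand this function to the XLA compiler
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 128])
w = tf.random.normal([128, 64])
b = tf.zeros([64])
print(dense_relu(x, w, b).shape)      # (8, 64), computed by the XLA-compiled kernel
```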
Green Power (forest animals approve)
The TPU v3 supercomputer has already climbed to 4th place in the TOP500 ranking, but what is remarkable is that it demonstrates an overwhelming 146.3 GFlops/Watt. The nearest competitor's number is about 10 times lower.
Original paper: A Domain Specific Computer for Training DNNs
Open Data Science Online Event Announcement & Call for Speakers!
Data Fest 2020 - Online & Global, September 19-20
Data Fest is a global free conference series where we unite researchers, engineers, and developers around Data Science and related areas. Most of the tracks (sections) will be in English, some of them in Russian.
It was tricky to promise to expand geographically in 2020, but we've managed. We've completely reimagined what an online conference can be and invite you to try:
• YouTube livestream on September 19-20, from 11:00 to 19:00 Moscow time.
• Networking in spatial.chat - the closest you can get to an online festival, with a great number of rooms on topics of interest.
• All materials will be hosted on the ODS.ai platform in our new format - online tracks.
All materials will be open; however, to participate in the networking you will have to register on the ODS.AI website: https://datafest.ru/2020/ (English version coming soon, on Thursday).
After the Data Fest is over, all the valuable information and insights gathered during the preparation and the event will be published on the ODS.ai platform as tracks:
• The tracks are united by topics - there are ML, Graph ML, Big Data, from pet project to startup, Career, and many more. We already have 35+ announced tracks, and the list is not final yet - everyone should find something of interest.
• Data Fest is literally the premiere event, and some tracks' organisers will host their regular events in the weeks following the Fest. So stay tuned.
• Some tracks will be in English - remember we said that ODS.AI is going global?
If you want to become a part of a great story that is about to begin, the call for speakers is publicly open!
To become a part of the program, just write to the track organizers directly; you can find a list of them on the website.
Or, if you feel shy, you can simply submit your talk ideas via this form: https://forms.gle/8qPMu2pndHZcNxvL9
If you are willing to give a talk on the «DS without ML» topic - from Excel data science to any heuristics and cases of applying algorithms to business tasks - reach out directly to @malev.
Stay safe, stay sane, and see you at Data Fest Online!
https://datafest.ru/2020/
You can ask your questions in the comments below ⬇️
Nvidia announced the new RTX 3090 card
The RTX 3090 is roughly 2 times more powerful than the 2080.
There is probably no point in getting the 3080, because its RAM is only 10 GB.
But what really matters is how it was presented. A purely technological product aimed mostly at professionals, tech-heads, and gamers was presented with absolute brilliance. That is much more exciting than the release itself.
YouTube: https://www.youtube.com/watch?v=E98hC9e__Xs
#Nvidia #GPU #techstack
Lo-Fi Player
The team behind the Magenta project (which, obviously, does research on deep learning and music powered by TensorFlow at Google) released a new fun project, Lo-Fi Player, powered by their open-source library magenta.js.
It's basically a lo-fi music generator - a genre popular on YouTube streams and the like. You can customize the vibe to your own taste, from sad to moody, slow to fast, etc.
It is based on their earlier work: MusicVAE to sample the latent space of music and MelodyRNN to generate note sequences for different instruments. The project is not about new research, but about showing what you can do with an existing library in a creative way.
They also run a YouTube stream where you can listen to lo-fi generated by the application, and users in the chat can tune the player together :)
#magenta #lo-fi #music #google #tensorflow #fun
Lo-Fi Player
Interactive lofi beat player.
Forwarded from Graph Machine Learning
DeepMind's Traffic Prediction with Advanced Graph Neural Networks
A new blog post by DeepMind describes how you can apply GNNs to travel time prediction. There are not many details about the model itself (which makes me wonder whether a deep net trained across all supersegments would suffice), but there are curious details about training.
1. As the road network is huge, they sample subgraphs in proportion to traffic density. This should be similar to GraphSAGE-like approaches (a minimal sketch of such density-proportional sampling follows after this list).
2. Sampled subgraphs can vary a lot within a single batch, so they use RL to select subgraphs properly. I guess it's some form of imitation learning that selects graphs in a batch based on some objective value.
3. They use the MetaGradients algorithm to select a learning rate; it was previously used to parametrize returns in RL, and I guess here it parametrizes the learning rate instead.
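A purely illustrative sketch of the density-proportional sampling from point 1 (nothing from DeepMind's code, all names made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical supersegments, each with a traffic-density score.
n_segments = 1000
density = rng.gamma(shape=2.0, scale=1.0, size=n_segments)

def sample_batch(batch_size=32):
    """Pick seed supersegments with probability proportional to traffic density."""
    p = density / density.sum()
    seeds = rng.choice(n_segments, size=batch_size, replace=False, p=p)
    # In a real pipeline each seed would then be expanded into its local subgraph
    # of neighbouring road segments, GraphSAGE-style.
    return seeds

print(sample_batch()[:5])
```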
Google DeepMind
Traffic prediction with advanced Graph Neural Networks
#NVidia performance per dollar
New Seaborn visualization library release
- completely new and improved distributions module, with a modern API and many new features, like these histograms and kernel density plots
- support for empirical distribution plots, a better way to compare multiple distributions
- better overall handling of categorical, datetime, and log-scaled data
- new perceptually-uniform colormaps that are optimized for use in scatter or line plots
- an API update that requires keyword arguments in most places, laying the groundwork for smoother integration of planned future enhancements
Medium post: https://medium.com/@michaelwaskom/announcing-the-release-of-seaborn-0-11-3df0341af042
What's new: https://seaborn.pydata.org/whatsnew.html
#visualization #seaborn
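A quick taste of the new distributions API (a minimal example using one of seaborn's built-in datasets):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# penguins is one of seaborn's bundled example datasets
penguins = sns.load_dataset("penguins")

# new in 0.11: histplot (with optional KDE overlay) replaces the old distplot
sns.histplot(data=penguins, x="flipper_length_mm", hue="species", kde=True)
plt.show()

# figure-level interface for empirical CDFs, also new in 0.11
sns.displot(data=penguins, x="flipper_length_mm", hue="species", kind="ecdf")
plt.show()
```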
Full body 3D scan with the iPhone
Our friends from in3D.io released their app for digitizing humans with a simple UX - just a single scan while doing a 360° turn. They use the iPhone's TrueDepth camera to get photoreal quality.
The avatar is auto-rigged, and there are a bunch of funny animations available in the app. You can export the model to a file, GTA V, or Second Life.
Looking forward to Fortnite integration after Epic Games solve their issues!
Website: https://in3D.io
App: https://apple.co/3h7LEsT
#3dmodel #3dscan #truedepth #dstartup #ios
Forwarded from Binary Tree
Diagrams lets you draw cloud system architecture in Python code. It was born for prototyping new system architecture designs without any design tools, and you can also describe or visualize an existing system architecture. Diagrams currently supports the main major providers, including AWS, Azure, GCP, Kubernetes, Alibaba Cloud, Oracle Cloud, etc. It also supports on-premise nodes, SaaS, and major programming frameworks and languages.
#python, #diagram, #drawing, #prototyping, #architecture
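A minimal example along the lines of the project's README (assumes Graphviz is installed; renders a PNG in the current directory):

```python
from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB

# A load balancer fanning out to three workers that write to one database.
with Diagram("Grouped Workers", show=False, direction="TB"):
    ELB("lb") >> [EC2("worker1"),
                  EC2("worker2"),
                  EC2("worker3")] >> RDS("events")
```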
pytorch lightning bolts
from linear, logistic regression on tpu-s to pre-trained gan-s
plb – a collection of pytorch lightning implementations of popular models that are well tested and optimized for speed on multiple gpus and tpus
it is a new community-built dl research and production toolbox, featuring a collection of well-established and sota models and components, pre-trained weights, callbacks, loss functions, datasets, and data modules
everything is implemented in lightning, tested, benchmarked, documented, and works on cpus, tpus, gpus, and 16-bit precision
more you can read in the blog post
github: https://github.com/PyTorchLightning/pytorch-lightning-bolts
#pytorchlightning #bolts #multiple
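a rough sketch of the "logistic regression on mnist" flavour from the bolts docs of that time – module paths and argument names may have moved in later releases, so treat it as an assumption:

```python
# assumed module paths from the 0.2.x-era bolts docs
import pytorch_lightning as pl
from pl_bolts.datamodules import MNISTDataModule
from pl_bolts.models.regression import LogisticRegression

dm = MNISTDataModule(".")                               # downloads MNIST into the current dir
model = LogisticRegression(input_dim=28 * 28, num_classes=10, learning_rate=0.001)

trainer = pl.Trainer(max_epochs=2)                      # on a tpu vm you would add accelerator flags here
trainer.fit(model, datamodule=dm)
```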