Federated Learning: Collaborative Machine Learning without Centralized Training Data
https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
Glow is a machine learning compiler and execution engine for hardware accelerators. It is designed to be used as a backend for high-level machine learning frameworks, and it enables state-of-the-art compiler optimizations and code generation for neural network graphs.
https://github.com/pytorch/glow
Papers With Code now includes 950+ ML tasks, 500+ evaluation tables (including SOTA results), and 8,500+ papers with code.
https://paperswithcode.com
huggingface.co
Trending Papers - Hugging Face
Your daily dose of AI research from AK
New group link
https://xn--r1a.website/joinchat/A3HTSj3_zWPMCwcByv1aKg
Training deep learning models with vast amounts of data is necessary to achieve accurate results. Data in the wild, or even prepared datasets, is usually not in a form that can be fed directly into a neural network. This is where NVIDIA DALI data preprocessing comes into play: DALI is a set of highly optimized building blocks plus an execution engine that accelerates input-data preprocessing for deep learning applications.
https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/
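For a feel for the API, below is a minimal sketch of a DALI pipeline that reads, decodes, and resizes JPEGs on the GPU. The images/ directory layout is an assumption, and the op names follow DALI's original ops-based Python API, which has evolved since, so treat this as an outline rather than canonical usage.
```python
# Minimal DALI pipeline sketch: file read -> GPU decode -> GPU resize.
# Assumes images/ holds JPEGs in one subdirectory per class.
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types

class SimplePipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super().__init__(batch_size, num_threads, device_id)
        self.input = ops.FileReader(file_root="images/", random_shuffle=True)
        # "mixed" parses on the CPU and decodes on the GPU.
        self.decode = ops.ImageDecoder(device="mixed", output_type=types.RGB)
        self.resize = ops.Resize(device="gpu", resize_x=224, resize_y=224)

    def define_graph(self):
        jpegs, labels = self.input()
        images = self.decode(jpegs)
        return self.resize(images), labels

pipe = SimplePipeline(batch_size=32, num_threads=4, device_id=0)
pipe.build()
images, labels = pipe.run()  # one preprocessed batch, ready for the framework
```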
Forwarded from National and International Events
Human and Machine Vision
Sharif University of Technology Neuroscience Symposium
- Introduction to the human visual system
- Digital image processing and machine vision
- Introduction to medical imaging and its applications
- Introduction to medical image processing
🗓 Date: Thursday, Esfand 2 (February 21, 2019)
💶 Registration fee: 150,000 tomans
🎓 Accredited certificate from Sharif University of Technology
⚠️ Limited registration capacity
https://pedu.sharif.edu/events/details/1034
#Esfand1397
#Workshop #Machine_Vision #Machine_Learning #Python #OpenCV #TensorFlow #SNS #FK
#Tehran #SUT
sns.ee.sharif.edu
https://xn--r1a.website/convent/3445
A Persian translation of the Stanford CS 230 (Deep Learning) cheatsheets is now available:
Deep Learning Tips and Tricks cheatsheet
https://stanford.edu/~shervine/l/fa/teaching/cs-230/cheatsheet-deep-learning-tips-and-tricks
Convolutional Neural Networks cheatsheet
https://stanford.edu/~shervine/l/fa/teaching/cs-230/cheatsheet-convolutional-neural-networks
Recurrent Neural Networks cheatsheet
https://stanford.edu/~shervine/l/fa/teaching/cs-230/cheatsheet-recurrent-neural-networks
stanford.edu
CS 230 - Deep Learning Tips and Tricks cheatsheet
Teaching page of Shervine Amidi, Graduate Student at Stanford University.
Notebooks for Neural Processes (NPs) and Attentive Neural Processes (ANPs)
https://github.com/deepmind/neural-processes/blob/master/attentive_neural_process.ipynb
GitHub
neural-processes/attentive_neural_process.ipynb at master · deepmind/neural-processes
This repository contains notebook implementations of the following Neural Process variants: Conditional Neural Processes (CNPs), Neural Processes (NPs), Attentive Neural Processes (ANPs). - neural-...
PlotNeuralNet
LaTeX code for drawing neural networks for reports and presentations. Have a look at the examples to see how they are made. Additionally, let's consolidate any improvements you make and fix any bugs to help more people use this code.
https://github.com/HarisIqbal88/PlotNeuralNet
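For a sense of the workflow, here is a minimal sketch that uses the repo's Python helpers to emit a TikZ diagram you then compile with pdflatex. The helper names and signatures are copied from the project's example scripts and may differ between versions.
```python
# Sketch: generate a tiny conv -> pool diagram as a .tex file.
import sys
sys.path.append('.')                 # assumes you run this from the repo root
from pycore.tikzeng import *         # PlotNeuralNet's TikZ helper functions

arch = [
    to_head('.'),                    # path to the repo (where layers/ lives)
    to_cor(),
    to_begin(),
    to_Conv("conv1", 224, 64, offset="(0,0,0)", to="(0,0,0)",
            height=40, depth=40, width=2),
    to_Pool("pool1", offset="(0,0,0)", to="(conv1-east)"),
    to_end(),
]

to_generate(arch, 'my_net.tex')      # compile my_net.tex to get the figure
```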
Seven Myths in Machine Learning Research
Myth 1: TensorFlow is a Tensor manipulation library
Myth 2: Image datasets are representative of real images found in the wild
Myth 3: Machine Learning researchers do not use the test set for validation
Myth 4: Every datapoint is used in training a neural network
Myth 5: We need (batch) normalization to train very deep residual networks
Myth 6: Attention > Convolution
Myth 7: Saliency maps are robust ways to interpret neural networks
https://crazyoscarchang.github.io/2019/02/16/seven-myths-in-machine-learning-research/
Google AI Blog: Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models
http://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html
Google AI Blog
Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models
Feature visualizations, letting us see through the eyes of the neural network:
https://distill.pub/2019/activation-atlas/
Distill
Activation Atlas
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.
#TensorFlow Lite running on a Cortex M4 developer board, handling simple speech keyword recognition
https://petewarden.com/2019/03/07/launching-tensorflow-lite-for-microcontrollers/
Pete Warden's blog
Launching TensorFlow Lite for Microcontrollers
I’ve been spending a lot of my time over the last year working on getting machine learning running on microcontrollers, and so it was great to finally start talking about it in public for the…
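The on-device interpreter is C++, but the host-side step is exporting a model as a TFLite flatbuffer small enough for microcontroller flash. A minimal sketch, assuming a trained Keras keyword-spotting model at a hypothetical path, and using the TF 2.x converter API (which postdates this post):
```python
import tensorflow as tf

# Hypothetical path to a trained keyword-spotting Keras model.
model = tf.keras.models.load_model('keyword_model.h5')

# Convert to a TFLite flatbuffer with default (weight) quantization to
# shrink the model enough to fit in microcontroller flash.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('micro_speech.tflite', 'wb') as f:
    f.write(tflite_model)
# The .tflite file is then embedded in the firmware as a C byte array
# (e.g. via `xxd -i micro_speech.tflite`) for the TFLite Micro interpreter.
```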
A step-by-step #tutorial showing how to perform #FederatedLearning using the same infrastructure #Google uses on tens of millions of smartphones, with #TensorFlow
https://medium.com/tensorflow/introducing-tensorflow-federated-a4147aa20041
Medium
Introducing TensorFlow Federated
Posted by Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist)
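For a flavor of what the tutorial walks through, here is a minimal sketch of simulated federated averaging on the EMNIST split that ships with TFF. The API names follow the library's tutorials but have shifted across releases, so treat this as an outline rather than the post's exact code.
```python
import tensorflow as tf
import tensorflow_federated as tff

# EMNIST comes pre-partitioned by writer, simulating per-client data.
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(ds):
    # Flatten 28x28 images to 784-vectors and batch per client.
    return ds.map(lambda e: (tf.reshape(e['pixels'], [-1]), e['label'])).batch(20)

clients = emnist_train.client_ids[:10]
federated_data = [preprocess(emnist_train.create_tf_dataset_for_client(c))
                  for c in clients]

def model_fn():
    # A fresh (uncompiled) Keras model per invocation, wrapped for TFF.
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,))])
    return tff.learning.from_keras_model(
        model,
        input_spec=federated_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Each round: broadcast the model, train locally on every client, average.
trainer = tff.learning.build_federated_averaging_process(
    model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
state = trainer.initialize()
for round_num in range(5):
    state, metrics = trainer.next(state, federated_data)
    print(round_num, metrics)
```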
Forwarded from Tensorflow(@CVision) (Alireza Akhavan)
#Tutorial #Video #Source_Code
Slides:
https://www.slideshare.net/Alirezaakhavanpour/deep-face-recognition-oneshot-learning
Face Recognition, Part 1 - Alireza Akhavanpour
• one-shot learning: Face Verification & Recognition
• Siamese network
• FaceNet triplet loss
https://www.aparat.com/v/xdC7r
Face Recognition, Part 2 - Alireza Akhavanpour
• one-shot learning: Face Verification & Recognition
• Discriminative Feature
• Center loss
https://www.aparat.com/v/qmo3u
The code from these talks is available on the Part AI GitHub, as well as at the GitHub address below:
https://github.com/Alireza-Akhavan/deep-face-recognition
🙏Thanks to: @partdpai
#face #face_recognition #verification
www.slideshare.net
Deep face recognition & one-shot learning
Purchase the course: http://class.vision/deep-face-recognition/ one-shot learning and Face Verification Recognition Siamese network Discriminative Feature Facenet pape…
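As a companion to the talks, here is a minimal sketch of the FaceNet-style triplet loss covered in part one, assuming the embeddings are already L2-normalized (the 0.2 margin follows the FaceNet paper):
```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss over batches of L2-normalized embeddings."""
    # Squared L2 distances for anchor-positive and anchor-negative pairs.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # Hinge: penalize only triplets where the negative is not at least
    # `margin` farther from the anchor than the positive is.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```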
A new summary of The Evolved #Transformer
The recipe:
1. Take the human-designed Transformer
2. Add a new neural architecture search - PDH (Progressive Dynamic Hurdles; see the toy sketch below)
3. Whisk with ~200 TPUs to get a much better model - #ET
https://www.lyrn.ai/2019/03/12/the-evolved-transformer/
Lyrn.AI
The Evolved Transformer – Enhancing Transformer with Neural Architecture Search | Lyrn.AI
Neural architecture search (NAS) is the process of algorithmically searching for new designs of neural networks. Though researchers have developed sophisticated architectures over the years, the ability to find the most efficient ones is limited, and recently…
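To make the search step concrete, here is a toy, purely illustrative sketch of tournament-selection evolution with Progressive Dynamic Hurdles: a child architecture earns its full training budget only if a cheap early evaluation clears a fitness hurdle. None of this is the paper's code; mutate and train_eval are stand-ins.
```python
import random

def mutate(arch):
    # Toy mutation: flip one binary architectural choice at random.
    i = random.randrange(len(arch))
    return arch[:i] + [1 - arch[i]] + arch[i + 1:]

def train_eval(arch, steps):
    # Stand-in for "train the candidate for `steps` and return dev fitness".
    return (sum(arch) / len(arch)) * min(1.0, steps / 10_000)

def evolve(pop_size=20, arch_len=8, rounds=200,
           hurdles=((1_000, 0.05), (10_000, 0.0))):
    archs = [[random.randint(0, 1) for _ in range(arch_len)]
             for _ in range(pop_size)]
    scored = [(train_eval(a, 10_000), a) for a in archs]
    for _ in range(rounds):
        # Tournament selection: best of a random sample becomes the parent.
        parent = max(random.sample(scored, 5), key=lambda fa: fa[0])
        child = mutate(parent[1])
        fitness = 0.0
        for steps, hurdle in hurdles:
            fitness = train_eval(child, steps)
            if fitness < hurdle:      # fails the hurdle: no further budget
                break
        scored.remove(min(scored, key=lambda fa: fa[0]))  # drop the weakest
        scored.append((fitness, child))
    return max(scored, key=lambda fa: fa[0])

print(evolve())
```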
Rich Sutton: The Bitter Lesson
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Whiteson strongly disagrees with Sutton's view that the history of AI teaches us that leveraging computation always eventually wins out over leveraging human knowledge. As Whiteson puts it: "Sutton says that the intrinsic complexity of the world means we shouldn't build prior knowledge into our systems. But I conclude the exact opposite: that complexity leads to crippling intractability for the search and learning approaches on which Sutton proposes to rely."
https://twitter.com/shimon8282/status/1106534178676506624?s=19
Twitter
Shimon Whiteson
Rich Sutton has a new blog post entitled “The Bitter Lesson” (https://t.co/K3GEgyGsD4) that I strongly disagree with. In it, he argues that the history of AI teaches us that leveraging computation always eventually wins out over leveraging human knowledge.
The first five lectures of Stanford's CS224n, Winter 2019 (Natural Language Processing), have been released.
The remaining videos will be published as the course officially wraps up.
https://www.youtube.com/playlist?list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z
YouTube
Stanford CS224N: Natural Language Processing with Deep Learning Course | Winter 2019
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai