Neural Networks | Нейронные сети
11.6K subscribers
802 photos
184 videos
170 files
9.45K links
Everything about machine learning

For any questions, contact @notxxx1

🎥 Machine Learning 12. RNN
👁 1 view · 5060 sec.
Lecture, 30.04.2019
Lecturer: Радослав Нейчев

Filmed by: Александр Гришутин
Edited by: Александр Васильев
Dataset bridges human vision and machine learning

Neuroscientists and computer vision scientists say a new dataset of unprecedented size—comprising brain scans of four volunteers who each viewed 5,000 images—will help researchers better understand ...
🎥 Deep Neural Networks step by step final prediction model #part 5
👁 1 view · 1737 sec.
In this tutorial we will use the functions implemented in the previous parts to build a deep network and apply it to cat-vs-dog classification. Hopefully we will see an improvement in accuracy relative to our previous logistic regression implementation. After this part we will be able to build and apply a deep neural network to supervised learning using only the numpy library.

Full tutorial code and the cats-vs-dogs image dataset can be found on my GitHub page: https://github.com/pythonlessons/Logistic-r
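As the description says, prediction with a trained deep network is just a forward pass through the layers. Below is a minimal numpy sketch of such an L-layer prediction step; the function and parameter names are illustrative, not taken from the linked tutorial.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def predict(X, params):
    """Forward pass through a deep net: ReLU hidden layers, sigmoid output.

    X:      (n_features, m) array, examples stored column-wise
    params: list of (W, b) tuples, one per layer
    Returns a (1, m) array of 0/1 labels (label convention assumed)."""
    A = X
    for W, b in params[:-1]:
        A = relu(W @ A + b)          # hidden layers
    W, b = params[-1]
    probs = sigmoid(W @ A + b)       # output probabilities
    return (probs > 0.5).astype(int)
```

A usage example would supply `params` learned in the earlier parts of the series; here the shapes are what matter, not the values.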
🎥 Deep Learning and Modern NLP - Zachary S Brown
👁 1 view · 5393 sec.
In this tutorial, we’ll cover the fundamental building blocks of neural network architectures and how they are utilized to tackle problems in modern natural language processing. Topics covered will include an overview of language vector representations, text classification, named entity recognition, and sequence to sequence modeling approaches. An emphasis will be placed on the shape of these types of problems from the perspective of deep learning architectures. This will help to develop an intuition for id
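One of the listed building blocks, language vector representations, can be illustrated with the simplest scheme: averaging pretrained word vectors into a fixed-size text embedding. This is a hedged sketch; the `vectors` lookup table stands in for any pretrained embedding set and is not something from the talk.

```python
import numpy as np

def embed_text(tokens, vectors, dim=50):
    """Average word vectors into a fixed-size text representation.

    tokens:  list of strings
    vectors: dict mapping token -> np.ndarray of shape (dim,)
    Unknown tokens are skipped; an all-unknown text maps to the zero vector."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

The resulting vector can feed any downstream classifier, which is the shape of the text-classification problems the talk describes.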
🎥 Leveraging NLP and Deep Learning for Document Recommendations in the Cloud - Guoqiong Song, Intel
👁 1 view · 1254 sec.
Efficient recommender systems are critical for the success of many industries, such as job recommendation, news recommendation, e-commerce, etc. This talk will illustrate how to build an efficient document recommender system by leveraging Natural Language Processing (NLP) and Deep Neural Networks (DNNs). The end-to-end flow of the document recommender system is built on AWS at scale, using Analytics Zoo for Spark and BigDL. The system first processes text-rich documents into embeddings by incorporating Global
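The core retrieval step of such a system, ranking documents by embedding similarity, can be sketched in a few lines of numpy. This is an illustrative cosine-similarity ranker, not the talk's actual Analytics Zoo/BigDL pipeline.

```python
import numpy as np

def recommend(query_vec, doc_vecs, top_k=3):
    """Rank documents by cosine similarity to a query embedding.

    query_vec: (dim,) embedding of the query document
    doc_vecs:  (n_docs, dim) matrix of candidate document embeddings
    Returns the indices of the top_k most similar documents."""
    q = query_vec / np.linalg.norm(query_vec)
    D = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = D @ q                      # cosine similarity per document
    return np.argsort(-sims)[:top_k]  # highest similarity first
```

In production, a system like the one described would replace this brute-force scan with an approximate nearest-neighbor index, but the ranking criterion is the same.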
🎥 TWiML x Fast ai v3 Deep Learning Part 2 Study Group - Lesson 12 - Spring 2019 1080p
👁 1 view · 5738 sec.

This video is a recap of our TWiML Online Fast.ai Deep Learning Part 2 Study Group.

In this session, we had a mini presentation on Deep Representation Learning for Trigger Monitoring and a review of Lesson 12 of the Fast.ai v3 Deep Learning Part 2 course.

It’s not too late to join the study group. Just follow these simple steps:

1. Head over to twimlai.com/meetup, and sign up for the programs you're interested in, including either of the Fast.ai study
Digging Into Self-Supervised Monocular Depth Estimation

https://arxiv.org/abs/1806.01260

https://github.com/nianticlabs/monodepth2

Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate t
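The minimum reprojection loss from point (i) can be sketched in numpy: compute a per-pixel photometric error against each warped source frame, then take the per-pixel minimum before averaging, so a pixel occluded in one source view is scored by a view where it is visible. This is a simplified sketch using only an L1 error; the paper combines L1 with an SSIM term, which is omitted here.

```python
import numpy as np

def photometric_error(target, warped):
    """Per-pixel L1 photometric error, averaged over color channels.

    target, warped: (H, W, 3) images; returns an (H, W) error map."""
    return np.abs(target - warped).mean(axis=-1)

def min_reprojection_loss(target, warped_sources):
    """Minimum reprojection loss in the style of Monodepth2.

    warped_sources: list of (H, W, 3) source frames warped into the
    target view. Taking the per-pixel minimum over sources means each
    pixel is judged by the source view that explains it best, which
    discounts occlusions."""
    errors = np.stack([photometric_error(target, w) for w in warped_sources])
    return errors.min(axis=0).mean()
```

With a conventional average over source views, an occluded pixel would contribute a large error no matter how good the depth estimate is; the minimum removes that penalty.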