Our Telegram channel - tglink.me/ai_machinelearning_big_data
🔗 The Digital Survivors — Analytic Perspective of the Holocaust
https://towardsdatascience.com/the-digital-survivors-dc3311cb9602?source=collection_home---4------4---------------------
Towards Data Science
“Soon, there will be no survivors to tell their stories, and your children would hear about the Holocaust only by books and videos”
🔗 Neural Networks and Fibonacci Numbers
Towards Data Science
Does a Fibonacci-initialized ANN outperform our standard random-initialization approach? I don’t know! Let’s experiment and see…
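The article doesn’t spell out its initialization scheme here, so the following is only a hypothetical sketch of one way to run that experiment in PyTorch: fill one model’s weights with scaled Fibonacci numbers, leave a second model on the default random initialization, and train both on the same data. The helper `fibonacci_init_` and its scaling choices are assumptions made for illustration, not the article’s code.

```python
# Hypothetical sketch (not the article's exact scheme): Fibonacci vs. random init.
import torch
import torch.nn as nn

def fibonacci_init_(weight: torch.Tensor, k: int = 16, scale: float = 0.1) -> torch.Tensor:
    """Fill `weight` in place by tiling the first k Fibonacci numbers, rescaled to (0, scale]."""
    fib = [1.0, 1.0]
    for _ in range(k - 2):
        fib.append(fib[-1] + fib[-2])
    base = torch.tensor(fib) / max(fib)                 # normalize the largest value to 1
    n = weight.numel()
    vals = base.repeat(n // k + 1)[:n].reshape(weight.shape)
    with torch.no_grad():
        weight.copy_(vals * scale)
    return weight

model_fib = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model_rnd = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

for layer in model_fib:
    if isinstance(layer, nn.Linear):
        fibonacci_init_(layer.weight)
# model_rnd keeps PyTorch's default (Kaiming-uniform) initialization; train both with
# the same optimizer and data, and compare validation accuracy to settle the question.
```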
🔗 Representing music with Word2vec?
Towards Data Science
Machine learning algorithms have transformed the fields of vision and NLP. But what about music? These last few years, the field of music…
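A minimal sketch of the general idea, assuming the “words” are chord symbols and the “sentences” are short progressions (the article’s actual corpus and preprocessing are not shown here). It uses gensim’s Word2Vec; the toy progressions are made up for illustration.

```python
# Minimal sketch: learn chord embeddings with skip-gram Word2Vec.
from gensim.models import Word2Vec

progressions = [
    ["C", "Am", "F", "G"],      # toy chord sequences; a real corpus would come
    ["C", "G", "Am", "F"],      # from MIDI or another symbolic-music dataset
    ["Dm", "G", "C", "C"],
    ["F", "G", "Em", "Am"],
]

model = Word2Vec(
    sentences=progressions,
    vector_size=32,   # embedding dimensionality
    window=2,         # context size: neighbouring chords
    min_count=1,
    sg=1,             # skip-gram
    epochs=200,
)

# Chords that occur in similar harmonic contexts end up close in the embedding space.
print(model.wv.most_similar("G", topn=3))
```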
🎥 TWiML x Fast ai v3 Deep Learning Part 2 Study Group - Lesson 12 - Spring 2019 1080p
👁 1 view ⏳ 5738 sec.
This video is a recap of our TWiML Online Fast.ai Deep Learning Part 2 Study Group.
In this session, we had a mini presentation on Deep Representation Learning for Trigger Monitoring and a review of Lesson 12 of the Fast.ai v3 Deep Learning Part 2 course.
It’s not too late to join the study group. Just follow these simple steps:
1. Head over to twimlai.com/meetup, and sign up for the programs you're interested in, including either of the Fast.ai study groups or our Monthly Meetup groups.
🔗 Digging Into Self-Supervised Monocular Depth Estimation
https://arxiv.org/abs/1806.01260
https://github.com/nianticlabs/monodepth2
arXiv.org
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate…
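The official implementation is in the linked monodepth2 repository; the sketch below is only an illustrative PyTorch reading of the per-pixel minimum reprojection loss and auto-masking described in the abstract, with a plain L1 photometric error standing in for the paper’s SSIM+L1 mix. The function names and shapes are assumptions for this example.

```python
# Illustrative sketch of the "minimum reprojection" idea (not the official monodepth2 code):
# for each pixel, take the minimum photometric error over the reprojected source frames
# instead of averaging, so occluded pixels are scored against the frame where they are visible.
import torch

def photometric_error(pred, target):
    """Simple per-pixel L1 error, shape (B, 1, H, W); the paper also mixes in SSIM."""
    return (pred - target).abs().mean(dim=1, keepdim=True)

def min_reprojection_loss(reprojected_frames, target, source_frames):
    # reprojected_frames: source images warped into the target view using predicted depth/pose
    # source_frames: the same images, unwarped (used for auto-masking)
    reproj = torch.stack([photometric_error(r, target) for r in reprojected_frames], dim=0)
    per_pixel_min, _ = reproj.min(dim=0)                     # min over source frames

    # Auto-masking: drop pixels where the unwarped source already matches the target
    # better than any warped one (static scenes or objects moving with the camera).
    identity = torch.stack([photometric_error(s, target) for s in source_frames], dim=0)
    identity_min, _ = identity.min(dim=0)
    mask = (per_pixel_min < identity_min).float()

    return (per_pixel_min * mask).sum() / mask.sum().clamp(min=1.0)
```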
🎥 TWiML x Fast ai v3 Deep Learning Part 2 Study Group - Lesson 13 - Spring 2019 1080p
👁 1 view ⏳ 5669 sec.
This video is a recap of our TWiML Online Fast.ai Deep Learning Part 2 Study Group.
In this session, we review Lesson 13, on Clone the Fast.ai Repo, of the Fast.ai v3 Deep Learning Part 2 course.
It’s not too late to join the study group. Just follow these simple steps:
1. Head over to twimlai.com/meetup, and sign up for the programs you're interested in, including either of the Fast.ai study groups or our Monthly Meetup groups.
2. Use the email…
🎥 Ivan Yamshchikov - How to Talk to an Algorithm
👁 12 views ⏳ 4034 sec.
What is machine learning? Where is it used? What tasks and problems does machine learning face in processing language, text, and speech? Why is language hard? How do machines speak? What is the history of machine learning?
Presented by Ivan Yamshchikov, PhD, research fellow at the Max Planck Institute in Leipzig and AI evangelist at ABBYY.
Our podcast channel:
http://nauka-pro.ru/podcasting
We thank the educational project "МатЧасть" for helping to organize the filming.
Friends, if you would like to support…
🎥 Building Autonomous Systems with Machine Teaching - THR3006
👁 1 view ⏳ 1205 sec.
Building autonomous systems with traditional Machine Learning techniques is difficult. Machine Teaching is a new approach to building intelligence using deep reinforcement learning. Come to this session to learn how to use machine teaching to apply expert knowledge to create deep reinforcement learning models that control industrial systems like bulldozers, oil drills, and more.
🎥 2019 - Guy Royse - Deep Learning like a Viking: Building Convolutional Neural Networks with Keras
👁 1 view ⏳ 3298 sec.
The Vikings came from the land of ice and snow, from the midnight sun, where the hot springs flow. In addition to longships and bad attitudes, they had a system of writing that we, in modern times, have dubbed the Younger Futhark (or ᚠᚢᚦᚬᚱᚴ if you're a Viking). These sigils are more commonly called runes and have been mimicked in fantasy literature and role-playing games for decades. Of course, having an alphabet, runic or otherwise, solves lots of problems. But, it also introduces others. The Vikings had…
🔗 ISSCC2019: Intelligence on Silicon: From Deep Neural Network Accelerators to Brain-Mimicking AI-SoCs
YouTube
Hoi-Jun Yoo, KAIST, Daejeon, Korea
Deep learning is influencing not only the technology itself but also our everyday lives. Formerly, most AI functionalities and applications were centralized on datacenters. However, the primary platform for AI has recently shifted to mobile devices. With the increasing demand for mobile AI, conventional hardware solutions face an ordeal because of their low energy efficiency on such power-hungry applications. For the past few years, dedicated DNN accelerators…
🔗 ISSCC 2019: Deep Learning Hardware: Past, Present, and Future - Yann LeCun
YouTube
Yann LeCun, Facebook AI Research & New York University, New York, NY
Deep learning has caused revolutions in computer understanding of images, audio, and text, enabling new applications such as information search and filtering, autonomous driving, radiology screening, real-time language translation, and virtual assistants. But almost all these successes largely use supervised learning, which requires human-annotated data, or reinforcement learning, which requires too many trials to be practical in most…
🎥 Swift for TensorFlow (Google I/O'19)
👁 1 view ⏳ 1705 sec.
Swift for TensorFlow is a platform for the next generation of machine learning that leverages innovations like first-class differentiable programming to seamlessly integrate deep neural networks with traditional software development. In this session, learn how Swift for TensorFlow can make advanced machine learning research easier and why Jeremy Howard’s fast.ai has chosen it for the latest iteration of their deep learning course.
Watch more #io19 here: Machine Learning at Google I/O 2019 Playlist → …
🎥 Exploring the Deep Learning Framework PyTorch - Stephanie Kim
👁 1 view ⏳ 2159 sec.
STEPHANIE KIM | SOFTWARE ENGINEER AT ALGORITHMIA
Users rapidly adopted PyTorch 1.0 for many reasons. PyTorch is intuitive to learn, and its modularity enhances debugging and visibility. Additionally, unlike other frameworks such as Tensorflow, PyTorch supports dynamic computation graphs that allow network behavior changes on the fly. This talk showcases PyTorch benefits like TorchScript, which allows models to be exported in non-Python environments. We’ll also discuss pre-release serialization and performaVk
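As a quick illustration of the TorchScript export mentioned above, here is a minimal sketch (not from the talk itself): trace or script a toy module and save it so it can be loaded outside Python, e.g. via the C++ libtorch API. `TinyNet` and the file name are made up for the example.

```python
# Minimal TorchScript export sketch: trace/script a toy model and save it for non-Python runtimes.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()

# Option 1: tracing records the ops executed on an example input.
traced = torch.jit.trace(model, torch.randn(1, 4))

# Option 2: scripting compiles the Python code itself (handles data-dependent control flow).
scripted = torch.jit.script(model)

traced.save("tiny_net.pt")          # load later with torch.jit.load(...) or from C++
print(traced(torch.randn(1, 4)))
```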
🔗 Review: SENet — Squeeze-and-Excitation Network, Winner of ILSVRC 2017 (Image Classification)
Towards Data Science
With SE Blocks, Surpasses ResNet, Inception-v4, PolyNet, ResNeXt, MobileNetV1, DenseNet, PyramidNet, DPN, ShuffleNet V1
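For reference, a short PyTorch sketch of a Squeeze-and-Excitation block along the lines the review describes: global average pooling squeezes each channel to a scalar, a small bottleneck MLP produces per-channel gates, and the feature map is rescaled. The reduction ratio of 16 is the common default; details may differ from the review.

```python
# Sketch of a standard Squeeze-and-Excitation block (SENet): squeeze, excite, rescale.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # excitation bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)
        w = self.fc(s).view(b, c, 1, 1)
        return x * w                                     # reweight channels

# The block is dropped into an existing architecture after a convolution:
feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```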
🎥 Train Custom Machine Learning Models with No Data Science Expertise (Google I/O'19)
👁 1 view ⏳ 1315 sec.
Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their needs, by leveraging Google’s state-of-the-art neural architecture search technology. Learn the power and ease-of-use of Cloud AutoML Tables, Video Intelligence, and Natural Language, and take a look at how Cloud AutoML would fare if it were to participate in data science competitions.
Watch more #io19 here:
GCP at Google I/O 2019 Playlist → …
🎥 Machine Learning is the New Chicken Sexer - This Week in Google 507
👁 1 view ⏳ 7351 sec.
Google I/O Highlights
This Week's Stories
-- Google I/O Highlights
-- Pixel 3A Unboxing
-- Nest Hub Max > Google Home Hub
-- AR Walking Directions in Google Maps
-- Google Leans into Helpfulness vs Privacy
-- Project Euphonia: Speech-to-text for People who are Hard to Understand
-- Trouble in Chromebook Land
-- AR Coming to Google Search
-- Next-Generation Google Assistant
-- Changes to Android Auto
-- Assistant Speeds Up
-- Google Maps Gets Incognito Mode
-- Android Q Preview
-- Protest Plan…
🎥 ML Kit: Machine Learning for Mobile with Firebase (Google I/O'19)
👁 1 view ⏳ 2284 sec.
ML Kit allows you to harness the power of machine learning in your iOS and Android apps without needing to be an expert in it. Leverage powerful, but simple-to-use on-device and cloud-based APIs for Vision and Natural Language Processing, or train and/or deploy your own models. Understand some big additions to ML Kit and how to use these to enable smarter, richer experiences to your users.
Watch more #io19 here:
Firebase at Google I/O 2019 Playlist → https://goo.gle/2GSFVqN
Google I/O 2019 All Sessions Playlist…
🔗 My First Usage of Natural Language Processing (NLP) in Industry
Towards Data Science
People hear a lot about how fantastic NLP is, but for many it can often be hard to see how it can be applied in a commercial setting. With…