🔗 Neural Quantum States — representing the wave function with a neural network
https://habr.com/ru/company/raiffeisenbank/blog/445516/
In this article we look at an unusual application of neural networks in general, and restricted Boltzmann machines in particular, to two hard problems of quantum mechanics: finding the ground-state energy and approximating the wave function of a many-body system.
Habr
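To make the idea concrete, here is a minimal sketch of the RBM wave-function ansatz the article discusses, assuming real-valued parameters for simplicity (practical implementations usually use complex weights); the sizes and initialization scale below are illustrative. The binary hidden units can be traced out analytically, giving a closed-form unnormalized amplitude for each spin configuration.

```python
import numpy as np

# RBM "neural quantum state" sketch: tracing out the hidden units gives
# log psi(s) = a·s + sum_j log(2 cosh(b_j + (s W)_j)).
rng = np.random.default_rng(0)
N, M = 6, 12                                 # visible spins, hidden units
a = 0.01 * rng.standard_normal(N)            # visible biases
b = 0.01 * rng.standard_normal(M)            # hidden biases
W = 0.01 * rng.standard_normal((N, M))       # visible-hidden couplings

def log_psi(s):
    """Log of the unnormalized amplitude psi(s) for spins s in {-1, +1}^N."""
    return a @ s + np.log(2.0 * np.cosh(b + s @ W)).sum()

s = rng.choice([-1.0, 1.0], size=N)          # one random spin configuration
print(log_psi(s))
```

The ground-state search then minimizes the variational energy ⟨ψ|H|ψ⟩/⟨ψ|ψ⟩ over the parameters (a, b, W), typically estimated by Monte Carlo sampling of spin configurations.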
🔗 Unifying Physics and Deep Learning with TossingBot
Posted by Andy Zeng, Student Researcher, Robotics at Google. Though considerable progress has been made in enabling robots to grasp objec...
Google AI Blog
🔗 Word Vectors and Lexical Semantics (Part 1)
The following are my personal notes based on the Deep NLP course by Oxford University held in 2017. The material is available at [1].
Towards Data Science
🔗 Training Deep Neural Networks on a GPU with PyTorch
Part 4 of “PyTorch: Zero to GANs”
Medium
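The core pattern the tutorial series covers is moving both the model and each batch of data to the GPU. A minimal sketch; the tiny model, shapes, and hyperparameters are placeholders, not the tutorial's code:

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784, device=device)           # fake input batch, already on device
y = torch.randint(0, 10, (64,), device=device)    # fake integer class labels

loss = loss_fn(model(x), y)                       # forward pass runs on the chosen device
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```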
🔗 How to correctly select a sample from a huge dataset in machine learning
Choosing a small, representative dataset from a large population can improve model training reliability
Medium
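One standard way to keep a small sample representative, in the spirit of the article's title, is stratified sampling. A sketch using scikit-learn; the data, class mix, and 1% sample size are made up for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in "huge" dataset with an imbalanced class distribution.
X = np.random.randn(100_000, 20)
y = np.random.choice([0, 1, 2], size=100_000, p=[0.7, 0.2, 0.1])

# stratify=y makes the 1% sample mirror the population's class proportions.
_, X_sample, _, y_sample = train_test_split(
    X, y, test_size=0.01, stratify=y, random_state=42)

print(X_sample.shape)                          # (1000, 20)
print(np.bincount(y_sample) / len(y_sample))   # shares close to [0.7, 0.2, 0.1]
```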
🔗 Monte Carlo Integration is Magic
How to compute an integral in 3 lines of code
Towards Data Science
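The trick the title alludes to is the basic Monte Carlo estimator: for U ~ Uniform(0, 1), E[f(U)] equals the integral of f over [0, 1], so a sample mean approximates it. A sketch (not necessarily the article's exact three lines):

```python
import numpy as np

# Estimate the integral of f(x) = x^2 over [0, 1]; the true value is 1/3.
# The error of the sample mean shrinks as O(1/sqrt(n)).
x = np.random.uniform(0.0, 1.0, size=1_000_000)
estimate = (x ** 2).mean()
print(estimate)   # ≈ 0.3333
```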
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
https://www.shortscience.org/paper?bibtexKey=journals/corr/1406.4729
🔗 Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition - ShortScience.org
Spatial Pyramid Pooling (SPP) is a technique which allows Convolutional Neural Networks (CNNs) to use input images of any size, not only $224\text{px} \times 224\text{px}$ as most architectures do. (However, there is a lower bound for the size of the input image.)
## Idea
* Convolutional layers operate on any size, but fully connected layers need fixed-size inputs.
* Solution: add a new SPP layer on top of the last convolutional layer, before the fully connected layer.
* Use an approach similar to bag of words (BoW), but maintain the spatial information. The BoW approach is used for text classification, where the order of the words is discarded and only the number of occurrences is kept.
* The SPP layer operates on each feature map independently.
* The output of the SPP layer is of dimension $k \cdot M$, where $k$ is the number of feature maps the SPP layer got as input and $M$ is the number of bins. Example: We could use spatial pyramid pooling with…
shortscience.org
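A hypothetical sketch of such a layer using PyTorch's adaptive pooling; the pyramid levels and tensor shapes are illustrative. Each level $n$ max-pools the feature maps into an $n \times n$ grid, so an input of any spatial size becomes a fixed-length vector of dimension $k \cdot M$ with $M = \sum n^2$:

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(features, levels=(1, 2, 4)):
    """Pool [B, k, H, W] features of any H, W into a fixed [B, k * M] vector,
    where M = sum(n * n for n in levels)."""
    batch, channels = features.shape[:2]
    pooled = [
        F.adaptive_max_pool2d(features, output_size=n).reshape(batch, channels, -1)
        for n in levels
    ]
    return torch.cat(pooled, dim=2).reshape(batch, -1)

x = torch.randn(2, 256, 13, 17)            # arbitrary spatial size
print(spatial_pyramid_pool(x).shape)       # torch.Size([2, 5376]) = [2, 256 * 21]
```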
🎥 Mikhail Tsvetkov: “An overview of Intel projects in machine learning and AI systems”
👁 6 views ⏳ 3342 sec.
The talk “An overview of Intel projects in machine learning and AI systems: new hardware platforms and software-level optimization” is given by Mikhail Tsvetkov, technical group lead at Intel in Russia. Mikhail graduated from the Physics Department of Voronezh University, where he also completed a postgraduate program in semiconductor device physics and microelectronics. He has worked in Intel's Intel Labs and Intel Architecture Group divisions since 2008. His specialization is the development of spatial data-processing structures on FPGAs.
VK
🎥 Machine learning on real-world cases
👁 1 view ⏳ 5198 sec.
Speaker: Andrey Latysh, machine learning and data analysis engineer at The Product Engine.
Program:
- the history of machine learning;
- the essence of machine learning;
- common algorithms, demonstrated and explained on practical business examples.
Hillel Computer School
site: https://ithillel.ua
tel.: +38 (097) 156-58-27
fb: https://www.facebook.com/hillel.it.school
in: https://www.instagram.com/hillel_itschool
tw: https://twitter.com/hillel_itschool
ln: https://www.linkedin.com/company/hillel_itsch…
VK
https://www.youtube.com/watch?v=Ejsr3S79gcQ/
🎥 Bayesian methods in machine learning. Lecture 1
👁 1 view ⏳ 5449 sec.
Lecturer: Professor Dmitry Petrovich Vetrov
YouTube
🎥 Google Cloud Next, Machine Learning, & more! (This Week in Cloud)
👁 1 view ⏳ 141 sec.
Here to bring you the latest news in the cloud is Google Cloud Developer Advocate Stephanie Wong.
Learn more about these announcements → https://bit.ly/2TqGuME
• Weekly updates (blog) → https://bit.ly/2TmTgvS
• Get Cloud Certified → https://bit.ly/2Tqmwlh
• TensorFlow Deep Learning VM Instances → https://bit.ly/2Toph6Q
• G Suite Updates → https://bit.ly/2TqHHUc
This Week in The Cloud is a new series where we serve you the lowest latency news → https://bit.ly/ThisWeek-inCloud
Tune in every week for a n…
VK
🎥 Stanford CS230: Deep Learning | Autumn 2018 | Lecture 3 - Full-Cycle Deep Learning Projects
👁 1 view ⏳ 4697 sec.
Andrew Ng, Adjunct Professor & Kian Katanforoosh, Lecturer - Stanford University
http://onlinehub.stanford.edu/
To follow along with the course schedule and syllabus, visit: http://cs230.stanford.edu/
To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html
To view all online courses and programs offered by Stanford, visit: http:…
VK
🎥 Stanford CS230: Deep Learning | Autumn 2018 | Lecture 2 - Deep Learning Intuition
👁 1 view ⏳ 4967 sec.
Andrew Ng, Adjunct Professor & Kian Katanforoosh, Lecturer - Stanford University
http://onlinehub.stanford.edu/
To follow along with the course schedule and syllabus, visit: http://cs230.stanford.edu/
To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html
To view all online courses and programs offered by Stanford, visit: http:…
VK
Our Telegram channel: tglink.me/ai_machinelearning_big_data
🎥 Evgeny Borisov: “Current and future super-solutions from Supermicro for AI and data-analysis systems”
👁 1 view ⏳ 4546 sec.
The products of Supermicro, a maker of server platforms, are a key link in the chain that “turns smart chips into finished functional devices.” They are used in large data centers, first and foremost in those oriented toward artificial intelligence systems (AI/ML/DL).
The practical hardware side of AI systems for specific tasks starts with the server platform. Supermicro platforms host “AI accelerators” of varying functionality: CPU, …
VK
🎥 Bayesian methods in machine learning. Lecture 6
👁 1 view ⏳ 5159 sec.
Lecturer: Professor Dmitry Petrovich Vetrov
VK
🎥 Bayesian methods in machine learning. Lecture 5
👁 1 view ⏳ 4723 sec.
Lecturer: Professor Dmitry Petrovich Vetrov
VK
🎥 Bayesian methods in machine learning. Lecture 4
👁 1 view ⏳ 5100 sec.
Lecturer: Professor Dmitry Petrovich Vetrov
VK
🎥 Bayesian methods in machine learning. Lecture 3
👁 1 view ⏳ 4232 sec.
Lecturer: Professor Dmitry Petrovich Vetrov
VK Video
🔗 Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks
For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms…
Medium
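The tabular case the tutorial starts from boils down to one update rule, Q(s, a) ← Q(s, a) + α · (r + γ · maxₐ′ Q(s′, a′) − Q(s, a)). A sketch on a made-up 5-state chain environment (the environment and hyperparameters are illustrative, not the tutorial's):

```python
import numpy as np

# Chain MDP: action 1 moves right, action 0 moves left (clamped at 0);
# entering the last state pays reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99
rng = np.random.default_rng(0)

for _ in range(2000):                          # episodes
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions)            # uniform random behavior policy
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Off-policy Q-learning update toward the greedy target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Greedy policy: go right in every non-terminal state (state 4 is terminal
# and never updated, so its row stays zero).
print(Q.argmax(axis=1))   # ≈ [1 1 1 1 0]
```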
https://arxiv.org/abs/1903.06048
🔗 MSG-GAN: Multi-Scale Gradient GAN for Stable Image Synthesis
While Generative Adversarial Networks (GANs) have seen huge successes in image synthesis tasks, they are notoriously difficult to use, in part due to instability during training. One commonly accepted reason for this instability is that gradients passing from the discriminator to the generator can quickly become uninformative, due to a learning imbalance during training. In this work, we propose the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this problem which allows the flow of gradients from the discriminator to the generator at multiple scales. This technique provides a stable approach for generating synchronized multi-scale images. We present a very intuitive implementation of the mathematical MSG-GAN framework which uses the concatenation operation in the discriminator computations. We empirically validate the effect of our MSG-GAN approach through experiments on the CIFAR10 and Oxford102 flowers datasets and compare it with other relevant techniques which perform multi-scale image synthesis. In addition, we also provide details of our experiment on CelebA-HQ dataset for synthesizing 1024 x 1024 high resolution images.
arXiv.org
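A hypothetical toy sketch of the multi-scale wiring the abstract describes, not the paper's architecture: the generator returns an RGB image at every intermediate resolution, and the discriminator concatenates each one with its activations at the matching scale, so every scale receives gradients from the discriminator directly. All layer sizes are illustrative and far smaller than the paper's.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, ch=32):
        super().__init__()
        self.stem = nn.Sequential(nn.ConvTranspose2d(z_dim, ch, 4), nn.ReLU())  # 1x1 -> 4x4
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())    # 4x4 -> 8x8
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())    # 8x8 -> 16x16
        self.to_rgb = nn.ModuleList([nn.Conv2d(ch, 3, 1) for _ in range(3)])

    def forward(self, z):
        h4 = self.stem(z[:, :, None, None])
        h8 = self.up1(h4)
        h16 = self.up2(h8)
        # One RGB output per resolution, finest first: [16x16, 8x8, 4x4].
        return [self.to_rgb[0](h16), self.to_rgb[1](h8), self.to_rgb[2](h4)]

class Discriminator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.from_rgb = nn.Conv2d(3, ch, 1)
        self.down1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                   nn.AvgPool2d(2))                             # 16x16 -> 8x8
        self.down2 = nn.Sequential(nn.Conv2d(ch + 3, ch, 3, padding=1), nn.ReLU(),
                                   nn.AvgPool2d(2))                             # 8x8 -> 4x4
        self.head = nn.Linear((ch + 3) * 4 * 4, 1)

    def forward(self, imgs):                             # imgs: [16x16, 8x8, 4x4]
        h = self.down1(self.from_rgb(imgs[0]))
        h = self.down2(torch.cat([h, imgs[1]], dim=1))   # inject the 8x8 output
        h = torch.cat([h, imgs[2]], dim=1)               # inject the 4x4 output
        return self.head(h.flatten(1))

G, D = Generator(), Discriminator()
score = D(G(torch.randn(2, 64)))
print(score.shape)   # torch.Size([2, 1])
```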