Neural Networks | Нейронные сети
11.6K subscribers
801 photos
182 videos
170 files
9.45K links
All about machine learning

For all inquiries: @notxxx1

5 New Generative Adversarial Network (GAN) Architectures For Image Synthesis

🔗 5 New Generative Adversarial Network (GAN) Architectures For Image Synthesis
AI image synthesis has made impressive progress since Generative Adversarial Networks (GANs) were introduced in 2014. GANs were originally only capable of generating small, blurry, black-and-white pictures, but now we can generate high-resolution, realistic and colorful pictures that you can hardly distinguish from real photographs. Here we have summarized for you 5 recently introduced GAN architectures …
Neural networks for forex. Training a neural network to predict currency movement on forex. Exс

🔗 Neural networks for forex. Training a neural network to predict currency movement on forex. Exс
Built a neural network in Excel: 10 inputs, one hidden layer with 6 neurons, 2 output neurons. For anyone interested: http://forex-bonus.online/archives/7?unapproved=2&moderation-hash=b708ae4d52c54a6357638024ec964263#comment-2 It predicts the direction of the EUR/USD price movement on the next daily bar after the US interest-rate news release. It is easy to adapt to other tasks.
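The stated architecture can be sketched outside Excel as well; below is a minimal NumPy forward pass with the post's 10-6-2 layout. The weights are random placeholders (not the author's trained model), and the sigmoid activations are an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights and biases for a 10-6-2 network
# (placeholders, not trained values).
W1 = rng.normal(size=(10, 6))
b1 = np.zeros(6)
W2 = rng.normal(size=(6, 2))
b2 = np.zeros(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Forward pass: 10 input features in, 2 direction scores out (up/down)."""
    h = sigmoid(x @ W1 + b1)      # hidden layer, 6 neurons
    return sigmoid(h @ W2 + b2)   # output layer, 2 neurons

x = rng.normal(size=10)           # stand-in for features around a news bar
print(forward(x).shape)           # (2,)
```

Training such a net (e.g. with backpropagation) is a separate step; this only shows the shape of the model described in the post.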
🎥 Ask a Machine Learning Engineer Anything (live) | March 2019
👁 1 view, 3607 sec.
Ask a machine learning engineer anything!

Every month or so I host a livestream session on my channel where I answer your questions live on stream. Don't worry if your question doesn't get answered, message me anytime and I'll do my best to get back to you.

Thanks for stopping by :)

CONNECT:
Web - http://bit.ly/mrdbourkeweb
Quora - http://bit.ly/mrdbourkequora
Medium - http://bit.ly/mrdbourkemedium
Twitter - http://bit.ly/mrdbourketwitter
LinkedIn - http://bit.ly/mrdbourkelinkedin
Email updates: http://b
🎥 Overview of differential equations | Chapter 1
👁 1 view, 1636 sec.
How do you study what cannot be solved?
Home page: https://3blue1brown.com/
Special thanks to these supporters: http://3b1b.co/de1thanks

Steven Strogatz NYT article on the math of love:
https://opinionator.blogs.nytimes.com/2009/05/26/guest-column-loves-me-loves-me-not-do-the-math/

If you're looking for books on this topic, I'd recommend the one by Vladimir Arnold, "Ordinary Differential Equations"

Also, for more Strogatz fun, you may enjoy his text "Nonlinear Dynamics and Chaos"
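The video's central question (how to study equations with no closed-form solution) is usually answered numerically. A minimal forward-Euler sketch of the damped pendulum equation the video discusses, with illustrative parameter values of my own choosing:

```python
import math

# Damped pendulum: theta'' = -mu*theta' - (g/L)*sin(theta).
# Parameter values below are illustrative, not from the video.
g, L, mu = 9.8, 2.0, 0.1
dt = 0.001  # time step; smaller = more accurate, slower

def simulate(theta0, omega0, t_end):
    """Integrate the pendulum ODE with forward Euler steps."""
    theta, omega = theta0, omega0
    for _ in range(int(t_end / dt)):
        accel = -mu * omega - (g / L) * math.sin(theta)
        theta += omega * dt   # advance angle by current velocity
        omega += accel * dt   # advance velocity by current acceleration
    return theta, omega

# Release from 60 degrees at rest; damping shrinks the swing over time.
theta, omega = simulate(theta0=math.pi / 3, omega0=0.0, t_end=10.0)
print(theta, omega)
```

Forward Euler is the simplest possible scheme; in practice one would reach for a higher-order method (e.g. Runge-Kutta), but the idea of replacing an unsolvable equation with tiny explicit steps is the same.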

------------

If you want t
Developing information theory as an open-source project

a very useful way of describing how information is formed and transformed has been found,
a theoretical basis for this method has been worked out,
a publication in purely theoretical form (without accompanying explanations and examples) would be accessible only to working scientists,
producing examples is a very large amount of work,
there is little time for this topic, one pair of hands is nowhere near enough, and the only hardware available so far is a smartphone,
and yet the method is very elegant.
https://habr.com/ru/post/446066/

🔗 How to publish an information theory in today's IT world
There is a problem: a very useful way of describing how information is formed and transformed has been found, and a theoretical basis for this method has been worked out; a publicati...
🎥 NeuroSAT: An AI That Learned Solving Logic Problems
👁 3 views, 300 sec.
❤️ This video has been kindly supported by my friends at Arm Research. Check them out here! - http://bit.ly/2TqOWAu

📝 The paper "Learning a SAT Solver from Single-Bit Supervision" is available here:
https://arxiv.org/abs/1802.03685

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Hadd
Machine learning from Google Developers for beginners

1. Hello World
2. Visualizing a Decision Tree
3. What Makes a Good Feature?
4. Let’s Write a Pipeline
5. Writing Our First Classifier
6. Train an Image Classifier with TensorFlow for Poets
7. Classifying Handwritten Digits with TF.Learn
8. Let’s Write a Decision Tree Classifier from Scratch
9. Intro to Feature Engineering with TensorFlow

🎥 Hello World - Machine Learning Recipes #1
👁 5886 views, 413 sec.
Six lines of Python is all it takes to write your first machine learning program! In this episode, we'll briefly introduce what machine learning is...
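The "six lines" the description refers to are, in spirit, a scikit-learn hello world. A sketch with an illustrative toy fruit dataset (the feature values and labels below are made up here, not taken from the episode):

```python
from sklearn import tree

# Toy dataset: [weight in grams, texture] where texture 1 = smooth,
# 0 = bumpy; labels 0 = apple, 1 = orange (illustrative values).
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)     # learn a rule from the examples
print(clf.predict([[160, 0]]))      # -> [1]  (heavy and bumpy: orange)
```

The point of the episode is exactly this workflow: collect examples, fit a classifier, predict on new data.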

🎥 Visualizing a Decision Tree - Machine Learning Recipes #2
👁 1023 views, 402 sec.
Last episode, we treated our Decision Tree as a blackbox. In this episode, we'll build one on a real dataset, add code to visualize it, and practic...

🎥 What Makes a Good Feature? - Machine Learning Recipes #3
👁 561 views, 341 sec.
Good features are informative, independent, and simple. In this episode, we'll introduce these concepts by using a histogram to visualize a feature...

🎥 Let’s Write a Pipeline - Machine Learning Recipes #4
👁 502 views, 474 sec.
In this episode, we’ll write a basic pipeline for supervised learning with just 12 lines of code. Along the way, we'll talk about training and te...

🎥 Writing Our First Classifier - Machine Learning Recipes #5
👁 379 views, 524 sec.
Welcome back! It's time to write our first classifier. This is a milestone if you’re new to machine learning. We'll start with our code from epis...
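In the spirit of this episode, a classifier can be written from scratch in a few dozen lines. A dependency-free 1-nearest-neighbor sketch (the toy data is illustrative, not from the video):

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ScrappyKNN:
    """Minimal 1-nearest-neighbor classifier."""

    def fit(self, X_train, y_train):
        # "Training" is just memorizing the examples.
        self.X_train = X_train
        self.y_train = y_train

    def predict(self, X_test):
        preds = []
        for row in X_test:
            # Label of the single closest training example.
            i = min(range(len(self.X_train)),
                    key=lambda j: distance(row, self.X_train[j]))
            preds.append(self.y_train[i])
        return preds

clf = ScrappyKNN()
clf.fit([[140, 1], [130, 1], [150, 0], [170, 0]], [0, 0, 1, 1])
print(clf.predict([[160, 0]]))   # -> [1]
```

Swapping this in for a library classifier demystifies what `fit` and `predict` actually do.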

🎥 Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6
👁 449 views, 427 sec.
Monet or Picasso? In this episode, we’ll train our own image classifier, using TensorFlow for Poets. Along the way, I’ll introduce Deep Learnin...

🎥 Let’s Write a Decision Tree Classifier from Scratch - Machine Learning Recipes #8
👁 85 views, 593 sec.
Hey everyone! Glad to be back! Decision Tree classifiers are intuitive, interpretable, and one of my favorite supervised learning algorithms. In th...

🎥 Intro to Feature Engineering with TensorFlow - Machine Learning Recipes #9
👁 84 views, 458 sec.
Hey everyone! Here’s an intro to techniques you can use to represent your features - including Bucketing, Crossing, Hashing, and Embedding - and ...
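Two of the techniques named, bucketing and hashing, can be sketched in plain Python. The bucket boundaries and table size below are arbitrary illustrative choices, not values from the video:

```python
import hashlib

def bucketize(value, boundaries):
    """Bucketing: map a numeric value to the index of its bin."""
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

def hash_feature(category, num_buckets=16):
    """Hashing: map a categorical string into a fixed number of slots.
    A stable hash (md5 here) keeps the mapping reproducible across runs."""
    h = int(hashlib.md5(category.encode()).hexdigest(), 16)
    return h % num_buckets

print(bucketize(37, [18, 35, 65]))   # age 37 falls in bucket 2 (35-65)
print(hash_feature("NY"))            # some slot in [0, 16)
```

Crossing and embedding build on these: a feature cross combines bucketized values before hashing, and an embedding replaces the hashed index with a learned dense vector.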

🎥 Getting Started with Weka - Machine Learning Recipes #10
👁 88 views, 564 sec.
Hey everyone! In this video, I’ll walk you through using Weka - The very first machine learning library I’ve ever tried. What’s great is that Weka ...
🎥 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 4 - Model Free Control
👁 1 view, 4666 sec.
Professor Emma Brunskill, Stanford University
http://onlinehub.stanford.edu/

Professor Emma Brunskill
Assistant Professor, Computer Science
Stanford AI for Human Impact Lab
Stanford Artificial Intelligence Lab
Statistical Machine Learning Group

To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs234/index.html

To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html

To view a
🎥 Why Deep Learning Works: Implicit Self-Regularization in DNNs, Michael W. Mahoney 20190225
👁 1 view, 4737 sec.
Michael W. Mahoney, UC Berkeley

Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production-quality pre-trained models and smaller models trained from scratch. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of self-regularization, implicitly sculpting a more regularized energy or penalty landscape. In particular, the empirical spectral density (ESD) of DNN layer matrices
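The quantity at the center of the talk, the empirical spectral density of a layer's weight correlation matrix, is straightforward to compute. A minimal NumPy sketch, with a random placeholder matrix standing in for a trained DNN layer:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 500                  # layer dimensions (placeholder choice)
W = rng.normal(size=(N, M)) / np.sqrt(N)   # stand-in for a weight matrix

# Correlation matrix X = W^T W; its eigenvalue histogram is the ESD.
X = W.T @ W
eigs = np.linalg.eigvalsh(X)      # eigvalsh: X is symmetric by construction

# For i.i.d. Gaussian W this spectrum follows the Marchenko-Pastur law;
# the talk's point is that trained layers systematically deviate from it
# (e.g. heavy tails), which is read as implicit self-regularization.
print(eigs.min(), eigs.max())
```

Running the same two lines on an actual trained layer (e.g. a weight matrix pulled from a pre-trained model) and histogramming `eigs` reproduces the kind of measurement the talk analyzes.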