Neural Networks | Нейронные сети
Everything about machine learning

For any questions: @notxxx1

Artificial general intelligence: requirements, current state, and prospects
These days the words "artificial intelligence" are applied to a great many different systems, from a neural network that recognizes images to a bot that plays Quake. Wikipedia gives a remarkable definition of AI: "the ability of intelligent systems to perform creative functions that are traditionally considered the prerogative of humans." In other words, the definition makes it clear that once a function has been successfully automated, it stops being considered artificial intelligence.
Nevertheless, when the task of "creating artificial intelligence" was first posed, AI meant something different. Today that goal is called "strong AI" or "artificial general intelligence."

🔗 Artificial general intelligence: requirements, current state, and prospects
A list of useful books on data analysis, mathematics, data science, and machine learning
Hi, Habr!

I've written a post that belongs straight in your bookmarks: a list of the most useful books on data analysis, mathematics, data science, and machine learning. They will be useful to beginners and professionals alike. For convenience you can read it here or use the handy Google Doc, where the books are laid out in columns by category. Use it to level up your own skills, and share it with colleagues.

Of course, the list is far from complete, so post links to great books in the comments; I'll add the best of them to the list.

Books on data analysis, mathematics, data science, and machine learning:

🔗 A list of useful books on data analysis, mathematics, data science, and machine learning
Good resources for machine learning mathematics, with lecture notes on topics such as probability, statistics, algebra, number theory, and geometry. All of the math for ML in one place.
https://github.com/Niraj-Lunavat/Maths-for-Artificial-Intelligence

🔗 Niraj-Lunavat/Maths-for-Artificial-Intelligence
Master mathematics for machine learning, Artificial Intelligence. A curated list of awesome mathematics resources. - Niraj-Lunavat/Maths-for-Artificial-Intelligence
Making the Invisible Visible: Action Recognition Through Walls and Occlusions
https://arxiv.org/abs/1909.09300

🔗 Making the Invisible Visible: Action Recognition Through Walls and Occlusions
Understanding people's actions and interactions typically depends on seeing them. Automating the process of action recognition from visual data has been the topic of much research in the computer vision community. But what if it is too dark, or if the person is occluded or behind a wall? In this paper, we introduce a neural network model that can detect human actions through walls and occlusions, and in poor lighting conditions. Our model takes radio frequency (RF) signals as input, generates 3D human skeletons as an intermediate representation, and recognizes actions and interactions of multiple people over time. By translating the input to an intermediate skeleton-based representation, our model can learn from both vision-based and RF-based datasets, and allow the two tasks to help each other. We show that our model achieves comparable accuracy to vision-based action recognition systems in visible scenarios, yet continues to work accurately when people are not visible, hence addressing scenarios that are beyond the limit of today's vision-based action recognition.
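The abstract describes a two-stage design: RF signals are first translated into 3D human skeletons, and a second network recognizes actions from the skeleton sequences. Below is a minimal sketch of that pipeline; the module names, tensor shapes, joint count, and layer sizes are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the two-stage pipeline from the abstract:
# RF frames -> per-frame 3D keypoints -> action logits.
import torch
import torch.nn as nn

class RFToSkeleton(nn.Module):
    """Maps a window of RF frames to per-frame 3D joints (hypothetical shapes)."""
    def __init__(self, rf_channels=2, num_joints=14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(rf_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),    # keep the time axis, pool space away
        )
        self.head = nn.Linear(32, num_joints * 3)  # 3D coordinates per joint

    def forward(self, rf):                                      # rf: (batch, channels, time, H, W)
        feats = self.encoder(rf).squeeze(-1).squeeze(-1)        # (batch, 32, time)
        return self.head(feats.transpose(1, 2))                 # (batch, time, num_joints * 3)

class SkeletonActionClassifier(nn.Module):
    """Classifies an action from a skeleton sequence, whether it came from RF or from video."""
    def __init__(self, num_joints=14, num_actions=10):
        super().__init__()
        self.rnn = nn.GRU(num_joints * 3, 64, batch_first=True)
        self.fc = nn.Linear(64, num_actions)

    def forward(self, skeletons):                  # (batch, time, num_joints * 3)
        _, h = self.rnn(skeletons)
        return self.fc(h[-1])                      # (batch, num_actions)

rf = torch.randn(1, 2, 30, 16, 16)                 # toy input: 30 frames of 16x16 RF heatmaps
logits = SkeletonActionClassifier()(RFToSkeleton()(rf))
print(logits.shape)                                # torch.Size([1, 10])
```

The skeleton bottleneck is what lets vision-based and RF-based data train the same action classifier, as the abstract notes.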
Regina Barzilay: Deep Learning for Cancer Diagnosis and Treatment | Artificial Intelligence Podcast

🔗 Regina Barzilay: Deep Learning for Cancer Diagnosis and Treatment | Artificial Intelligence Podcast
Regina Barzilay is a professor at MIT and a world-class researcher in natural language processing and applications of deep learning to chemistry and oncology, or the use of deep learning for early diagnosis, prevention and treatment of cancer. She has also been recognized for her teaching of several successful AI-related courses at MIT, including the popular Introduction to Machine Learning course. This conversation is part of the Artificial Intelligence podcast. INFO: Podcast website: https://lexfridman.c
Continuous Probability Distributions for Machine Learning

https://machinelearningmastery.com/continuous-probability-distributions-for-machine-learning/

Our Telegram channel: tglink.me/ai_machinelearning_big_data

🔗 Continuous Probability Distributions for Machine Learning
The probability for a continuous random variable can be summarized with a continuous probability distribution. Continuous probability distributions are encountered in machine learning, most notably in the distribution of numerical input and output variables for models and in the distribution of errors made by models. Knowledge of the normal continuous probability distribution is also required …
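As a quick companion to the article, here is a minimal sketch of querying a continuous distribution's PDF, CDF, percentiles, and samples with SciPy; the Gaussian parameters are arbitrary illustration, not values from the post.

```python
# Minimal sketch: density, cumulative probability, percentile and sampling
# for a continuous distribution (a Gaussian with made-up parameters).
from scipy.stats import norm

mu, sigma = 50.0, 5.0
dist = norm(loc=mu, scale=sigma)

print(dist.pdf(50.0))    # density at the mean, about 0.0798 for sigma = 5
print(dist.cdf(55.0))    # P(X <= mu + 1 * sigma), about 0.841
print(dist.ppf(0.975))   # 97.5th percentile, about 59.8 (mu + 1.96 * sigma)
errors = dist.rvs(size=1000, random_state=0)   # e.g. simulate normally distributed model errors
```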
SANVis: Visual Analytics for Understanding Self-Attention Networks

Attention networks, a deep neural network architecture inspired by humans' attention mechanism, have seen significant success in image captioning, machine translation, and many other applications. Recently, they have been further evolved into an advanced approach called multi-head self-attention networks, which can encode a set of input vectors, e.g., word vectors in a sentence, into another set of vectors.

https://arxiv.org/abs/1909.09595

🔗 SANVis: Visual Analytics for Understanding Self-Attention Networks
Attention networks, a deep neural network architecture inspired by humans' attention mechanism, have seen significant success in image captioning, machine translation, and many other applications. Recently, they have been further evolved into an advanced approach called multi-head self-attention networks, which can encode a set of input vectors, e.g., word vectors in a sentence, into another set of vectors. Such encoding aims at simultaneously capturing diverse syntactic and semantic features within a set, each of which corresponds to a particular attention head, forming altogether multi-head attention. Meanwhile, the increased model complexity prevents users from easily understanding and manipulating the inner workings of models. To tackle the challenges, we present a visual analytics system called SANVis, which helps users understand the behaviors and the characteristics of multi-head self-attention networks. Using a state-of-the-art self-attention model called Transformer, we demonstrate usage scenario
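As a small illustration of what SANVis visualizes, the sketch below runs multi-head self-attention over a toy set of token vectors: the set is encoded into another set of the same size, and each head produces its own attention matrix. It assumes a recent PyTorch (where the average_attn_weights flag exists); all sizes are arbitrary.

```python
# Multi-head self-attention on a toy "sentence": query = key = value.
# Returning per-head weights exposes the head-wise patterns a tool like SANVis plots.
import torch
import torch.nn as nn

embed_dim, num_heads, seq_len = 16, 4, 6
x = torch.randn(1, seq_len, embed_dim)            # one sentence of 6 token vectors

mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
out, weights = mha(x, x, x,
                   need_weights=True,
                   average_attn_weights=False)    # keep one weight matrix per head

print(out.shape)       # torch.Size([1, 6, 16])   -> same-sized set of output vectors
print(weights.shape)   # torch.Size([1, 4, 6, 6]) -> per-head attention over the 6 tokens
```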
Corporate IT Technologists Can Make Good Data Analysts

🔗 Corporate IT Technologists Can Make Good Data Analysts
There’s a myth in the Data Science community about that full-stack Data Scientist who is going to swoop in and magically transform your data science ventures into a profitable one overnight. Don’t…
Adding Interpretability to Multiclass Text Classification models

🔗 Adding Interpretability to Multiclass Text Classification models
It is one of the basic tenets of learning for me: I try to distill any concept into a more palatable form. As Feynman said… So, when I saw the ELI5 library that aims to interpret machine learning…
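The post itself uses the ELI5 library; as a rough sketch of the idea it automates, the snippet below fits a linear multiclass text classifier on TF-IDF features and lists the top-weighted words per class. The tiny corpus, labels, and top-3 cutoff are made up, a recent scikit-learn is assumed, and this is not the ELI5 API itself.

```python
# Weight-based interpretability for a multiclass text model, done by hand:
# one coefficient row per class shows which words push predictions toward it.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "the goalkeeper saved the penalty",
    "stocks rallied after the earnings report",
    "the striker scored a late goal",
    "the central bank raised interest rates",
    "a new gpu speeds up neural network training",
    "the laptop ships with a faster processor",
]
labels = ["sport", "finance", "sport", "finance", "tech", "tech"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)   # 3 classes -> one weight row per class

words = np.array(vec.get_feature_names_out())
for row, class_name in zip(clf.coef_, clf.classes_):
    top = np.argsort(row)[-3:][::-1]                     # highest-weight features for this class
    print(class_name, list(words[top]))
```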
🎥 Jon Nordby - Audio Classification with Machine Learning
👁 1 view, 2607 sec.
Audio Classification with Machine Learning
EuroPython 2019 - Talk - 2019-07-11 - Singapore [PyData track] - Basel, CH

By Jon Nordby

Sound is a rich source of information about the world around us.
Modern deep learning approaches can give human-like performance on a range of sound classification tasks.
This makes it possible to build systems that use sound to, for example,
understand speech, analyze music, assist in medical diagnostics, detect quality problems in manufacturing, and study the behav…
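The talk covers the common recipe of turning audio into a log-mel spectrogram and classifying it with a neural network. Below is a minimal sketch of that recipe with a synthetic one-second tone standing in for a real recording; the mel settings, layer sizes, and four-class output are illustrative assumptions, not anything from the talk.

```python
# Waveform -> log-mel spectrogram -> small CNN -> class logits.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)     # synthetic stand-in for real audio

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)   # (64, frames) power spectrogram
logmel = librosa.power_to_db(mel, ref=np.max)                 # log scale, the usual CNN input

x = torch.from_numpy(logmel).float()[None, None]              # (batch=1, channel=1, 64, frames)
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),                                          # e.g. four sound classes
)
print(cnn(x).shape)                                           # torch.Size([1, 4])
```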