Neural Networks | Нейронные сети
Everything about machine learning

For all questions: @notxxx1

How to Perform Feature Selection for Regression Data - Machine Learning Mastery

🔗 How to Perform Feature Selection for Regression Data - Machine Learning Mastery
Feature selection is the process of identifying and selecting a subset of input variables that are most relevant to the target variable. Perhaps the simplest case of feature selection is the case where there are numerical input variables and a numerical target for regression predictive modeling. This is because the strength of the relationship between each input variable and the target
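The simple numerical-input, numerical-target case the article describes can be sketched with scikit-learn's `SelectKBest` and the correlation-based `f_regression` score (one of the statistics the article covers; the synthetic dataset here is just an illustration):

```python
# Correlation-based feature selection for regression, sketched with scikit-learn.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression

# Synthetic data: 10 numerical inputs, only 3 of which are informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Keep the 3 inputs with the strongest linear relationship to the target.
selector = SelectKBest(score_func=f_regression, k=3)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)        # reduced from (200, 10) to (200, 3)
print(selector.get_support())  # boolean mask over the 10 input columns
```

`mutual_info_regression` can be swapped in for `f_regression` when the relationships are nonlinear.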
A Scalable and Cloud-Native Hyperparameter Tuning System

Katib is a Kubernetes-based system for Hyperparameter Tuning and Neural Architecture Search. Katib supports a number of ML frameworks, including TensorFlow, Apache MXNet, PyTorch, XGBoost, and others.

Github: https://github.com/kubeflow/katib

Getting started with Katib: https://www.kubeflow.org/docs/components/hyperparameter-tuning/hyperparameter/

Paper: https://arxiv.org/abs/2006.02085v1
Our Telegram channel: tglink.me/ai_machinelearning_big_data

🔗 kubeflow/katib
Repository for hyperparameter tuning. Contribute to kubeflow/katib development by creating an account on GitHub.
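Katib itself is configured through Kubernetes Experiment specs rather than code, but the core loop it automates, i.e. suggesting trials and tracking the best one, can be illustrated with a plain random search (the objective here is a hypothetical stand-in for a real training run):

```python
# What a tuning system like Katib automates, sketched as random search.
import random

def objective(lr, batch_size):
    # Hypothetical stand-in for a training run returning a validation loss.
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 * 1e-6

random.seed(0)
best = None
for _ in range(20):
    # Sample a trial from the search space (log-uniform lr, discrete batch size).
    trial = {"lr": 10 ** random.uniform(-4, -1),
             "batch_size": random.choice([16, 32, 64, 128])}
    loss = objective(**trial)
    if best is None or loss < best[0]:
        best = (loss, trial)

print(best)  # lowest loss found and the hyperparameters that produced it
```

Katib runs each trial as its own Kubernetes job and supports smarter suggestion algorithms (Bayesian optimization, Hyperband) behind the same interface.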
Interview with a Robot

🔗 Interview with a Robot
From a robot's point of view. Today we continue the topic of directed learning. This method does not require big data and works with short samples. It may be useful for...
End-to-End Adversarial Text-to-Speech

https://arxiv.org/abs/2006.03575

🔗 End-to-End Adversarial Text-to-Speech
Modern text-to-speech synthesis pipelines typically involve multiple processing stages, each of which is designed or learnt independently from the rest. In this work, we take on the challenging task of learning to synthesise speech from normalised text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Our proposed generator is feed-forward and thus efficient for both training and inference, using a differentiable monotonic interpolation scheme to predict the duration of each input token. It learns to produce high fidelity audio through a combination of adversarial feedback and prediction losses constraining the generated audio to roughly match the ground truth in terms of its total duration and mel-spectrogram. To allow the model to capture temporal variation in the generated audio, we employ soft dynamic time warping in the spectrogram-based prediction loss. The resulting model achieves a mean opinion
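The soft dynamic time warping the abstract uses in its spectrogram loss can be sketched in a few lines (after Cuturi & Blondel's soft-DTW; this is a simplified NumPy version on 1-D sequences, whereas the paper's loss operates on mel-spectrograms inside the training graph):

```python
# Simplified soft-DTW: a smooth, differentiable relaxation of DTW alignment cost.
import numpy as np

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW between two 1-D sequences with squared-error ground cost."""
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            prev = np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]])
            # Soft minimum replaces the hard min of classic DTW,
            # making the whole recurrence differentiable.
            softmin = -gamma * np.log(np.sum(np.exp(-prev / gamma)))
            R[i, j] = cost + softmin
    return R[n, m]

a = np.array([0.0, 1.0, 2.0])
print(soft_dtw(a, a))        # lower for identical sequences...
print(soft_dtw(a, a + 1.0))  # ...than for shifted ones
```

Because the soft minimum tolerates small misalignments, the loss penalizes spectrogram differences without forcing the generated audio to match the ground truth frame-for-frame.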
Recent Advances in Google Translate

🔗 Recent Advances in Google Translate
Posted by Isaac Caswell and Bowen Liang, Software Engineers, Google Research Advances in machine learning (ML) have driven improvements ...
🎥 Introduction to High Performance Computing for Machine Learning
👁 1 view, 1171 sec.
At some point, running a machine learning pipeline locally on a commercial-grade computer is no longer feasible. In this video I will show you the basics of high performance computing applied to machine learning.

Acknowledgment:

Took the background song from Youtube library!



Socials:
Github: https://github.com/yacineMahdid/artificial-intelligence-and-machine-learning

Twitter: https://twitter.com/CodeThisCodeTh1
Linkedin: https://www.linkedin.com/in/yacine-mahdid-809425163/


From wikipedia:
"A supercomputer i
🎥 Chat Bot With PyTorch - NLP And Deep Learning - Python Tutorial (Part 1)
👁 1 view, 1242 sec.
In this Python tutorial we build a simple chatbot using PyTorch and deep learning. I will also provide an introduction to some basic natural language processing (NLP) techniques.

1) Theory + NLP concepts (Stemming, Tokenization, bag of words)
2) Create training data
3) PyTorch model and training
4) Save/load model and implement the chat
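Step 1 above can be sketched as a preprocessing pipeline: tokenize a sentence, stem each token, and encode it as a bag-of-words vector over a fixed vocabulary (illustrative only; the video's own code uses NLTK's stemmer, and the crude suffix-stripping here is a stand-in):

```python
# Stemming, tokenization, and bag-of-words: the NLP preprocessing for the chatbot.
import re
import numpy as np

def tokenize(sentence):
    # Lowercase and split on anything that is not a letter or apostrophe.
    return re.findall(r"[a-z']+", sentence.lower())

def stem(word):
    # Crude suffix stripping as a stand-in for a real stemmer (e.g. PorterStemmer).
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(sentence, vocabulary):
    # 1.0 where a vocabulary word appears in the (stemmed) sentence, else 0.0.
    tokens = {stem(t) for t in tokenize(sentence)}
    return np.array([1.0 if w in tokens else 0.0 for w in vocabulary])

vocab = ["hello", "how", "are", "you", "thank", "bye"]
print(bag_of_words("Hello, how are you?", vocab))
```

These fixed-length vectors are what the PyTorch model in steps 2-3 trains on, one vector per training sentence, labeled with its intent tag.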

If you enjoyed this video, please subscribe to the channel!

Article "Contextual Chatbots with Tensorflow":
https://chatbotsmagazine.com/contextual-chat-bots-with-tensorfl