Turning your Mobile Phone Camera into an Object Detector (on your own!)
It’s time to unlock the potential of your camera!
Towards Data Science
🎥 Adopting Machine Learning at Scale
👁 1 view ⏳ 1548 sec.
This real-world use case presents how Rabobank applies Machine Learning for Fraud Detection, as well as how Machine Learning can be adopted across the organization.
Speaker: Jan W Veldsink, Master in the art of AI at Nyenrode, Rabobank, and Grio.
Event: Machine Learning School in Seville, Spain, 2019.
Estimators, Loss Functions, Optimizers — Core of ML Algorithms
In order to understand how a machine learning algorithm learns from data to predict an outcome, it is essential to understand the…
Towards Data Science
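The estimator / loss / optimizer triad the article names can be shown in a few lines. This is a minimal illustrative sketch, not code from the article: a linear estimator, a mean-squared-error loss, and plain gradient descent as the optimizer, on made-up data.

```python
# Minimal illustration of the estimator / loss / optimizer triad:
# fit y = w*x + b by gradient descent on mean squared error.

def fit_linear(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0                      # estimator parameters
    n = len(xs)
    for _ in range(steps):
        # loss: MSE = (1/n) * sum((w*x + b - y)^2); below are its gradients
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * grad_w                 # optimizer step: plain gradient descent
        b -= lr * grad_b
    return w, b

# Data generated from y = 2x + 1, so the fit should recover w ≈ 2, b ≈ 1.
w, b = fit_linear([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Swapping the loss (e.g. absolute error) or the optimizer (e.g. adding momentum) changes the learning behaviour while the estimator stays the same, which is the separation the article's title points at.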
🎥 Machine Learning Tutorial Chap 3 | Part-1 Simple Linear Regression | GreyAtom
👁 1 view ⏳ 1975 sec.
Welcome to #DataScienceFridays! Rohit Ghosh, a deep learning scientist and an instructor at GreyAtom, will take us through Simple Linear Regression in machine learning through an introduction series.
Simple Linear Regression is a machine learning algorithm based on supervised learning where the regression model uses independent variables to predict the outcome of a dependent variable. It is mostly used for finding out the relationship between variables and forecasting.
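For a fit with a single independent variable, the lecture's topic has a closed-form solution. A sketch of ordinary least squares in plain Python (the data below is made up for illustration):

```python
# Simple linear regression via the ordinary-least-squares closed form:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).

def simple_linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The points lie exactly on y = 2x, so slope == 2.0 and intercept == 0.0.
slope, intercept = simple_linear_regression([1, 2, 3, 4], [2, 4, 6, 8])
```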
🎥 [Uber Seattle] Horovod: Distributed Deep Learning on Spark
👁 1 view ⏳ 1350 sec.
During this April 2019 meetup, Uber engineer Travis Addair introduces the concepts that make Horovod work, and walks through how to make use of Horovod on Spark to add distributed training to machine learning pipelines. Horovod is a distributed training framework for TensorFlow, PyTorch, Keras, and MXNet. Scaling to hundreds of GPUs, Horovod can reduce training time from hours to minutes with just a handful of lines added to existing single-GPU training processes.
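The operation at the heart of Horovod is an allreduce that averages gradients across workers each step. The sketch below emulates only that averaging in plain Python; it is not Horovod's API (real code would call something like `hvd.allreduce` over a network of GPUs).

```python
# Plain-Python stand-in for the gradient-averaging allreduce that
# data-parallel trainers such as Horovod perform each step.

def allreduce_mean(worker_grads):
    """Average per-parameter gradients computed by several workers."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers
            for i in range(n_params)]

# Three workers each computed gradients for two parameters.
avg = allreduce_mean([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# avg == [3.0, 4.0]
```

After this averaged gradient is applied on every worker, all model replicas stay in sync, which is why only a handful of lines need to change in a single-GPU training loop.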
Text Classification Algorithms: A Survey
https://medium.com/text-classification-algorithms/text-classification-algorithms-a-survey-a215b7ab7e2d
Text feature extraction and pre-processing for classification algorithms are very significant. In this section, we start to talk about text cleaning since most of the documents contain a lot of…
Medium
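The cleaning and feature-extraction steps the survey opens with can be sketched in a few lines: lowercase, tokenize, drop stop words, then count term frequencies (a bag-of-words representation). The stop-word list here is a tiny illustrative stand-in, not the survey's.

```python
# Minimal text cleaning + bag-of-words feature extraction.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "of", "and"}  # illustrative only

def clean_tokens(text):
    text = text.lower()
    tokens = re.findall(r"[a-z']+", text)  # keep word characters, drop punctuation
    return [t for t in tokens if t not in STOP_WORDS]

def bag_of_words(text):
    return Counter(clean_tokens(text))

features = bag_of_words("The cat and the dog: a CAT is here.")
# features['cat'] == 2, features['dog'] == 1, stop words removed
```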
Data Demystified — DIKW model
A data scientist is a person who is better at statistics than any software engineer and better at software engineering than any…
Towards Data Science
What Project Management Tools to Use for Data Science Projects
Traditional project management methodologies do not work as stand-alone approaches in data science. Knowing the strengths of each for…
Towards Data Science
MONet: Unsupervised Scene Decomposition and Representation
The ability to decompose scenes in terms of abstract building blocks is crucial for general intelligence. Where those basic building blocks share meaningful properties, interactions and other regularities across scenes, such decompositions can simplify reasoning and facilitate imagination of novel scenarios. In particular, representing perceptual observations in terms of entities should improve data efficiency and transfer performance on a wide range of tasks. Thus we need models capable of discovering useful decompositions of scenes by identifying units with such regularities and representing them in a common format. To address this problem, we have developed the Multi-Object Network (MONet). In this model, a VAE is trained end-to-end together with a recurrent attention network -- in a purely unsupervised manner -- to provide attention masks around, and reconstructions of, regions of images. We show that this model is capable of learning to decompose and represent challenging 3D scenes into semantically mean…
arXiv.org
Evolution of Traditional Statistical Tests in the Age of Data
The difference between significance testing in its more research-based/academic origins and its evolution in more dynamic application…
Towards Data Science
Segmenting Credit Card Customers with Machine Learning
Identifying marketable segments with unsupervised machine learning
Towards Data Science
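A typical unsupervised segmentation of this kind uses k-means clustering. As a hedged sketch of the mechanics (not the article's code), here is Lloyd's algorithm on one-dimensional made-up "spend" values:

```python
# Lloyd's k-means on 1-D values: alternate assignment and update steps.

def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two clearly separated spend groups; centers should land near 2 and 100.
centers = kmeans_1d([1, 2, 3, 99, 100, 101], centers=[0.0, 50.0])
```

Real customer segmentation would cluster multi-dimensional feature vectors (balance, purchases, payments, etc.), but the assignment/update loop is the same.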
Principal Component Analysis for Dimensionality Reduction
Learn how to perform PCA by learning the mathematics behind the algorithm and executing it step-by-step with Python!
Towards Data Science
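The mathematics the article steps through reduces to: center the data, form the covariance matrix, and take its leading eigenvector. A self-contained 2-D sketch using power iteration (real code would use numpy or sklearn; this only shows the mechanics on made-up data):

```python
# PCA for 2-D data: center, build the 2x2 covariance matrix, then find its
# leading eigenvector (the first principal component) by power iteration.

def leading_component(points, iters=200):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    # Power iteration converges to the eigenvector of the largest eigenvalue.
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        nx = cxx * vx + cxy * vy
        ny = cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy

# The points lie along y = x, so the first component is ~(0.707, 0.707).
vx, vy = leading_component([(1, 1), (2, 2), (3, 3), (4, 4)])
```

Projecting each centered point onto this vector gives the one-dimensional representation that preserves the most variance, which is the dimensionality-reduction step.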
Intelligent computing in Snowflake
In a little over a week, I’m heading over to Snowflake’s inaugural user summit in San Francisco, where I’ll be speaking on data sharing in…
Towards Data Science
How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits
We significantly reduce the cost of factoring integers and computing discrete logarithms over finite fields on a quantum computer by combining techniques from Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of $10^{-3}$, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048 bit RSA integers, our construction's spacetime volume is a hundredfold less than comparable estimates from earlier works (Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ig…
arXiv.org
A new tool helps us understand what an AI is actually thinking
https://www.technologyreview.com/f/610439/making-sense-of-neural-networks-febrile-dreams/
Google researchers developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird. What they did: The team built a tool that combines several techniques to provide people with a clearer idea of how neural networks make decisions.
MIT Technology Review
KakaoBrain/torchgpipe
A GPipe implementation in PyTorch. Contribute to kakaobrain/torchgpipe development by creating an account on GitHub.
GitHub
Augmenting correlation structures in spatial data using deep generative models
https://arxiv.org/abs/1905.09796
State-of-the-art deep learning methods have shown a remarkable capacity to model complex data domains, but struggle with geospatial data. In this paper, we introduce SpaceGAN, a novel generative model for geospatial domains that learns neighbourhood structures through spatial conditioning. We propose to enhance spatial representation beyond mere spatial coordinates, by conditioning each data point on feature vectors of its spatial neighbours, thus allowing for a more flexible representation of the spatial structure. To overcome issues of training convergence, we employ a metric capturing the loss in local spatial autocorrelation between real and generated data as stopping criterion for SpaceGAN parametrization. This way, we ensure that the generator produces synthetic samples faithful to the spatial patterns observed in the input. SpaceGAN is successfully applied for data augmentation and outperforms compared to other methods of synthetic spatial data generation. Finally, we propose an ensemble learning frame…
arXiv.org
Fundamentals of Statistics (Основы статистики)
Our Telegram channel - tglink.me/ai_machinelearning_big_data

🎥 00 - Fundamentals of Statistics. About the course
👁 737 views ⏳ 76 sec.
Lecturer: Anatoliy Karpov
https://stepik.org/76

🎥 01 - Fundamentals of Statistics. Introduction
👁 1349 views ⏳ 3847 sec.
Lecturer: Anatoliy Karpov
1. 0:00 General information about the course
2. 1:32 Population and sample
2.1 1:32 The concept of a population and a sa…

🎥 02 - Fundamentals of Statistics. Comparing means
👁 359 views ⏳ 4638 sec.
Lecturer: Anatoliy Karpov
1. The t-distribution
2. Comparing two means; Student's t-test
3. Testing a distribution for normality, Q-Q plot
4. …

🎥 03 - Fundamentals of Statistics. Correlation and regression
👁 281 views ⏳ 6792 sec.
Lecturer: Anatoliy Karpov
1. The concept of correlation
2. Conditions for applying the correlation coefficient
3. Regression with one independent variable
4. Hypothes…

🎥 04 - Fundamentals of Statistics. Analysis of nominal data
👁 231 views ⏳ 7503 sec.
Lecturer: Anatoliy Karpov
1. Problem statement
2. Pearson distance
3. Pearson's chi-squared distribution
4. Computing the p-value
5. Analysis of tab…

🎥 05 - Fundamentals of Statistics. Logistic regression and nonparametric methods
👁 163 views ⏳ 8859 sec.
Lecturer: Anatoliy Karpov
1. Logistic regression. Problem statement.
2. Model without predictors. Intercept-only model
3. Model with one nominal…

🎥 06 - Fundamentals of Statistics. Cluster analysis and principal component analysis
👁 173 views ⏳ 5970 sec.
Lecturer: Anatoliy Karpov
1. Cluster analysis with the k-means method
2. Can cluster analysis "be wrong"?
3. How to determine the optimal number of cl…

🎥 07 - Fundamentals of Statistics. More on linear regression
👁 164 views ⏳ 8245 sec.
Lecturer: Anatoliy Karpov
1. Introduction
2. Linearity of the relationship
3. Logarithmic transformation of variables
4. The problem of heteroscedasticity
5. Mult…

🎥 08 - Fundamentals of Statistics. Mixed regression models
👁 216 views ⏳ 3165 sec.
Lecturer: Ivan Ivanchei
1. Introduction
2. Violation of the assumption of independent observations
3. Mixed regression models. Implementation in R
4. Statistic…

🎥 09 - Fundamentals of Statistics. Introduction to bootstrap
👁 143 views ⏳ 3923 sec.
Lecturer: Arseniy Moskvichev
1. The jackknife
2. Bootstrap
https://stepik.org/2152
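The bootstrap idea the course closes with fits in a few lines: estimate the sampling variability of a statistic by resampling the data with replacement. A sketch in plain Python with made-up data:

```python
# Bootstrap: resample the data with replacement many times and recompute
# the statistic (here, the mean) to see how much it varies.
import random

def bootstrap_means(data, n_resamples=1000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return means

data = [2, 4, 4, 4, 5, 5, 7, 9]
means = bootstrap_means(data)
# The bootstrap means scatter around the sample mean (5.0); their spread
# estimates the standard error, and their quantiles give a confidence interval.
```

The jackknife covered in the same lecture is the deterministic cousin: instead of random resamples, it recomputes the statistic once per leave-one-out subsample.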