Apache Hadoop Code Quality: Production vs. Test
To get high-quality production code, it is not enough to maximize test coverage. Without a doubt, good results require the project's main code and its tests to work together as a tightly knit tandem, so tests deserve as much attention as the main code. A well-written test is what will catch a regression before it reaches production. To show that bugs in tests matter no less than bugs in production code, let's walk through another review of warnings from the PVS-Studio static analyzer. The target: Apache Hadoop.
🔗 Apache Hadoop Code Quality: Production vs. Test
Habr
Developing the C Compiler for the New Multiclet Neuroprocessor
At OS DAY 2016, a conference for developers of system and tooling software held in Innopolis (Kazan) on June 9-10, 2016, the discussion of a talk on the multicellular architecture raised the idea that it would be most effective for artificial intelligence workloads. The conditions for developing a new general-purpose processor aimed at AI tasks came together this year.
The Multiclet S2 neuroprocessor, whose design was first presented at the Huawei Innovation Forum 2019, is a further development of the multicellular architecture. It differs from earlier Multiclet chips in its instruction set, namely the introduction of new small-width data types (fixed- and floating-point) and operations on them. The cell count has been raised to 256 and the clock to 2.5 GHz, which should deliver a peak performance of 81.9 TFLOPS at 16F and thus make it comparable, for neural-network computation, to modern specialized ASIC TPUs (TPU-3: 90 TFLOPS at 16F).
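As a rough sanity check of that peak figure (a sketch, not a calculation from the article): 81.9 TFLOPS is consistent with 256 cells at 2.5 GHz only if each cell retires roughly 128 FP16 operations per cycle, and that per-cell throughput is an assumption made here purely for illustration.

```python
# Back-of-the-envelope check of the quoted 81.9 TFLOPS peak (FP16).
# The ops-per-cell-per-cycle figure is an assumption for illustration,
# not a number taken from the article.
cells = 256                        # stated cell count
clock_hz = 2.5e9                   # stated clock, 2.5 GHz
fp16_ops_per_cell_per_cycle = 128  # assumed per-cell FP16 throughput

peak_tflops = cells * clock_hz * fp16_ops_per_cell_per_cycle / 1e12
print(f"Estimated FP16 peak: {peak_tflops:.1f} TFLOPS")  # ~81.9
```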
Since how effectively a processor can be used depends to a large extent on the quality of its compiler's optimizations, an extensible code-optimization scheme has been developed.
Let's look at it in more detail.
🔗 Developing the C Compiler for the New Multiclet Neuroprocessor
Habr
Pandas Tips & Tricks: Need For Speed
🔗 Pandas Tips & Tricks: Need For Speed
A Personal Favorite 1-Liner
Medium
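The article's own one-liner is not reproduced here; as a generic illustration of the kind of speed-up such pandas tips usually chase, the sketch below (with made-up column names) replaces a row-wise apply with a vectorized expression.

```python
# Minimal sketch: vectorized column arithmetic vs. a row-wise apply.
# The DataFrame and column names are made up for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "price": np.random.rand(100_000),
    "qty": np.random.randint(1, 10, size=100_000),
})

# Row-wise apply: flexible, but calls Python code once per row.
slow = df.apply(lambda row: row["price"] * row["qty"], axis=1)

# Vectorized one-liner: same result, computed on whole columns at once.
fast = df["price"] * df["qty"]

assert np.allclose(slow, fast)
```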
The year in AI: 2019 ML/AI advances recap
🔗 The year in AI: 2019 ML/AI advances recap
It has become somewhat of a tradition for me to do an end-of-year retrospective of advances in AI/ML (see last year’s round up for…
Medium
Kaggle Reading Group: On NMT Search Errors and Model Errors: Cat Got Your Tongue? (Part 2) | Kaggle
🔗 Kaggle Reading Group: On NMT Search Errors and Model Errors: Cat Got Your Tongue? (Part 2) | Kaggle
This week we'll be continuing "On NMT Search Errors and Model Errors: Cat Got Your Tongue?" by Felix Stahlberg and Bill Byrne, published at EMNLP 2019. You can follow along with the paper here: https://www.aclweb.org/anthology/D19-1331.pdf
About Kaggle: Kaggle is the world's largest community of data scientists. Join us to compete, collaborate, learn, and do your data science work. Kaggle's platform is the fastest way to get started on a new data science project. Spin up a Jupyter notebook with a single cli
YouTube
🎥 Microsoft Cognitive Service to Add Image and Voice Intelligence to your apps
👁 1 view ⏳ 3564 sec.
Cognitive Services is a set of APIs that use the power of Machine Learning to enhance your application. Using these APIs, you can quickly add image recognition and analysis, speech recognition, text-to-speech capabilities, and many other features to your application.
In this presentation, you will learn about the capabilities of these APIs, how to test them, and how to call them via a REST web service and using some helpful .NET libraries.
This channel is all about latest trending technologies tutoria…
Vk
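As a rough sketch of what calling these APIs "via a REST web service" can look like from Python (not code from the talk): the resource name, API version, path, and visualFeatures values below are placeholders or assumptions to verify against the current Azure documentation; Ocp-Apim-Subscription-Key is the standard Cognitive Services auth header.

```python
# Hypothetical sketch of a Cognitive Services image-analysis call using
# the `requests` library. Endpoint path, API version, and feature names
# are assumptions/placeholders; check the current Azure docs before use.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
analyze_url = f"{endpoint}/vision/v3.1/analyze"                   # assumed path/version
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",   # standard auth header
    "Content-Type": "application/json",
}
params = {"visualFeatures": "Description,Tags"}        # assumed feature names
body = {"url": "https://example.com/some-image.jpg"}   # image to analyze

response = requests.post(analyze_url, headers=headers, params=params, json=body)
response.raise_for_status()
print(response.json())  # e.g. generated caption and tags for the image
```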
Demystifying the Confusion Matrix
🔗 Demystifying the Confusion Matrix
Not nearly as confusing as the name implies!
Medium
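As a quick companion to the topic (a minimal sketch, not code from the post), here is a confusion matrix computed with scikit-learn on made-up labels, with precision and recall read off from its cells.

```python
# Minimal confusion-matrix illustration; the labels are toy data.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Rows are true classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]] for the binary case with labels ordered [0, 1].
cm = confusion_matrix(y_true, y_pred)
print(cm)

tn, fp, fn, tp = cm.ravel()
print(f"precision={tp / (tp + fp):.2f}  recall={tp / (tp + fn):.2f}")
```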
🎥 Fuzzy Logic in Artificial Intelligence | Introduction to Fuzzy Logic & Membership Function | Edureka
👁 2 views ⏳ 1168 sec.
***AI and Deep Learning using TensorFlow: https://www.edureka.co/ai-deep-learning-with-tensorflow ***
This Edureka Live video on "Fuzzy Logic in AI" will explain what is fuzzy logic and how it is used to find different possibilities between 0 and 1. It also explains the architecture of this logic along with real-time examples.
(blog: https://www.edureka.co/blog/fuzzy-logic-ai/ )
-----------------------------------------------------------
Machine Learning Podcast - http://bit.ly/2IGLYCc
Complete Yout…
Vk
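To make the "degrees between 0 and 1" idea concrete (a toy sketch, not material from the video), here is a triangular membership function; the "warm" temperature range of 15-35 °C is an arbitrary assumption.

```python
# Tiny fuzzy-logic sketch: membership is a degree between 0 and 1
# rather than a hard true/false. The "warm" range is made up.
def triangular(x, a, b, c):
    """Triangular membership: 0 at or outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a temperature counts as "warm" (assumed 15-25-35 °C).
for t in (10, 18, 25, 31, 40):
    print(t, round(triangular(t, 15, 25, 35), 2))
```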
Named Entity Disambiguation Boosted with Knowledge Graphs
🔗 Named Entity Disambiguation Boosted with Knowledge Graphs
We combine text and graph based approaches to build a Named Entity Disambiguation pipeline.
🔗 Named Entity Disambiguation Boosted with Knowledge Graphs
We combine text and graph based approaches to build a Named Entity Disambiguation pipeline.
Medium
Named Entity Disambiguation Boosted with Knowledge Graphs
We combine text and graph based approaches to build a Named Entity Disambiguation pipeline.
🎥 Introduction and Logistics Advance AI Deep Reinforcement Learning Python (Part1)
👁 2 views ⏳ 1280 sec.
Hello Everyone, How Are You?
Today I'll share a video about Advance AI Deep Reinforcement Learning Python (Part 1)
Part :
1. Introduction and Outline
2. Where to get the Code
3. Tensor Flow
Please support us with subscribe this channel
Thank You
#artificialintelligence #Python
TensorFlow 2 Tutorial: Get Started in Deep Learning With tf.keras
🔗 TensorFlow 2 Tutorial: Get Started in Deep Learning With tf.keras
Predictive modeling with deep learning is a skill that modern developers need to know. TensorFlow is the premier open-source deep learning framework developed and maintained by Google. Although using TensorFlow directly can be challenging, the modern tf.keras API brings the simplicity and ease of use of Keras to the TensorFlow project. Using tf.keras allows you …
MachineLearningMastery.com
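In the spirit of the tutorial (a minimal sketch, not the tutorial's code), here is the define-compile-fit flow with tf.keras on made-up data; the layer sizes and hyperparameters are arbitrary choices.

```python
# Minimal tf.keras sketch: define, compile, and fit a small classifier
# on toy data. Layer sizes and hyperparameters are illustrative only.
import numpy as np
from tensorflow import keras

# Toy binary-classification data (1000 samples, 20 features).
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```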
Objective Video Quality Analysis at Airtime
🔗 Objective Video Quality Analysis at Airtime
by Caitlin O’Callaghan
Medium
Comparing the Effects of Annotation Type on Machine Learning Detection Performance http://openaccess.thecvf.com/content_CVPRW_2019/html/PBVS/Mullen_Comparing_the_Effects_of_Annotation_Type_on_Machine_Learning_Detection_CVPRW_2019_paper.html
🔗 CVPR 2019 Open Access Repository
What is My Data Worth? – The Berkeley Artificial Intelligence Research Blog
https://bair.berkeley.edu/blog/2019/12/16/data-worth/
🔗 What is My Data Worth?
The BAIR Blog
The Berkeley Artificial Intelligence Research Blog
What is adversarial machine learning, and how is it used today?
-Generative modeling, security, model-based optimization, neuroscience, fairness, and more!
Here's a fantastic video overview by Ian Goodfellow.
http://videos.re-work.co/videos/1351-ian-goodfellow
#ML #adversarialML #AI #datascience
🔗 Ian Goodfellow
At the time of his presentation, Ian was a Senior Staff Research Scientist at Google and gave an insight into some of the latest breakthroughs in GANs. Dubbed the 'Godfather of GANs', who better to get an overview from than Ian? Post discussion, Ian had one of the longest question queues that we have seen at one of our summits, skip the queue and watch his presentation from the comfort of your PC here
videos.re-work.co
Interrogating theoretical models of neural computation with deep inference
Bittner et al.: https://www.biorxiv.org/content/10.1101/837567v2
#Neuroscience
🔗 Interrogating theoretical models of neural computation with deep inference
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon – whether behavioral or in terms of neural activity – and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choices of model parameters. Historically, the gold standard has been to analytically derive the relationship between model parameters and computational properties. However, this enterprise quickly becomes infeasible as biologically realistic constraints are included into the model increasing its complexity, often resulting in ad hoc approaches to understanding the relationship between model and computation. We bring recent machine learning techniques – the use of deep generative models for probabilistic inference – to bear on this problem, learning distributions of parameters that produce the specified properties
bioRxiv
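The paper itself uses deep generative models for this inference step; as a much simpler toy that only conveys the goal of "learning a distribution of parameters that produces a specified property" (and is not the authors' method), here is a cross-entropy-method sketch over a made-up two-parameter "circuit".

```python
# Toy illustration of "find a distribution over model parameters that
# yields a target emergent property". The "circuit" is a made-up scalar
# map; the paper's actual approach uses deep generative models.
import numpy as np

rng = np.random.default_rng(0)

def emergent_property(theta):
    """Stand-in for simulating a circuit model and measuring a property."""
    return np.sin(theta[..., 0]) + 0.5 * theta[..., 1] ** 2

target = 1.0                                # property the circuit should exhibit
mu, sigma = np.zeros(2), np.ones(2) * 3.0   # initial Gaussian over parameters

for _ in range(50):  # cross-entropy method: refit to the best-matching samples
    samples = rng.normal(mu, sigma, size=(500, 2))
    errors = np.abs(emergent_property(samples) - target)
    elite = samples[np.argsort(errors)[:50]]          # 10% closest to the target
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("parameter distribution ~ N(mu, sigma):", mu, sigma)
print("property at mu:", emergent_property(mu))
```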
Chip Huyen
🔗 Chip Huyen
With 51 workshops, 1428 accepted papers, and 13k attendees, saying that NeurIPS is overwhelming is an understatement. I did my best to summarize the key tren...
Huyenchip
Key trends from NeurIPS 2019
[Twitter thread]
Machine Learning 2020: Enterprises Brace Yourself for These Trends
🔗 Machine Learning 2020: Enterprises Brace Yourself for These Trends
The world has come far ahead in technological evolution. Ideas and theories that were only considered as a dream at one time are now…
Medium
Your Guide to AI for Self-Driving Cars in 2020
🔗 Your Guide to AI for Self-Driving Cars in 2020
Self-driving cars, also referred to as autonomous cars, are cars which are capable of driving with little to no human input. A fully…
Medium
Professor Karl Friston - Frontiers publications.
https://loop.frontiersin.org/people/20407/overview
🔗 Karl Friston
Karl Friston is a neuroscientist and authority on brain imaging. He invented statistical parametric mapping: SPM is an international standard for analysing imaging data and rests on the general linear model and random field theory (developed with Keith Worsley). In 1994, his group developed voxel-based morphometry. VBM detects differences in neuroanatomy and is used clinically and as a surrogate in genetic studies. These technical contributions were motivated by schizophrenia research and theoretical studies of value-learning (with Gerry Edelman). In 1995 this work was formulated as the disconnection hypothesis of schizophrenia (with Chris Frith). In 2003, he invented dynamic causal modelling (DCM), which is used to infer the architecture of distributed systems like the brain. Mathematical contributions include variational (generalised) filtering and dynamic expectation maximization (DEM) for Bayesian model inversion and time-series analysis. Friston currently works on models of functional integration in the
Loop