This is not a paper, but it is awesome!
The Polycam app uses the built-in LiDAR sensor in the latest iPad Pro to scan the surroundings and build a textured 3D mesh. The mesh is “generally accurate down to about one inch”. The process is also near-real-time: processing is done locally on the tablet, with single-room captures taking “only seconds to process”, making it possible to watch the mesh build up as you walk around. It looks like one of the best 3D scanning apps out there (for arbitrary objects, see the 3D sofa example here).
However, it relies on LiDAR, which is found only in the latest iPad Pro and the upcoming iPhone 12 Pro. It would be much more exciting if they used pure RGB-based techniques, e.g. SLAM, which requires neither LiDAR nor a depth camera. I will come back to this and briefly discuss some techniques for building 3D shapes from images in future posts.
YouTube
Polycam preview
3D scanning with the Polycam app on iOS. Learn more at https://polycam.ai/
There is another cool app, in3D, created by my mates, which builds your 3D avatar from a 360° video capturing you from different angles. They achieve compelling results, with avatars that capture fine shape details and come automatically rigged (see the avatar example). The app is also currently available only for iPhones, but at least it does not require a LiDAR sensor 😅.
YouTube
in3D app: 3D body scanning with an iPhone
App: https://apple.co/2FcBZ7B
Web: http://in3d.io/
Scan yourself into GTA V, Second Life or VRChat.
Contact us: hello@in3d.io
Scientists from the University of Washington broke a longstanding record for approximating the notorious NP-hard Travelling Salesman Problem (TSP). This optimization problem, which seeks the shortest (or least expensive) round trip through a collection of cities, has applications ranging from DNA sequencing to ride-sharing logistics.
There had been no advances on this front since 1976, when Nicos Christofides came up with an algorithm that efficiently finds approximate solutions: round trips that are at most 50% longer than the best possible round trip.
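To get a feel for what such a guarantee means, here is a toy sketch (my illustration, not the new algorithm) of the even simpler MST "double-tree" heuristic, which guarantees tours at most 100% longer than optimal; Christofides replaces the tree-doubling step with a minimum-weight matching on odd-degree vertices to reach the 50% bound.

```python
# Toy illustration (not the record-breaking algorithm): the classic MST
# "double-tree" heuristic for metric TSP. A preorder walk of the minimum
# spanning tree, with repeated cities shortcut, costs at most 2 * OPT;
# Christofides' matching step improves this factor to 1.5.
import math

def mst_prim(points):
    """Minimum spanning tree as an adjacency list (Prim's algorithm)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        adj[i].append(j)
        adj[j].append(i)
        in_tree.add(j)
    return adj

def double_tree_tour(points):
    """Preorder DFS over the MST; shortcutting repeats keeps cost <= 2 * OPT."""
    adj, tour, seen = mst_prim(points), [], set()

    def dfs(v):
        seen.add(v)
        tour.append(v)
        for u in adj[v]:
            if u not in seen:
                dfs(u)

    dfs(0)
    return tour

cities = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
tour = double_tree_tour(cities)
cost = sum(math.dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, round(cost, 2))
```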
Funnily enough, the novel algorithm improves on the previous approximation guarantee by a whopping margin of 2.0 × 10^-36!!! (Yes, that is 0.2 billionth of a trillionth of a trillionth of a percent.) But please don't be too disappointed (although I was). This result breaks a theoretical and psychological barrier that had stood for more than forty years. And hopefully it will spark the broader community's interest in this problem and lead to further advances in the coming years. Moreover, it is likely (although not yet proven) that the proposed algorithm does much better than its predecessor in most cases, with the tiny margin guaranteed only in the worst case.
As a Deep Learning evangelist, my first impression after reading the headline was that this was another victory for neural networks. However, I was wrong; not all the cool stuff is done with the help of NNs. The method is based on machinery called the geometry of polynomials, a little-known discipline within theoretical computer science.
We are living in incredible times! Maybe somebody will finally prove P = NP?
Quanta Magazine
Computer Scientists Break Traveling Salesperson Record
After 44 years, there’s finally a better way to find approximate solutions to the notoriously difficult traveling salesperson problem.
[GIF: JuergenSchmidhuber.gif]
New paper on neural painting: "Stylized Neural Painting".
The main idea is to train a neural network to render individual brushstrokes, parametrized by color, shape, and transparency.
An input image is approximated by a fixed number of brushstrokes which are blended based on their transparency values.
To find the optimal parameters for each brushstroke, the authors propose running an iterative optimization procedure, in the same spirit as the pioneering style-transfer work of Gatys et al.
Another novelty of the paper is an Optimal Transport loss, which gives more meaningful gradients than a photometric loss when brushstrokes are sparse.
The authors even created a Google Colab notebook where you can play around with the method.
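To make the pipeline concrete, here is a minimal, hedged sketch of such a stroke-optimization loop in PyTorch. The paper uses a trained neural network as the stroke renderer; a soft Gaussian blob stands in here so that the snippet runs end to end, and all names and dimensions are illustrative rather than the authors' code.

```python
# Minimal sketch of stroke optimization. Each stroke is parametrized by
# position, size, color, and transparency; a differentiable "renderer"
# (here a Gaussian blob instead of the paper's neural renderer) lets us
# optimize all strokes by gradient descent against the target image.
import torch

H = W = 64
N_STROKES = 64
target = torch.rand(3, H, W)                   # image to approximate
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")

def render(params):
    """Rasterize strokes and alpha-blend them back-to-front onto a canvas."""
    canvas = torch.zeros(3, H, W)
    for p in params:
        x, y, size, r, g, b, a = torch.sigmoid(p)        # keep values in (0, 1)
        blob = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (0.01 * size + 1e-4))
        alpha = a * blob                                 # per-pixel transparency
        color = torch.stack([r, g, b]).view(3, 1, 1)
        canvas = alpha * color + (1 - alpha) * canvas    # transparency blending
    return canvas

params = torch.randn(N_STROKES, 7, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.02)
for step in range(300):              # iterative optimization, as in Gatys et al.
    loss = (render(params) - target).pow(2).mean()   # paper swaps in an OT loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the renderer is differentiable, gradients flow from the image loss back to every stroke parameter; this pixelwise term is exactly the place where the paper's Optimal Transport loss is substituted.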
📃 https://arxiv.org/pdf/2011.08114.pdf
🌐 https://jiupinjia.github.io/neuralpainter/
💾 https://github.com/jiupinjia/stylized-neural-painting
Keynotes of the Turing Award winners at AAAI 2020 (Geoffrey Hinton, Yann LeCun, Yoshua Bengio). I especially liked Yann LeCun's talk on self-supervised learning.
🎥 Video
📃 LeCun's slides
It is very exciting when the technology we are developing helps us appreciate historical moments.
I have stumbled upon an amazing video of an interview with Yuri Gagarin, the first cosmonaut, from July 1961. The interview was conducted by the BBC during Gagarin’s 4-day visit to Great Britain as part of a Soviet Exhibition at Earl's Court in London.
And now we can appreciate the interview in 4K(!!!), enhanced by neural networks, courtesy of @denissexy. Gagarin has never felt so alive to millennials!
Enjoy!
YouTube
[AI stuff] Interview enhanced by neural networks. Yuri Gagarin, first cosmonaut, London, 1961.
Source video: https://www.bbc.co.uk/programmes/p00fwcbn
You can reach me here: 💌 https://shir-man.com
Recently I found an amazing interview with Yuri Gagarin in...
Forwarded from Karim Iskakov - канал (Karim Iskakov)
Turning a selfie video into a Deformable NeRF for high-fidelity renderings from novel viewpoints.
The work beats previous methods (Neural Volumes, NeRF) in quality by a wide margin. Just look at those curls at 0:46 (the timecode is clickable)!
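The core trick, as I read the paper, is a per-frame deformation field that warps sample points into a shared canonical frame before querying a standard NeRF. Below is a minimal sketch; the layer sizes, the latent dimension, and all names are my illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of the Deformable NeRF idea: a deformation MLP, conditioned
# on a per-frame latent code, maps observed points into a canonical frame,
# where a standard NeRF MLP returns color and density for volume rendering.
import torch
import torch.nn as nn

def mlp(din, dout, width=128, depth=4):
    layers, d = [], din
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, dout))

LATENT = 8
deform = mlp(3 + LATENT, 3)   # (point, frame code) -> offset into canonical frame
canonical_nerf = mlp(3, 4)    # canonical point -> (R, G, B, density)

def query(points, frame_latent):
    """points: (N, 3) samples along camera rays from one video frame."""
    latent = frame_latent.expand(points.shape[0], -1)
    canonical_pts = points + deform(torch.cat([points, latent], dim=-1))
    return canonical_nerf(canonical_pts)  # then volume-render as in vanilla NeRF

rgb_sigma = query(torch.rand(1024, 3), torch.zeros(1, LATENT))
print(rgb_sigma.shape)  # torch.Size([1024, 4])
```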
🌐 nerfies.github.io
📝 arxiv.org/abs/2011.12948
📉 @loss_function_porn
That feeling when a dumb robot dances better than you. Boston Dynamics keeps surprising us with its amazing manual control of these robots.
YouTube
Do You Love Me?
Our whole crew got together to celebrate the start of what we hope will be a happier year: Happy New Year from all of us at Boston Dynamics. www.BostonDynamics.com.
A Swiss village has been completely reproduced in virtual reality (VR) using footage from drone and handheld cameras. Now you can step into the heart of the old settlement and feel its history while sitting in your chair. Just amazing!
https://twitter.com/i/status/1343112828069113856
Twitter
ねとらぼ
It really gets you...! A secluded Swiss village has been recreated wholesale in VR; the footage, which lets you walk through scenery steeped in history, is full of romance. https://t.co/g2UHBXchCA via @itm_nlab https://t.co/5v43pZz2fk
My first YouTube video is out!
In this video I explain how we earned $6,000 by finishing in the top 3 of a Kaggle autonomous driving competition organized by Lyft.
It is crucial for an autonomous vehicle to anticipate what will happen next on the road to plan its actions accordingly.
The goal of this competition was to predict the future motion of all the cars (and any other agents) around the autonomous vehicle. In the video, I present our CNN + Set Transformer solution, which placed in the top 3 on the private leaderboard.
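For a sense of the general shape of such a model before watching: below is a hedged sketch of a typical motion-prediction baseline, where a CNN encodes a bird's-eye-view raster around the vehicle and a head outputs several candidate trajectories with confidences. Our actual solution also runs a Set Transformer over agents, and all dimensions and names here are illustrative.

```python
# Hedged sketch of a multi-modal motion-prediction model: a CNN over a
# rasterized bird's-eye-view scene, predicting N_MODES candidate futures
# of T (x, y) waypoints each, plus a confidence per mode.
import torch
import torch.nn as nn

N_MODES, T = 3, 50

class MotionPredictor(nn.Module):
    def __init__(self, in_channels=25):        # map + agent-history channels
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(128, N_MODES * (T * 2 + 1))

    def forward(self, raster):
        out = self.head(self.backbone(raster)).view(-1, N_MODES, T * 2 + 1)
        trajs = out[..., :-1].reshape(-1, N_MODES, T, 2)  # candidate futures
        conf = out[..., -1].softmax(dim=-1)               # mode probabilities
        return trajs, conf

trajs, conf = MotionPredictor()(torch.rand(4, 25, 224, 224))
print(trajs.shape, conf.shape)   # (4, 3, 50, 2), (4, 3)
```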
Video: https://youtu.be/3Yz8_x38qbc
Solution source code: https://github.com/asanakoy/kaggle-lyft-motion-prediction-av
Solution write-up: https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/205376
Please let me know in the comments what you think about such a format.
YouTube
How to Earn Money by Winning a KAGGLE Competition: Lyft Motion Prediction for Autonomous Vehicles
How to secure a 3rd place and win $6000. Solution of our team which predicts future cars' motion for autonomous driving.
🔨 Source code: https://github.com/asanakoy/kaggle-lyft-motion-prediction-av
Link to the Kaggle challenge: https://www.kaggle.com/c/lyft…
Set Transformer original paper: https://arxiv.org/abs/1810.00825. I will write about it in more detail later.
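Until then, here is my simplified reading of its two key blocks (the real model adds layer norms and feed-forward sublayers): a Set Attention Block for permutation-equivariant interaction between set elements, and Pooling by Multihead Attention, where a learned seed attends over the set to produce a permutation-invariant summary.

```python
# Simplified sketch of Set Transformer building blocks (my paraphrase of
# the paper, not the reference implementation).
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Set Attention Block: self-attention over set elements (order-equivariant)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    def forward(self, x):                      # x: (B, set_size, dim)
        return x + self.attn(x, x, x)[0]

class PMA(nn.Module):
    """Pooling by Multihead Attention: a learned seed queries the set (order-invariant)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    def forward(self, x):                      # -> (B, 1, dim)
        return self.attn(self.seed.expand(x.shape[0], -1, -1), x, x)[0]

agents = torch.rand(4, 30, 64)                 # e.g. 30 agents with 64-d features
summary = PMA(64)(SAB(64)(agents))
print(summary.shape)                           # torch.Size([4, 1, 64])
```

Shuffling the 30 agents leaves the PMA output unchanged, which is exactly the property you want when a scene has no natural agent ordering.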
I use Lex Fridman's podcasts as an opportunity to listen in on conversations with very intelligent people while having breakfast. These conversations always give me the motivation to keep up with my own research work.
I have just finished listening to Lex's conversation with Prof. Sergey Levine. Very insightful!
Sergey is a brilliant researcher in the fields of deep RL and computer vision, and a very humble and genuine person. I was lucky to meet him in person and talk to him a little at my first big scientific conference, NeurIPS 2016.
A piece of advice for students from Sergey Levine:
"It is important to not be afraid to spend time imagining the kind of outcome that you might like to see. If someone who is a student considering a career in AI takes a little while, sits down and thinks like "What do I really want to see a machine do? What do I want to see a robot do? What do I want to see a natural language system do?". Imagine it almost like a commercial for a future product or something that you'd like to see in the world. And then actually sit down and think about the steps that are necessary to get there. And hopefully, that thing is not a better number on ImageNet classification, it's probably like an actual thing that we can't do today. That would be really AWESOME.
Whether it's a robot butler or an awesome healthcare decision-making support system. Whatever it is that you find inspiring. And I think that thinking about that and then backtracking from there and imagining the steps needed to get there will actually do much better research, it will lead to rethinking the assumptions, it will lead to working on the bottlenecks other people aren't working on."
YouTube
Sergey Levine: Robotics and Machine Learning | Lex Fridman Podcast #108
Sergey Levine is a professor at Berkeley and a world-class researcher in deep learning, reinforcement learning, robotics, and computer vision, including the development of algorithms for end-to-end training of neural network policies that combine perception…
Hi guys! New video on my YouTube channel!
Computer Vision for animals is a fast-growing and very promising sub-field.
In this video I explain how to reconstruct a 3D model of an animal from a single photo.
The method is based on a cycle-consistency loss between image pixels and vertices of a 3D mesh; a sketch of the idea follows below.
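Here is my hedged paraphrase of that loss, not the authors' code: a network maps each foreground pixel to a point on the template mesh surface, and reprojecting that point through the predicted camera should land back on the same pixel. All functions below are hypothetical stand-ins.

```python
# Sketch of a pixel -> mesh-surface -> pixel cycle-consistency loss.
# `pix2surface` and `camera_project` stand in for the learned mapping and
# the predicted camera; both would be differentiable networks in practice.
import torch

def cycle_loss(pix2surface, camera_project, pixels, mask):
    """
    pix2surface:    (N, 2) pixel coords -> (N, 3) points on the canonical mesh
    camera_project: (N, 3) mesh points  -> (N, 2) image coords
    pixels: (N, 2) foreground pixel coordinates; mask: (N,) per-pixel weights
    """
    surface_pts = pix2surface(pixels)          # pixel -> mesh surface
    reprojected = camera_project(surface_pts)  # mesh surface -> back to image
    return (mask * (reprojected - pixels).pow(2).sum(-1)).mean()

# Toy stand-ins so the snippet runs; the real mapping and camera are learned.
pixels = torch.rand(256, 2)
loss = cycle_loss(lambda p: torch.cat([p, p[:, :1]], dim=-1),  # fake lift to 3D
                  lambda s: s[:, :2],                          # fake projection
                  pixels, torch.ones(256))
print(loss)  # zero for this identity-like toy mapping
```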
Reference papers:
1) "Articulation-aware Canonical Surface Mapping", Kulkarni et al., CVPR 2020
2) "Canonical Surface Mapping via Geometric Cycle Consistency", Kulkarni et al., ICCV 2019
Method source code: GitHub repo.
YouTube
How to Reconstruct 3D Model of an Animal from a single Photo via Cycle Consistency [Deep Learning]
Computer Vision for Animals is one of the growing sub-fields with huge potential. In this video, I explain 2 papers for reconstructing 3D meshes of animals just from photos.
Timecodes:
0:00 Intro
0:24 Intuitive example
0:48 Goal
0:58 Geometrical priors…