FastNeRF: High-Fidelity Neural Rendering at 200FPS
Great minds think alike: FastNeRF is built on the same idea as NeX, with a slightly different implementation.
The main idea is to factorize the voxel color representation into two independent components: one that depends only on the voxel position p = (x, y, z) and one that depends only on the ray direction v. Essentially, you predict K different (R, G, B) values for every voxel and K weighting scalars H_i(v) for each of them:

color(x, y, z) = RGB_1 * H_1 + RGB_2 * H_2 + ... + RGB_K * H_K

Then two neural networks, f(x, y, z) = [RGB_1, ..., RGB_K] and h(v) = [H_1, ..., H_K], are trained to predict the color components and their weights. After training, the values of these functions are cached for every voxel in the volume, which enables swift online rendering.
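To make the factorization concrete, here is a toy sketch of the render-time lookup (my own illustration, not the authors' code; K and the cache resolutions are made up, and random arrays stand in for the cached network outputs):

```python
import numpy as np

# Toy illustration of the factorized, cached color described above.
# f(x, y, z) -> K RGB components, h(v) -> K scalar weights. At render time both are
# read from dense caches, so computing a sample's color is just an inner product.

K = 8        # number of color components (a hyperparameter)
grid = 64    # toy cache resolution for positions (the real caches are much denser)
dirs = 32    # toy cache resolution for quantized ray directions

# Random arrays standing in for the cached outputs of the two networks.
rgb_cache = np.random.rand(grid, grid, grid, K, 3)   # cached f(x, y, z) per voxel
weight_cache = np.random.rand(dirs, dirs, K)         # cached h(v) per direction bin

def color(pos_idx, dir_idx):
    """color = H_1*RGB_1 + ... + H_K*RGB_K: two cache lookups and an inner product, no MLP."""
    rgb_components = rgb_cache[pos_idx]   # (K, 3)
    weights = weight_cache[dir_idx]       # (K,)
    return weights @ rgb_components       # final (R, G, B)

print(color((10, 20, 30), (5, 7)))
```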
⚔️ NeX (➖) vs FastNeRF (➕):
➖ NeX achieves ~60 FPS and PSNR 27 on the Real Forward-Facing dataset using an Nvidia 2080Ti, while FastNeRF reaches 50-80 FPS and PSNR 26 using an Nvidia RTX 3090 (the frame rate varies from scene to scene). 200 FPS is only achieved on synthetic datasets or at lower resolutions.
➖ FastNeRF requires a bit more memory to cache its scene representation, because NeX uses a sparse representation along the depth direction (only 192 slices) and shares the RGB_i values between every 12 consecutive depth planes.
➖ The same core idea: factorize the color representation and predict K RGB values for every voxel instead of a single one.
➕ FastNeRF attempts to justify this factorization via the rendering equation (see the image with an intuitive explanation below).
➕ Very similar architectures. NeX has one extra learnable tensor that represents the average RGB colors independent of the ray direction, while all other components are learned by neural networks; FastNeRF learns everything with neural nets.
➖ NeX has more extensive experiments, including ones with fixed basis functions (i.e., computing H_i(v) with Fourier series, spherical harmonics, etc.; see the short sketch after this list). Interestingly, using a Fourier series instead of the neural network h(v) yields only a slightly worse PSNR score and an even better LPIPS score.
➖ NeX also introduces and evaluates on the more challenging Shiny objects dataset. It would be interesting to see FastNeRF results on that dataset as well.
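For reference, this is roughly what such a fixed basis looks like: the weights H_i(v) computed from the first four real spherical harmonics instead of a learned h(v) (a generic sketch of the idea, not NeX's exact basis):

```python
import numpy as np

# H_i(v) from a fixed basis: real spherical harmonics of degree 0 and 1 (K = 4 here).
def sh_basis(v):
    x, y, z = v / np.linalg.norm(v)   # unit view direction
    return np.array([
        0.28209479,       # Y_0^0  (constant term)
        0.48860251 * y,   # Y_1^-1
        0.48860251 * z,   # Y_1^0
        0.48860251 * x,   # Y_1^1
    ])

H = sh_basis(np.array([0.0, 0.3, 1.0]))   # weights H_1..H_4 for this view direction
```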
❗️👉 Overall, I would say the approaches are very similar with some minor implementation differences. One would need to combine both implementations to get the best result.
🌐 FastNeRF paper
Unfortunately, there are no video results available and no code for FastNeRF yet.
This Marilyn Monroe never existed 👌🏻.
StyleGAN + Latent space regressor + CLIP (probably).
Thanks @metasemantic on Twitter
DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using Depth Oracle Networks
(Graz Uni, FRL)
❗️ Another attempt to speed up NeRF: 15 FPS at 800x800.
The number of samples required for each view ray can be significantly reduced when local samples are placed around surfaces in the scene. The authors propose a depth oracle network, which predicts ray sample locations for each view ray with a single network evaluation. They show that using a classification network over logarithmically discretized and spherically warped depth values is essential to encode surface locations, rather than directly estimating depth.
DONeRF is a combination of these techniques: a dual-network design with a depth oracle network as the first step and a locally sampled shading network for ray accumulation.
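Here is a rough sketch of those two sampling ideas (my own toy illustration under assumed near/far planes and bin counts, not the authors' code):

```python
import numpy as np

# (1) Logarithmically discretize depth along a ray into class bins for the oracle.
def log_depth_bins(near, far, num_bins):
    """Bin edges spaced logarithmically between the near and far planes."""
    return np.exp(np.linspace(np.log(near), np.log(far), num_bins + 1))

def depth_to_class(depth, bins):
    """Index of the logarithmic bin containing a surface depth (the oracle's classification target)."""
    return int(np.clip(np.searchsorted(bins, depth) - 1, 0, len(bins) - 2))

# (2) Place the shading network's samples locally around the depth the oracle predicts.
def local_samples(predicted_depth, num_samples=16, half_width=0.1):
    """A narrow window of samples around the predicted surface instead of the whole ray."""
    return np.linspace(predicted_depth - half_width,
                       predicted_depth + half_width,
                       num_samples)

bins = log_depth_bins(near=0.5, far=20.0, num_bins=128)
target_class = depth_to_class(3.7, bins)        # training target for the depth oracle
samples = local_samples(predicted_depth=3.7)    # a handful of samples instead of 100+ per ray
```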
➕ 48x speedup over NeRF with equal or better quality. Obviously, it is not as fast as NeX or FastNeRF, but the approach is different.
📝 Paper
🌐 Proj page
TED Talk with Yann LeCun
in which Yann discusses his current research into self-supervised machine learning, how he's trying to build machines that learn with common sense (like humans) and his hopes for the next conceptual breakthrough in AI.
▶️ Watch
An open-source 2.7-billion-parameter GPT-3-like model was released
As you probably know, OpenAI has not released the source code or pre-trained weights for their 175-billion-parameter language model GPT-3. Instead, they decided to create a commercial product and exclusively license GPT-3 to Microsoft.
But open-source enthusiasts from eleuther.ai have released the weights of 1.3B and 2.7B parameter models of their replication of GPT-3. It is the largest (afaik) publicly available GPT-3 replica. The primary goal of this project is to replicate a full-sized GPT-3 model and open source it to the public, for free.
The models were trained on an open-source dataset, The Pile, which consists of 22 smaller, high-quality datasets combined (825 GB of text in total). Larger GPT models (10B and full GPT-3 size) are not available yet, but should follow soon.
The 2.7B model likely won't fit into the Colab TPU's memory, and you may need a larger TPU pod to fine-tune it. Sampling from it, however, works just fine.
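If you just want to try sampling, here is a minimal sketch, assuming you use the Hugging Face mirror of the released checkpoint (EleutherAI/gpt-neo-2.7B) instead of the Mesh-TensorFlow code linked below:

```python
# Needs the `transformers` library and roughly 10+ GB of RAM/VRAM for the fp32 weights.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
out = generator(
    "Will AI enslave the humankind in the next 20-30 years?",
    max_length=100,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])
```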
This is the first time I have tried a GPT model of this kind, and I was not too impressed (see my fav results below). Maybe 2.7 billion params are not enough yet to leave the copywriters without their jobs 🤷🏼♂️. Let's wait for a bigger model.
⚙️ Code in Mesh-TensorFlow
📦 1.3B and 2.7B model weights and configs
👫 Colab notebook
I decided to ask the AI if it will kill us, meatbags, in the near future. This is what I got in response...
Prompt: Will AI enslave the humankind in the next 20-30 years?
==============
GPT-3 Answers:
==============
The increasing pace of AI is not going to decrease. As of now the AI revolution has just began. The effects of AI on people and the future is not known yet, but if that is the AI’s fate, then the exposure of the human population is going to increase drastically.
There is a possibility that you will be in one of the AI enslaved societies.
The AI Revolution is a powerful technologies that is coming to the world in the near future. The AI revolution will be implemented as a protection mechanism for the protection of planet earth from dangerous climate change and increasing nuclear war. The AI has the potential to be our savior for the future. It is going to be a technology that is going to change the world drastically but there is a possibility that this is simply an illusion that we are living in a world of AI and it is just a pure mass hallucination, and all those projections actually represent the AI’s downfall and not an AI savior.
.I am excited to see what the AI revolution is going to bring.
.I believe that it is going to bring great changes to our civilization.
.I believe that it is going to bring great changes in human civilization.
~~~
Well, the last 10 years or so have been awful for everyone. It's not AI enslaving the humans in the next 20 or 30, it's automation on a global scale.
~~~
No. There will be mass deaths, and we'll probably know them as AI-driven holocaust or AI driven genocides. AI will have tools to justify and defend those crimes.
~~~
We already have the technology to create and control the world. I think we're already there. That's not the problem.
~~~
If there is a zombie apocalypse, perhaps we may find a way for AI to take over from humanity yet again.
~~~
Well, the last 10 years or so have been awful for everyone. It's not AI enslaving the humans in the next 20 or 30, it's automation on a global scale.
~~~
A prediction: AI will be able to look at the world around it, and decide what the world would look like without humans in it. Once the AI judge the world without humans, we'll create a world where humans are on the edge of our world and the AI is on the edge of the universe.
Why do models often attend to salient words, and how does this evolve throughout training?
This work tries to open the black box of attention training. Early in training, LSTM attention models first learn how to translate individual words from bag-of-words co-occurrence statistics, which then drives the learning of attention. The authors propose a framework explaining why attention weights obtained by standard training often correlate with saliency, and how multi-head attention can increase performance by improving training dynamics rather than expressiveness.
Read more below 👇🏻
Forwarded from DL in NLP (nlpcontroller_bot)
Approximating How Single Head Attention Learns
Snell et al., [Berkeley]
arxiv.org/abs/2103.07601
A look inside LSTM seq2seq with attention training dynamics. The main idea of the paper is KTIW – Knowledge to Translate Individual Words. To explain the dynamics, the authors divide training into two stages: uniform attention (KTIW) and non-uniform attention.
In the first stage of model training, attention does not change significantly from the uniform, and the model mainly learns to translate individual words (KTIW, a.k.a. dictionary translation). After KTIW is learned, attention starts forming its patterns, and this process is driven by the KTIW. As correct word translations are already more probable, now attention mainly needs to align the words from the source and target language.
To quantitatively test the hypothesis, they develop a new lexical probe that is essentially hard attention. The most impressive result is that attention cannot learn even a simple copy operation if KTIW is not learned.
🔥New DALL-E? Paint by Word🔥
Fresh Blogpost!
In this post, I give a brief overview of the recent MIT paper Paint by Word and compare it to DALL-E. The authors introduce a novel method that can paint arbitrary new concepts described by text into an image, at any location specified by the user in the form of a mask. The proposed Paint by Word method can also generate a full image from just a textual description.
👉 Read more in the Blogpost
There is also a Telegram Instant View of the post, but it is better to read it in a regular browser, as Telegram doesn't render LaTeX formulas.
Prepare your ears 🧏🏼♂️ - The Robot Brains Podcast
Pieter Abbeel, renowned professor at Berkeley, Director of the Berkeley Robot Learning Lab, and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab, has launched a new podcast about AI.
https://www.therobotbrains.ai/
Let me know in comments if you want me to share my list of fav AI/Machine Learning podcasts
Texel has created a Telegram bot for virtual try-on
Virtual try-on has been getting more attention recently. There is a bot, @Texel_Materia_Bot, where anyone can upload their photo and dress up a bit. The GIF above shows how it looks on a perfect photo.
Read about the Texel bot
But I gave the bot a hard time 😂. I don't know why, but one foot becomes bare very often. It is interesting how robust it is to oversized clothes (first image), and apparently the algorithm includes some sort of parametric 3D shape fitting (in the spirit of SMPL) and inpainting (see how some regions behind the original jacket were reconstructed).