Forwarded from Denis Sexy IT
Recently I found the Instagram account of Evgeny Schwenk, an artist from Tomsk who redraws characters from Soviet cartoons as if they were real people. I applied the neural.love neural network to make his drawings even more realistic. Just a bit of Photoshop (mainly for the hats) and here we go.
I guess Karlsson-on-the-Roof is my best result.
Aloha guys!
I'm very excited to announce that I have joined Facebook Reality Labs (FRL) as a Research Scientist! Before that, I interned twice at Facebook AI Research, and now I will work in the FRL division, which focuses on virtual and augmented reality. Moving from academia to industry, I hope I will still have enough freedom in choosing research directions.
Experimented with generating images from text prompts with VQGAN and CLIP. Some cool results:
1. "Minecraft Starcraft"
2. "Polygonal fast food"
3. "Holy war against capitalism"
4. "Modern cubist painting"
Colab notebook
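For intuition, the core of the VQGAN+CLIP approach is gradient ascent on the CLIP similarity between the generated image and the text prompt. Below is a toy numpy sketch of just that optimization pattern: the generator and image encoder are collapsed into an identity map and the "text embedding" is a random vector, so everything except the loop structure is a hypothetical stand-in for the real models.

```python
import numpy as np

# Toy stand-ins: in the real pipeline, `latent` feeds a VQGAN decoder and the
# result is encoded by CLIP; here both maps are the identity.
rng = np.random.default_rng(0)
dim = 64
text_emb = rng.normal(size=dim)
text_emb /= np.linalg.norm(text_emb)   # "text prompt" embedding (random stand-in)

latent = rng.normal(size=dim)          # the variable we optimize

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

lr = 1.0
history = [cosine(latent, text_emb)]
for _ in range(200):
    # Analytic gradient of cosine similarity w.r.t. the latent vector.
    n = np.linalg.norm(latent)
    grad = text_emb / n - (latent @ text_emb) * latent / n**3
    latent += lr * grad                # gradient ascent on similarity
    history.append(cosine(latent, text_emb))

print(history[0], history[-1])
```

In the actual notebook the gradient flows through the VQGAN decoder and CLIP image encoder via backprop, but the loop is the same: nudge the latent so the rendered image scores higher against the prompt.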
Here's a very recent article from Google Brain that uses diffusion models for super-resolution.
The results are stunning! Their model beats the GAN-based SOTA method. The video shows an example of a 64x64 picture being upscaled to 1024x1024. No source code yet, though.
Project page
Paper
I also wrote about OpenAI's paper on diffusion models earlier.
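For reference, the diffusion forward process that such super-resolution models learn to invert has a simple closed form. A minimal numpy sketch (assuming a linear beta schedule; the actual denoising U-Net, conditioned on the upsampled low-res input, is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (an assumption; SR3 and follow-ups tune theirs).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.uniform(-1, 1, size=(64, 64))   # stand-in for a high-res image in [-1, 1]

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

# Early timesteps stay close to the clean image; late ones are near pure noise.
c_early = np.corrcoef(x0.ravel(), q_sample(x0, 10).ravel())[0, 1]
c_late = np.corrcoef(x0.ravel(), q_sample(x0, T - 1).ravel())[0, 1]
print(c_early, c_late)
```

Generation runs this chain backwards, step by step; for super-resolution, the bicubically upsampled low-res image is simply concatenated to the network input at every denoising step.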
OpenAI has disbanded its robotics research team. This is the same team that, for example, taught a robotic arm to solve a Rubik's cube with Reinforcement Learning. The decision was made because the company considers research more promising in areas where no physical equipment is required (except for servers, of course) and where plenty of data is already available. There are also economic reasons: Software as a Service is a business with a much higher margin. Yes, the irony is that the non-profit organization OpenAI is increasingly concerned with profit. This is understandable, because it takes a lot of money to create artificial general intelligence (AGI) that can learn every task a person can do, and more.
It's no secret that robotics research is also a very costly activity that requires a lot of investment, so not many companies are involved in it. Among the large and successful ones, only Boston Dynamics comes to mind, and it has already changed owners several times. Did you know that Google acquired Boston Dynamics in 2013, then scaled down its own robotics research program and sold Boston Dynamics to the Japanese firm SoftBank in 2017? The adventures of Boston Dynamics did not end there: in December 2020 SoftBank resold 80% of the shares (a controlling stake) to the automaker Hyundai. This looks somewhat fishy, as if every company realizes after a few years that it is still hard to turn a profit from Boston Dynamics and passes it on to the next buyer.
In any case, it is very interesting to watch which focus areas the titans of AI research choose. I'm just a bit sad that robots are still lagging behind.
Source: VentureBeat, "OpenAI disbands its robotics research team"
Researchers from NVIDIA (in particular Tero Karras) have once again "solved" image generation.
This time, the scientists managed to remove aliasing in the generator. In a nutshell, the reason for the artifacts was careless signal processing in the CNN, resulting in incorrect discretization: the signal could not be accurately reconstructed, which led to the unnatural "jerks" noticeable in the video. The authors modified the generator to prevent these negative sampling effects.
The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales.
The code is not available yet, but I'm sure NVIDIA will release it soon.
Read more about Alias-Free GAN here.
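A quick numpy illustration of the underlying signal-processing point (not the paper's code): decimating a signal without band-limiting it first folds high frequencies into low ones, which is exactly the kind of sampling error the authors eliminate inside the generator.

```python
import numpy as np

n = 256
t = np.arange(n)
# A frequency well above the post-decimation Nyquist limit (64/2 = 32 cycles).
signal = np.sin(2 * np.pi * 100 * t / n)

# Naive stride-4 decimation: the 100-cycle component aliases into the passband,
# so the decimated signal still swings with full amplitude.
naive = signal[::4]

# Band-limit first (crude ideal low-pass in the Fourier domain), then decimate:
# the offending frequency is removed before sampling, so almost nothing remains.
spec = np.fft.rfft(signal)
spec[n // 8:] = 0                  # keep only frequencies below the new Nyquist
filtered = np.fft.irfft(spec, n)[::4]

print(np.abs(naive).max(), np.abs(filtered).max())
```

In the alias-free generator, every nonlinearity and resampling step is wrapped in such low-pass filtering so that features move smoothly (equivariantly) under sub-pixel translation instead of snapping to the pixel grid.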
German startup aims to become "Europe's OpenAI"
The German startup Aleph Alpha, based in Heidelberg (the city where I did my PhD), recently raised $27M in a Series A round. The task they have set themselves is ambitious (perhaps overly so): they want to create another breakthrough in AI, something similar to OpenAI's GPT-3.
The company was founded in 2019, and, strangely, I only discovered it today. I looked at their ML team and did not find a single person with major scientific achievements (say, at the level of a professor), which disappointed me. Their ML team consists of three recent PhD students and Connor Leahy, who is known for co-founding EleutherAI, a non-profit created to reproduce and open-source the GPT-3 model. Perhaps they are betting on Connor, but, frankly speaking, Connor is not a researcher: he has no scientific publications, and EleutherAI is simply reproducing OpenAI's results. When OpenAI was founded, it was immediately clear that they had a stellar team which would certainly produce something cool.
My impressions are mixed. Aleph Alpha has partnerships with German government agencies. They promote themselves in the style of "we are Europe's last chance to claim a place in the field of AI" and "we will be based purely in Europe and will push European values and ethical standards." They also promise to be more open than OpenAI (lol) and commit to open source. Perhaps, though, they will just build some kind of large platform of AI solutions and sit on government funding. It would be a kind of AI consulting; they even have a job posted on their website for exactly this purpose: AI Delivery & Consulting. The whole affair smacks of a government-backed operation, as in the case of Palantir (at least partially).
I'm not a startup expert, but it seems Europe is very hungry for innovation. They want to keep up with the United States and China, so they hand out money at the first opportunity, especially if the company promises to work closely with the government. What do you think, gentlemen?
Source: TechCrunch, "German startup Aleph Alpha raises $27M Series A round to build 'Europe's OpenAI'"
Paint Transformer: Feed Forward Neural Painting with Stroke Prediction
Earlier I wrote about a style transfer method that, instead of optimizing pixels, directly optimizes parameterized brush strokes. A new work uses a Transformer architecture to predict stroke parameters in one (well, almost one) forward pass of the network. In fact, their Transformer operates in the spirit of a recurrent network.
The original image is passed through the network four times at different resolutions, starting from 16x downsampled and ending with the original size. At each next forward pass, the canvas rendered with the strokes predicted in the previous passes is added to the input, along with the original image at a higher resolution. Thus, the network gradually adds new strokes to the canvas, starting with larger ones (painted on a low-resolution canvas) and ending with smaller ones (on a high-resolution canvas). The network is trained on synthetic data generated on the fly.
Paper
Code
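The coarse-to-fine loop described above can be sketched schematically. Everything model-specific here is a stand-in: `predict_and_render` (a toy that just blends the canvas toward the target) replaces the Transformer-plus-stroke-renderer, and the pass schedule is illustrative rather than the paper's exact one.

```python
import numpy as np

def resize(img, size):
    """Nearest-neighbour resize of a square image; enough for a toy example."""
    idx = np.arange(size) * img.shape[0] // size
    return img[np.ix_(idx, idx)]

def predict_and_render(target, canvas):
    """Hypothetical stand-in for 'predict strokes and render them on the canvas'."""
    return canvas + 0.5 * (target - canvas)

# A smooth target image, so the coarse passes already capture most of it.
x = np.arange(128)
target = np.sin(2 * np.pi * x / 128)[:, None] * np.cos(2 * np.pi * x / 128)[None, :]

canvas = np.zeros((8, 8))                  # start 16x downsampled
for size in (8, 16, 32, 64, 128):          # coarse-to-fine passes up to full size
    canvas = resize(canvas, size)          # carry strokes from the previous pass
    canvas = predict_and_render(resize(target, size), canvas)

err = np.abs(canvas - target).mean()
print(err)
```

The key property the sketch preserves: each pass only refines what earlier, coarser passes already painted, so large structures come from low-resolution passes and fine detail from the last ones.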