Forwarded from Pavel Durov
Today we are adding native support for comments in channels. So once you update Telegram, you’ll be able to leave comments in some channels, including this one.
Throughout the next 10 days I’ll be posting stuff here to try this feature out.
What I like about our implementation of comments is that they are indistinguishable from a group chat. In fact, all comments in a channel are hosted in a group attached to that channel.
This allows for many possibilities both for commenters (e.g. adding voice messages, stickers, GIFs etc. to comments) and for admins (e.g. limiting voice messages, stickers, GIFs etc. in comments).
Scalable digital fitting room based on the first full-body Deepfake
The next generation of deepfake performs a full-body swap, including head, hair, skin tone, and body shape modification. In practice, the technology is aimed at a brand-agnostic digital fitting room that works on existing content and doesn't require any content creation on the brand side.
To demonstrate other applications and raise community interest, the developer created a video showcasing many features of this new generation of deepfake.
YouTube: https://www.youtube.com/watch?v=15q41GjtyZs
Link: http://www.beyondbelief.ai
#deepfake #fittingroom #CV #DL
learning to summarize from human feedback
by openai
the authors collect a high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary & use that model as a reward function to fine-tune a summarization policy using reinforcement learning. they apply this method to a version of the tl;dr dataset of reddit posts & find that their models significantly outperform both human reference summaries & much larger models fine-tuned with supervised learning alone
the researchers focused on english text summarization, as it’s a challenging problem where the notion of what makes a "good summary" is difficult to capture without human input
these models also transfer to cnn/dm news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. furthermore, they conduct extensive analyses to understand the human feedback dataset & fine-tuned models. they establish that their reward model generalizes to a new dataset & that optimizing their reward model results in better summaries than optimizing rouge according to humans
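the core of the pipeline is a reward model trained on pairwise human comparisons: given two summaries of the same post, it should score the human-preferred one higher. a minimal sketch of that comparison loss (illustrative code and toy numbers, not openai's implementation):

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise comparison loss: push the reward of the human-preferred
    summary above the reward of the rejected one.
    loss = -log(sigmoid(r_preferred - r_rejected)), averaged over pairs."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# toy rewards for three comparison pairs (made-up numbers)
r_pref = torch.tensor([1.2, 0.3, 2.0])
r_rej = torch.tensor([0.8, 0.9, -0.5])
print(reward_model_loss(r_pref, r_rej))  # small when preferred summaries score higher
```

the trained reward model then provides the reward signal for reinforcement-learning fine-tuning of the summarization policy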
blogpost: https://openai.com/blog/learning-to-summarize-with-human-feedback/
paper: https://arxiv.org/abs/2009.01325
code: https://github.com/openai/summarize-from-feedback
#nlp #rl #summarize
Nvidia released a technology to change face alignment in video
Nvidia has unveiled AI face-alignment that means you're always looking at the camera during video calls. Its new Maxine platform uses GANs to reconstruct the unseen parts of your head — just like a deepfake.
Link: https://www.theverge.com/2020/10/5/21502003/nvidia-ai-videoconferencing-maxine-platform-face-gaze-alignment-gans-compression-resolution
#NVidia #deepfake #GAN
Forwarded from Binary Tree
Python 3.9.0 is released!
Major new features of the 3.9 series, compared to 3.8
Some of the major new features and changes in Python 3.9 are:
- PEP 573, Module State Access from C Extension Methods
- PEP 584, Union Operators in dict
- PEP 585, Type Hinting Generics In Standard Collections
- PEP 593, Flexible function and variable annotations
- PEP 602, Python adopts a stable annual release cadence
- PEP 614, Relaxing Grammar Restrictions On Decorators
- PEP 615, Support for the IANA Time Zone Database in the Standard Library
- PEP 616, String methods to remove prefixes and suffixes
- PEP 617, New PEG parser for CPython
- BPO 38379, garbage collection does not block on resurrected objects;
- BPO 38692, os.pidfd_open added that allows process management without races and signals;
- BPO 39926, Unicode support updated to version 13.0.0;
- BPO 1635741, when Python is initialized multiple times in the same process, it does not leak memory anymore;
- A number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up using PEP 590 vectorcall;
- A number of Python modules (_abc, audioop, _bz2, _codecs, _contextvars, _crypt, _functools, _json, _locale, operator, resource, time, _weakref) now use multiphase initialization as defined by PEP 489;
- A number of standard library modules (audioop, ast, grp, _hashlib, pwd, _posixsubprocess, random, select, struct, termios, zlib) are now using the stable ABI defined by PEP 384.
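A quick taste of a few of these features in 3.9 (a minimal, illustrative snippet, not from the release notes):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # PEP 615: IANA time zones in the stdlib

# PEP 616: string methods to remove prefixes and suffixes
assert "BPO-38379".removeprefix("BPO-") == "38379"
assert "report.csv".removesuffix(".csv") == "report"

# PEP 585: built-in collections usable as generic types (no typing.List needed)
def tail(values: list[int], n: int = 3) -> list[int]:
    return values[-n:]

print(tail([1, 2, 3, 4, 5]))  # [3, 4, 5]

# PEP 615: timezone-aware datetime without third-party libraries
print(datetime(2020, 10, 5, 12, 0, tzinfo=ZoneInfo("Europe/Amsterdam")))
```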
#python, #release
PEP 584 — new operator "|" that can be used to merge two dictionaries
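A minimal example (illustrative values):

```python
defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9000, "debug": True}

merged = defaults | overrides   # new dict; right-hand operand wins on duplicate keys
print(merged)                   # {'host': 'localhost', 'port': 9000, 'debug': True}

defaults |= overrides           # |= updates in place, like dict.update()
```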
Ford is reported to be working on a two-legged, human-shaped delivery robot
#RL #bostondynamics #delivery #autonomousrobots
By the way, we appreciate comments in English, so thank you for taking part in the discussions.
Forwarded from PDP-11🚀
NVIDIA DPU
by Forbes
⚡️Domain-specific processors (accelerators) are playing a greater role in off-loading CPUs and improving the performance of computing systems.
🤖The NVIDIA BlueField-2 DPU (Data Processing Unit), a new domain-specific computing technology, is a SmartNIC programmable through the company’s Data-Center-Infrastructure-on-a-Chip software (DOCA SDK). Off-loading processing to a DPU can result in overall cost savings and improved performance for data centers.
👬 NVIDIA’s current DPU lineup includes two PCIe cards: the BlueField-2 and BlueField-2X DPUs. BlueField-2 is based on the ConnectX®-6 Dx SmartNIC combined with powerful Arm cores. BlueField-2X includes all the key features of a BlueField-2 DPU, enhanced with an NVIDIA Ampere GPU’s AI capabilities that can be applied to data center security, networking and storage tasks.
Read more about DPUs:
- Product page
- Mellanox product brief
- Servethehome
- Nextplatform
- Nextplatform
Waymo started driverless tests in Phoenix
This #Google company plans to expand the tests to cover the whole state later.
Blog: https://blog.waymo.com/2020/10/waymo-is-opening-its-fully-driverless.html
Redditors’ experience: https://www.reddit.com/r/waymo/comments/j7rphd/4_minute_full_video_in_waymo_one_no_driver_short/
#autonomousrobots #selfdriving #rl #DL
StyleGAN2 with adaptive discriminator augmentation (ADA)
Github: https://github.com/NVlabs/stylegan2-ada
ArXiV: https://arxiv.org/abs/2006.06676
#StyleGAN #GAN #DL #CV
A cool open contest that is NOT on Kaggle – AIJ Contest 2020 🎉
◼️ Digital Peter: Recognition of Peter the Great’s manuscripts
October 9 – November 8
Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (CV, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History of the Russian Academy of Sciences, Federal Archival Agency of Russia and the Russian State Archive of Ancient Acts.
Participants are invited to create an algorithm for line-by-line recognition of manuscripts written by Peter the Great.
◼️ NoFloodWithAI: Flash floods on the Amur river
October 9 – November 1
NoFloodWithAI is a special track with a socially important theme prepared jointly with the Ministry of Emergency Situations, Ministry of Natural Resources and Rosgidromet of Russia.
Participants are invited to develop an algorithm for short-term forecasting of water levels in the Amur River for the following settlements: Dzhalinda, Blagoveshchensk, Innokentievka, Leninskoye, Khabarovsk, Komsomolsk-on-Amur, Nikolaevsk-on-Amur for 10 days in advance in order to prevent emergency situations in Russia’s regions. The results of the contest will be reused to mitigate environmental risks and minimize economic damage wrought on the regions.
◼️ AI4Humanities: ruGPT-3
October 16 – November 1
This track is developed especially for those familiar with artificial neural networks. It will help you learn about ruGPT-3, a promising technology able to generate complex, meaningful texts from just one request in natural language. For example, it can help you answer most questions from basic or unified national exams (OGE or EGE), write Java code for a request like "Please make a website for the online shop", come up with a business idea for a new startup, or write new popular science articles.
more on the project page: https://ods.ai/tracks/aij2020
#ods #aij #contest #peter #floods #rugpt3
On Single Point Forecasts for Fat-Tailed Variables
Fundamental paper, applicable to software development deadline estimation, #COVID dynamics forecasts, and everything else. A great reminder that sometimes the more correct answer is "TBD" rather than any point estimate.
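A toy illustration of the point (mine, not from the paper): for a fat-tailed variable the running sample mean never really settles, so any single-number forecast is hostage to the next extreme observation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pareto-distributed samples with tail index 1.2: the mean exists, the variance is infinite
samples = rng.pareto(1.2, size=100_000) + 1.0

running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)
for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:>7}: running mean = {running_mean[n - 1]:.2f}")
# the estimate keeps jumping as rare huge draws arrive;
# a single point forecast hides that instability
```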
ArXiV: https://arxiv.org/pdf/2007.16096.pdf
#Taleb #fattails
Forwarded from Находки в опенсорсе (Finds in Open Source)
> How to make CPython faster.
> We want to speed up CPython by a factor of 5 over the next four releases.
> See the plan for how this can be done.
> Making CPython faster by this amount will require funding. Relying on the goodwill and spare time of the core developers is not sufficient.
This includes JIT and other awesome features for #python
https://github.com/markshannon/faster-cpython
We need this!