Just links
This is just a link aggregator of everything I consider interesting, especially DL and topological condensed matter physics. @EvgeniyZh
Forwarded from форель разбивает лоб
Now this is really cool: a census of the world's physicists!

Nature Publishing Group has launched a new journal, Nature Reviews Physics, and decided to put together something like this for the first issue so that people on Twitter would go and discuss it. And there really is something to discuss, and it is all packaged perfectly (too perfectly... sometimes you get the impression that academic stars like Barabási keep a couple of infographics specialists on staff full-time).

Over the past decades, the diversity of areas explored by physicists has exploded, encompassing new topics from biophysics and chemical physics to network science. However, it is unclear how these new subfields emerged from the traditional subject areas and how physicists explore them. To map out the evolution of physics subfields, here, we take an intellectual census of physics by studying physicists’ careers. We use a large-scale publication data set, identify the subfields of 135,877 physicists and quantify their heterogeneous birth, growth and migration patterns among research areas. We find that the majority of physicists began their careers in only three subfields, branching out to other areas at later career stages, with different rates and transition times. Furthermore, we analyse the productivity, impact and team sizes across different subfields, finding drastic changes attributable to the recent rise in large-scale collaborations. This detailed, longitudinal census of physics can inform resource allocation policies and provide students, editors and scientists with a broader view of the field’s internal dynamics.

https://www.nature.com/articles/s42254-018-0005-3
Identifying and Correcting Label Bias in Machine Learning https://arxiv.org/abs/1901.04966
Forwarded from Spark in me (Alexander)
Pre-trained BERT in PyTorch

https://github.com/huggingface/pytorch-pretrained-BERT

(1)
Model code here is just awesome.
The integrated DataParallel / DDP / FP16 wrappers are also awesome.

FP16 training via APEX just works (no idea about convergence yet, though).
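
For context, loading one of the released checkpoints with this library looks roughly like this. A minimal sketch, assuming the multilingual checkpoint name and the default two-output forward call shown in the repo's README; exact signatures may differ between versions:

```
# Minimal feature-extraction sketch with pytorch-pretrained-BERT.
# 'bert-base-multilingual-cased' is one of the released checkpoints; swap in whatever you need.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased')
model.eval()

text = '[CLS] pre-trained BERT in PyTorch [SEP]'
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # encoded_layers is a list of hidden states (one tensor per transformer layer),
    # pooled_output is the [CLS]-based sentence representation
    encoded_layers, pooled_output = model(input_ids)
```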

(2)
As for the model weights - I cannot really tell, since there is no dedicated Russian model.
The only problem I am facing now: with large embedding bags the batch size is literally 1-4, even for smaller models.

And training models with SentencePiece is kind of feasible for morphologically rich languages, but you will always worry about generalization.
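
For reference, a subword model like that is trained and applied with just a few SentencePiece calls. A rough sketch; corpus.txt, the model prefix and the vocabulary size are placeholder assumptions, not values from the post:

```
# Rough SentencePiece sketch: train a subword model on a raw corpus, then segment text.
# corpus.txt, 'ru_sp' and vocab_size=30000 are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    '--input=corpus.txt --model_prefix=ru_sp --vocab_size=30000 --character_coverage=0.9995'
)

sp = spm.SentencePieceProcessor()
sp.Load('ru_sp.model')
print(sp.EncodeAsPieces('пример предложения для токенизации'))
```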

(3)
I did not try the generative pre-training (or the sentence-prediction pre-training); I hope that properly initializing the embeddings will also work for a closed domain with a smaller model (they pre-train for 4 days on 4+ TPUs, lol).

(5)
Why even tackle such models?
Chat / dialogue / machine comprehension models are complex and require one-off feature engineering.
Being able to fine-tune something like BERT on publicly available benchmarks and then on your own domain can be a good way to embed complex situations (like questions in dialogues).
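
As a rough illustration of that tuning loop with pytorch-pretrained-BERT (the checkpoint name, the toy two-sentence dataset and the hyperparameters are placeholder assumptions, not anything from the original post):

```
# Minimal fine-tuning sketch with pytorch-pretrained-BERT on a toy binary task.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification
from pytorch_pretrained_bert.optimization import BertAdam

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertForSequenceClassification.from_pretrained('bert-base-multilingual-cased', num_labels=2)
model.train()

# Toy "dataset": two sentences with binary labels, only to show the call shapes.
texts = ['[CLS] this looks relevant [SEP]', '[CLS] this does not [SEP]']
labels = torch.tensor([1, 0])
ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(t)) for t in texts]
max_len = max(len(x) for x in ids)
input_ids = torch.tensor([x + [0] * (max_len - len(x)) for x in ids])
input_mask = torch.tensor([[1] * len(x) + [0] * (max_len - len(x)) for x in ids])
segment_ids = torch.zeros_like(input_ids)

optimizer = BertAdam(model.parameters(), lr=2e-5, warmup=0.1, t_total=10)

for _ in range(10):
    loss = model(input_ids, segment_ids, input_mask, labels)  # returns the loss when labels are passed
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```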

#nlp
#deep_learning