Researching Deep Meaning Of
Forwarded from Axis of Ordinary
by Matthew Barnett

Some improvements we might start to see more in large language models within 2 years:

- Explicit memory that will allow them to retrieve documents and read them before answering questions

- A context window of hundreds of thousands of tokens, allowing the model to read and write entire books

- Dynamic inference computation that depends on the difficulty of the query, allowing the model to "think hard" about difficult questions before spitting out an answer

- Alignment principles that help the model produce more reliable and more useful output than naive RLHF, such as Anthropic's "Constitutional AI" approach
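The first improvement above, explicit memory, can be sketched in a few lines. This is a toy illustration, not any particular system's implementation: the bag-of-words overlap scorer stands in for an embedding index, and the `answer` function stands in for a model call that would receive the retrieved documents as context.

```python
# Toy sketch of "explicit memory": retrieve relevant documents,
# then answer with them as context. The scoring function and the
# answer step are hypothetical stand-ins, not a real system's API.

def tokenize(text):
    """Lowercase bag-of-words; a real system would use embeddings."""
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(documents,
                    key=lambda d: len(q & tokenize(d)),
                    reverse=True)
    return scored[:k]

def answer(query, documents):
    """Retrieve, then 'read' the context before answering."""
    context = retrieve(query, documents)
    # A real model would condition its generation on `context` here.
    return {"query": query, "context": context}

docs = [
    "Surface codes suppress quantum errors as distance grows.",
    "Constitutional AI trains models against written principles.",
    "Variational eigensolvers run on photonic processors.",
]
result = answer("How does Constitutional AI work?", docs)
```

The point of the sketch is the separation of concerns: memory lives outside the model's weights, so it can be updated without retraining.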
Forwarded from Complex Systems Studies
Complex systems in the spotlight: next steps after the 2021 Nobel Prize in Physics

The 2021 Nobel Prize in Physics recognized the fundamental role of complex systems in the natural sciences. To celebrate this milestone, this editorial presents the point of view of the editorial board of JPhys Complexity on the achievements, challenges, and future prospects of the field. To distinguish the voice and opinion of each editor, the editorial consists of a series of editor perspectives and reflections on a few selected themes. A comprehensive and multi-faceted view of the field of complexity science emerges. We hope and trust that this open discussion will inspire future research on complex systems.

Suppressing quantum errors by scaling a surface code logical qubit (Google Quantum AI)
A variational eigenvalue solver on a photonic quantum processor