These are the most helpful/interesting channels for me now:
https://tttttt.me/axisofordinary
https://tttttt.me/j_links
https://tttttt.me/ComplexSys
https://tttttt.me/emptyset_of_ideas
https://tttttt.me/brenoritvrezorkre_channel
https://tttttt.me/AI_DeepLearning
Axis of Ordinary
Memetic and cognitive hazards.
Substack: https://axisofordinary.substack.com/
Pathetic low-frequenciers
That's my personal channel of some crazy stuff. Every day I see a lot of strange things across the internet, so I decided to publish some of them here. Beware of: weird math, crazy pics, cybernecrophilia, nerdish humor.
Forwarded from Axis of Ordinary
by Matthew Barnett
Some improvements we might start to see in large language models within 2 years:
- Explicit memory that will allow the model to retrieve documents and read them before answering questions https://arxiv.org/abs/2112.04426
- A context window of hundreds of thousands of tokens, allowing the model to read and write entire books https://arxiv.org/abs/2202.07765
- Dynamic inference computation that depends on the difficulty of the query, allowing the model to "think hard" about difficult questions before spitting out an answer https://arxiv.org/abs/2207.07061
- Alignment principles that help the model produce more reliable and more useful output than naive RLHF, such as Anthropic's "Constitutional AI" approach https://www.anthropic.com/constitutional.pdf
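The first item, retrieval before answering, can be sketched in a few lines. This is a hypothetical toy (naive word-overlap scoring, made-up function names), not the actual RETRO method, which retrieves from trillions of tokens using learned embeddings; it only illustrates the retrieve-then-read pattern:

```python
# Toy sketch of retrieval-augmented answering: before responding,
# retrieve the most relevant documents and put them in the prompt
# so the model can "read" them first. All names are hypothetical.

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend the retrieved passages as context for the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The RETRO model retrieves from a database of trillions of tokens.",
    "Constitutional AI trains models against a set of written principles.",
    "Paris is the capital of France.",
]
print(build_prompt("What does the RETRO model retrieve from?", docs))
```

A real system would replace the word-overlap scorer with nearest-neighbor search over dense embeddings, but the prompt-assembly step looks much the same.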