Nodir's notebook
2.62K subscribers
Engineer πŸ‡ΊπŸ‡ΏπŸ‡ΊπŸ‡Έ

The views are my own and do not represent my employer.
I liked this talk.
Benjamin Mann is a co-founder of Anthropic. Prior to Anthropic, Ben was one of the architects of GPT-3 at OpenAI. He left OpenAI driven by the mission to ensure that AI benefits humanity. In this talk, Ben opens up about the accelerating progress in AI and the urgent need to steer it responsibly.

In this conversation, we discuss:
1. The inside story of leaving OpenAI with the entire safety team to start Anthropic
2. How Meta's $100M offers reveal the true market price of top AI talent
3. Why AI progress is still accelerating (not plateauing), and how most people misjudge the exponential
4. Ben's "economic Turing test" for knowing when we've achieved AGI - and why it's likely coming by 2027-2028
5. Why he believes 20% unemployment is inevitable
6. The AI nightmare scenarios that concern him most - and how he believes we can still avoid them
7. How focusing on AI safety created Claude's beloved personality
8. What three skills he's teaching his kids instead of traditional academics


https://youtu.be/WWoyWNhx2XU?si=g0lkoY7gAU-bYRjd
πŸ”₯15❀6πŸ‘6
Just some simple thoughts on jobs.

Some people believe there will be no work/jobs in the post-AGI world. I don't think so, even though I consider myself AGI-pilled - mostly because I'm cynical about human nature.

I cannot imagine a world without any problems. Look at the current state: there are wars, climate change, a drug epidemic, idiotic country leaders, poor education. Those will take a long time to fix. As long as there are problems, there will be work to do.

Second, it is true that work != job. Still, I believe there will always be some scarcity of something, and in particular I don't expect the gap between poor and rich (or low status vs high status) to diminish any time soon. I don't expect humans will ever be satisfied - there is always more to desire. Earning that will require doing work in exchange for what they want, or for something that can get them what they want (aka money, or some other form of credit). That's a job.

And yes, it is quite possible that those jobs will be much easier, to the extent that you might not consider them jobs at all (is game streaming a job?). But I'm sure most people from 1825 wouldn't consider what we do jobs either.

The transitional period is gonna be rough though, and that's probably the toughest part. People who don't want to change will be hit the hardest.
πŸ‘25πŸ”₯12❀6
Does this count as new knowledge? No human knew about these vulnerabilities, and now we do.

Knowledge discovery is a search problem.

Today, as part of our commitment to transparency in this space, we are proud to announce that we have reported the first 20 vulnerabilities discovered using our AI-based "Big Sleep" system powered by Gemini.


https://x.com/argvee/status/1952390039700431184?s=46
πŸ‘6🀑3
living in uncertainty

each of us is wrong sometimes. when a prediction you made turns out to be false, it's a good exercise to back-propagate it to your world model (what did i miss?) and then do a forward pass to understand what other implications this discovery has (what else must be true that i thought was false). usually it's not a big deal. often it is some wrong assumption/bias which impacts a few things. you update your world model and move on

the ai of today was in the realm of fantasy a few years ago. i tried to figure out what we missed, but all plausible theories explaining llms suggest that we missed something very fundamental, something with a lot of profound implications. among all the world model updates i'm considering, the delta is large (the rabbit hole is deep), but i'm also not certain which one is the most accurate. just as an example, i am not aware of a theory that would explain why llms are getting good at coding but won't explain why they will eventually become good at everything else.

moreover, it is dynamic. last year the talk was about prompt engineering, this year it's agents. i expect it to be different next year, and in general i expect things to continue to evolve: more capabilities developed, more work automated, and more of the population waking up

it can be challenging to live like that: we like certainty. you might be used to having some life plan that you follow (say you are a prospective student choosing a major). any plan has to assume that things that were true during planning will stay true. ok, maybe instead of a static plan you have a more dynamic strategy, essentially a function that accepts the current state and returns suggested actions. well, that function is still compressed knowledge of what works and what doesn't, and that knowledge might also get stale as the world changes

certainty is a luxury. i expect it to continue decreasing. i'm not even sure it will come back in our lifetimes, so i'd suggest starting to get used to a world where you can't be very confident about an increasing number of things (this is particularly hard for very smart people who are right most of the time and got used to that)

the only advice i have is to gain some humility, be open-minded, and get ready to constantly adapt to the world changing under our feet
πŸ‘26❀‍πŸ”₯11πŸ”₯6❀2😁2πŸŽ‰1🐳1
I appreciate being mentioned among the top 30 AI people in Uzbekistan. Thanks.
https://yuksalish.org/uz/news_detail/835
πŸ”₯55😁14❀8πŸ‘6πŸ€ͺ1
don't let AIs help you spiral into your craziness. if you are vulnerable to that, don't use sycophantic AIs
https://thezvi.wordpress.com/2025/09/16/ai-craziness-notes/
πŸ‘11πŸ‘»3
what an idiot. i mean, the fact that he is an idiot is not new, but this is a new level
https://x.com/whitehouse/status/1969147079478989220?s=46
😱11πŸ’―8🀯7πŸ‘2😒2❀1😁1
OpenAI released a new eval that measures performance on economically valuable, real-world tasks across 44 occupations.

https://openai.com/index/gdpval/
πŸ”₯18❀3πŸ‘3
πŸ”₯43🀩12❀5πŸ‘5😁2πŸ‘1πŸŽ‰1
You can now connect Slack to Claude. It can search your workspace channels, DMs, and files/gdocs to provide context for deep work.

You can also connect the Claude app to Slack, e.g. ask something in the app and Claude can read your Slack, search for info there, etc.

Video below.

https://x.com/claudeai/status/1973445694305468597?s=46
πŸ‘9πŸ”₯8❀5
πŸ¦€ i recommend spending a year with Rust

i don't think i can explain all the reasons to do that in a way that's both short and clear. most likely i'd lose the reader in the middle of the post before getting to the point. it is only after some prolonged first-hand experience of learning the Rust way that you start getting it.

just trust me on this 😎 go ahead and do yourself a favor

fair warning: the first 6 months can be painful, but we have LLMs now that help a lot
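
to give a tiny, purely illustrative taste of what "the Rust way" means in practice (the function and variable names below are made up for this sketch): the compiler tracks who owns each value and who merely borrows it, and it refuses to compile code that uses a value after it has been moved.

```rust
// ownership vs borrowing in a nutshell: the compiler tracks who owns data
// and who merely borrows it, and rejects code that uses a moved-away value.
fn shout(s: &str) -> String {
    // `s` is only borrowed here; the caller keeps ownership
    s.to_uppercase()
}

fn consume(s: String) -> usize {
    // `s` is moved in; the caller can't use it afterwards
    s.len()
}

fn main() {
    let greeting = String::from("hello, rust");

    let loud = shout(&greeting); // borrow: `greeting` is still usable
    println!("{loud}");

    let n = consume(greeting); // move: `greeting` is gone after this call
    println!("len = {n}");

    // println!("{greeting}"); // uncommenting this is a compile error:
    //                         // "borrow of moved value: `greeting`"
}
```

that constant pushback from the compiler is most of the first-six-months pain, and also most of the payoff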
🫑42❀11πŸ”₯5πŸ‘4🌚4πŸ‘Œ2😭2
haiku 4.5 (just released) is as smart as sonnet 4.0, but it's 2x faster and 3x cheaper. i've been using it in claude code for a while (primarily because of speed) and i can recommend it. i use it more often than sonnet 4.5 and definitely more than opus

https://www.anthropic.com/news/claude-haiku-4-5
πŸ‘23❀10πŸ”₯8
Addressing a seemingly common misunderstanding.

- Sonnet 4.5 is smarter than Opus 4.1.
- Haiku 4.5 is nearly as smart as Sonnet 4.0.

How come? Scaling laws suggest that the intelligence of models grows with scale (cf. the bitter lesson). We increase training scale all the time, so it is not surprising that a newer model can be more intelligent than an older, larger one.
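
To make "scaling laws" slightly more concrete, here is a rough sketch of the Chinchilla-style form from Hoffmann et al. (2022), where N is parameter count, D is training tokens, E is the irreducible loss, and A, B are fitted constants; the exponents are approximate values quoted from memory, so treat them as illustrative:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28
```

Loss falls as a power law in both N and D, and better data and training recipes shift the curve further still.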

Besides, smaller models are:
- much faster, so you are getting more done
- cheaper, so your quota lasts longer
πŸ”₯18πŸ‘7❀2πŸ‘Ύ2
per aspera ad astra! (through hardships to the stars)
πŸ”₯38❀13😍6