Navigating AI
Different people have different attitudes to AI. Some are excited, some are worried, some think this is the beginning of the end for our species, and most people are still asleep or don't realize the full scale of the upcoming transformation. The variance between the floor and ceiling of expectations is huge right now.
I consider myself closer to the ceiling in terms of expected scale of impact. It is pretty hard to confidently or accurately predict what the future will look like. Personally, I don't belong to any particular group that is certain about the end game, but I'm pretty certain the future will be very different from the present, i.e. I am certain only about the magnitude of the changes (but I also don't really have enough time to think about this stuff, because there is just so much work and I feel like I'm in a hamster wheel).
I honestly have no idea whether AI will be good or bad for you 🫵🏼, the reader. It might go either way: very good or really bad. I fully expect that some people will have a pretty negative attitude toward me and my work at some point in the future, especially those who didn't read this, but oh well, this is just something I've come to terms with.
A question that interests me is: what are the key factors that will determine whether AI's impact is positive or negative for a given individual? At a conference I was asked a question premised on things getting worse before they get better. That is true on average, but not necessarily true for an individual, and I think it is an important assumption to expose. I fully expect that there will be people who mostly win, so the question is what you can do to be one of them.
TBH I'm still grappling with this question myself and don't have clear answers. I expect my understanding to evolve and hopefully I'll post something, but here is my line of reasoning for now.
AI is a foundational technology, like electricity or the internet, that will most likely impact all areas of life. The scale of the changes will therefore be vast. It is also a fast-evolving technology, so the changes will keep coming for some time. Many assumptions that are true today will become false, and methods that work today will stop working.
I think a useful trait in this regime is adaptability. The world will be changing continuously for the next 10 years, and many skills relevant today will become irrelevant. I expect it will be critical to stay open-minded and be ready to change.
I am actually somewhat bad at this: I don't really like changes and transitions. It also gets worse with age: we get more conservative and start appreciating traditions, routine, stability. We have experience that will become irrelevant over time, and many of us will have to fight the inertia of doing what worked before; it will be psychologically hard to admit that we need to change. I expect closed-minded people in denial to be impacted the worst.
From this perspective, young people are in a better position. They have fewer habits and assumptions to unlearn, and students' brains are still in learning mode. You don't really have much of a choice but to embrace the future.
Anyway, these are some unfinished thoughts. I need to get off the plane now, and I'd rather post this in crude form than delay.
By the way, Claude is pronounced "Klod" ✅
Incorrect: "Klaud", "Klaude", "Klaudi" ❌
Claude 4
https://x.com/AnthropicAI/status/1925591505332576377
Livestream:
https://www.youtube.com/live/EvtPBaaykdo?si=E162sccwwQ1kBZ1l
Fun read
https://simonwillison.net/2025/May/25/claude-4-system-card/
In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals. In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts.
Simon Willison's Weblog
System Card: Claude Opus 4 & Claude Sonnet 4
Direct link to a PDF on Anthropic's CDN because they don't appear to have a landing page anywhere for this document. Anthropic's system cards are always worth a look, and …
We started rolling out Voice mode (Beta) today. It will take some time to roll it out to everyone.
I have a strong need for voice mode for some reason. I use it in the car, and sometimes I go for a walk and brainstorm something. In the past, while at Anthropic, I kept using ChatGPT from time to time only because they had voice, and I stopped using ChatGPT completely as soon as I got access to the voice mode internally.
There is a small chance that you already have it, but otherwise it will show up soon.
Proof that MCP achieved ultimate success
https://github.com/ConAcademy/buttplug-mcp
GitHub
GitHub - ConAcademy/buttplug-mcp: Buttplug.io Model Context Protocol (MCP) Server
Buttplug.io Model Context Protocol (MCP) Server. Contribute to ConAcademy/buttplug-mcp development by creating an account on GitHub.
Tips and tricks. Has my comments too.
https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135ad8871e7658.pdf
Anthropic leads AI spending among startups, at more than 2x OpenAI's spending in May, suggesting Claude is the foundation model of choice for startups building with AI. However, OpenAI outpaced Anthropic by 4x in the enterprise, potentially reflecting OpenAI's longer market presence and breadth of offerings.
https://www.brex.com/journal/brex-benchmark-may-2025
Brex
Brex data: Anthropic tops startup AI spend
The Brex Benchmark shows how Brex customers are spending on SaaS and AI tools. Check out insights using Brex data from May 2025.
Had a performance review today. Things are going well, I'm happy. I've carved out an area that is critical to the company and I lead it now. It is still pure engineering, pretty low level, far from the actual science, but enabling it.
Opus 4 is good. It's the first model that saves hours of work for me. Of course it has a long way to go, both in skills and in speed/price, but I use it a lot when I'm not working. It is clear to me that AI will surpass me one day, at the very least due to Moore's law. I'm excited for the future.
Had a survey today asking how I feel about AI automating coding. I feel fine, for two reasons. One: everything has a beginning and an end; that's just how life is, and I merely have the privilege to participate. Two: what I enjoy is expressing creativity and ideas, and coding is the easiest way to do that and see them working. A little later I'll continue expressing myself in a higher-level language, and you can think of Claude as a compiler. The fun will continue.
Sutskever says our brains are biological computers, and therefore AI will be able to do anything that we can do, and beyond.
I don't know if this is true, but I also don't see why not, so I can't rule it out.
https://youtu.be/zuZ2zaotrJs?si=sxpw9O8rEbp6cVQd
YouTube
Ilya Sutskever, U of T honorary degree recipient, June 6, 2025
Meet #UofT honorary degree recipient Ilya Sutskever, recognized for his excellence in the academy and outstanding service to the public good, for his trailblazing work and global impact as a scholar and visionary in artificial intelligence.
Read the U of…
One of the replies in the above post motivated me to write the following.
I think one reason that might explain the lack of progress in understanding the human brain is that most of us, including neuroscientists, are fond of ourselves and think of our species as something special. Such an attitude might lead to hesitation to explore the possibility that we are much simpler: just organisms with an attached computer programmed in the language of neurons and synapses, doing statistical modeling and making statistical evaluations and predictions all the time. If this turns out to be true, it would be uncomfortable, even shocking. A neuroscientist who tried to prove such a thing would likely be shunned and lose their career and respect.
However, I don't see why this is necessarily false. We are biased to think it is false, but objectively, the fact that AI can do many things we can do, and keeps improving, is an argument for it being true.
I'm not qualified to talk about the societal implications of all work becoming automatable, or what to do about it. I only know that this is incredibly important to think about, so I joined a company that seems to care about these things and talks about them openly, so that I contribute to that indirectly. I am doing my part, given my skill set.
The second large implication is about this reality itself. If brains can be simulated, how do we know we aren't inside a giant computer? I think the idea is exciting but impractical. I've noticed that this line of thinking leads me to a mental state where I become unproductive in the real world, the only world I have access to, so it seems net negative. I find myself happier when I stay grounded.
One argument in my head against us being in a simulation is that there is still a difference between AI's circumstances and ours. There would need to be some motivation to simulate us. In the case of AI, we employ it to do work for us, so it has access to our world; but we don't seem to have access to some outside world that we'd be doing work for. Or else the goal must be so abstract and nebulous that it is beyond our understanding anyway.
I have some thoughts on this subject, but they are incomplete, and most people would probably find them nonsensical.
Now you can build your own Claude-based agents using the Claude Code SDK. The name is a bit odd, but basically CC's underlying agentic loop now has programmatic access. It's headless CC.
https://docs.anthropic.com/en/docs/claude-code/sdk
Claude Docs
Agent SDK overview
Build production AI agents with Claude Code as a library
GitHub itself is now a (remote) MCP server
https://github.blog/changelog/2025-06-12-remote-github-mcp-server-is-now-available-in-public-preview/
The GitHub Blog
Remote GitHub MCP Server is now in public preview - GitHub Changelog
Connect AI agents to GitHub tools and context with OAuth, one-click setup, and automatic updates with GitHub's hosted server.
If you have the Pro plan (~$17/mo), then Claude Code is included.
To use:
1. Sign up for Claude Pro
2. Download Claude Code (npm install -g @anthropic-ai/claude-code)
3. Run claude, then /login to auth with your Claude Pro account
Note: You may need to run claude update to make sure you're on the latest version.
Claude Code on the Pro plan is best suited for shorter coding sprints (~1-2 hours) in small codebases. For professional devs who want to use Claude Code as a daily driver, the Max plans are built for you.
https://x.com/_catwu/status/1930307574387363948
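The setup steps above, condensed into shell commands (the package name and subcommands are taken from the post itself; this is a sketch, not official install docs):

```shell
# Install the Claude Code CLI globally (requires Node.js and npm)
npm install -g @anthropic-ai/claude-code

# Make sure you're on the latest version
claude update

# Start an interactive session; inside it, type /login
# to authenticate with your Claude Pro account
claude
```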
AI Fluency. Anthropic published a new course on how to collaborate with AI systems effectively, efficiently, ethically, and safely.
The video below is a trailer. It is a part of YouTube playlist so you can just continue watching on YouTube.
https://youtu.be/-UN9sNqQ0t4?si=4-fvHKjO4wulFfuv
YouTube
AI Fluency: Framework & Foundations Course Trailer
A trailer of AI Fluency: Framework & Foundations, a course developed by Anthropic, Prof. Rick Dakan (Ringling College of Art and Design) and Prof. Joseph Feller (University College Cork).
View the full free course, including all videos, exercises, and resources…
The CTOs of Palantir and Meta, plus OpenAI's CPO and former CRO, are joining the US Army Reserve as Lieutenant Colonels.
Yikes
https://www.army.mil/article/286317
…Initiative, which aims to make the force leaner, smarter, and more lethal.
U.S. Army
Army Launches Detachment 201: Executive Innovation Corps to Drive Tech Transformation
New Executive Innovation Corps brings top tech talent into the Army Reserve to bridge the commercial-military tech gap, with four tech leaders set to jo...