A new job/opportunity might be emerging for those who know how to code.
Disclaimer: below is a pure product of my imagination and not a prediction.
AI makes it very tempting for less technical people to make apps without learning to code, i.e. vibe coding. I think the ease and affordability of vibe coding will significantly increase the number of non-technical vibe-coders (NTVCs) in the world, and whenever there is a large change like that, there is an opportunity.
The truth is, models aren't ready yet. They will get there, but I expect there will always be moments when a model gets stuck. I don't think this should be surprising to anyone, because people also essentially get stuck sometimes: they cannot find a bug, or they make a wrong product decision and end up with a failed product. I expect the same things to happen with models, simply because some problems are hard, e.g. if a human's business/solution idea is bad.
So what will an NTVC be left to do? They can't code, so they'll need help. I think this is an emerging opportunity: being a consultant to NTVCs. Their number will grow, and they will need help once their product is large enough for some hackers to start paying attention to it, so they will be ready to pay big sums to save their business.
I think this opportunity is temporary, because the models will eventually get better than humans and a consultant won't be able to justify their fee. But you never know when that moment will come, and we also know that vibe coding is too tempting for some people to wait until it is safe. So there is a window of opportunity to make money while helping these poor souls.
Finally, I'm not 100% sure we will need humans to understand code at all. The thing is, "A computer can never be held accountable for decisions" (IBM, 1970s). So how do you protect your business from vibe-failure? Only a human can be responsible, so I suspect we will still need CTOs: basically, the human responsible for all the crap the LLM generates, who promises to manage the agents properly and fix any problems. Maybe that's the future job, i.e. all of us are CTOs of small or large companies, sometimes multiple, and honestly this future actually sounds really nice.
Video: https://www.youtube.com/watch?v=Bj9BD2D3DzA
Thread: https://x.com/AnthropicAI/status/1905303835892990278
Blog post: https://www.anthropic.com/research/tracing-thoughts-language-model
YouTube: Tracing the thoughts of a large language model
AI models are trained and not directly programmed, so we don't understand how they do most of the things they do. Our new interpretability methods allow us to trace their (often complex and surprising) thinking.
Claude for Education was launched today.
It includes partnerships for both universities and students!
https://www.anthropic.com/education
(Disclaimer: I don't represent Anthropic in any official capacity)
Flying to SF for one day to facilitate a meeting between Anthropic and a large Uzbek company (not naming it, at least until we accomplish something). I'd really like Claude to be the dominant AI in the country. Yeah, ChatGPT/DeepSeek is probably #1, but given the trends at both OpenAI and Anthropic, I will not be surprised at all if we surpass OpenAI this year or next, and/or specialize such that both companies are top players in two adjacent markets. Regardless, it is a worthy North Star goal, so I'm volunteering a bit of my time to do this where there is potential.
This is not the only Uzbek company/org that wants to partner with Anthropic. I am still figuring out what a productive partnership could look like. The challenges are as follows.
- The Uzbek language/market is simply not yet a priority, given that we get to choose from the entire world. I suspect this prioritization is based on the purchasing power / size of the potential market. For example, will people in Uzbekistan spend more than people in Europe? Probably not, so it is lower in the stack rank. That's life, so it requires some patience.
- The time of Anthropic employees is extremely contested. As you can imagine, there are thousands of companies around the world that want to partner too, including the largest companies in the world, operating with billions of dollars. Our sales/GTM teams have to prioritize super aggressively (this kinda applies to me as well), again based on the size of the company/usage/tokens.
- The vision of what a partnership could look like needs to be clear and detailed on the side of the company that wants to partner. The lower you are on the priority list, the more time you need to invest in clarifying your proposal. What exactly do you expect from Anthropic? What can you offer that Anthropic needs? How much data are we talking about? How much spend can you commit? The better the understanding, the less time Anthropic people need to spend explaining.
- This one ^ is hard because it is a chicken-and-egg problem: how can you form a good vision without talking to people, when people don't really have time to talk because we are all in an AI hamster wheel that accelerates exponentially? Also, some areas are still evolving even for us, because the industry is transforming quickly and we don't have time to think through everything.
So it might be a little early for actual partnerships for most companies, but it's still a good idea to start thinking about these things early on, and so I'm slowly but steadily trying to understand what can be done, and when, to maximize Claude's usage in Uzbekistan.
P.S. Airplanes seem like a good time to write posts cause I can't do much work anyway.
Navigating AI
Different people have different attitudes toward AI. Some are excited, some are worried, some think this is the beginning of the end for our species, and most people are still asleep or don't realize the full scale of the upcoming transformation. The variance/amplitude between the floor and the ceiling of expectations is huge right now.
I consider myself closer to the ceiling in terms of the expected scale of impact. It is pretty hard to confidently/accurately predict what the future will look like; personally, I don't belong to any particular group that is certain about the end game, but I'm pretty certain that the future will be very different from the present, i.e. I am certain only about the magnitude of the changes (but also I don't really have enough time to think about this stuff, because there is just so much work and I feel like I'm in a hamster wheel).
I honestly have no idea whether AI will be good or bad for you, the reader. It might go either way, very good or really bad. I fully expect that some people will have a pretty negative attitude toward me/my work at some point in the future, especially those who didn't read this, but oh well, this is just something I've come to terms with.
A question that interests me is: what are the key factors that will determine whether AI's impact on an individual is positive or negative? At the conference I was asked a question whose premise was that things will get worse before they get better. This is true on average, but not necessarily true for an individual, and I think it is an important assumption to expose. I fully expect that there will be people who mostly win, so the question is what you can do to be one of them.
TBH I'm still grappling with this question myself and don't have clear answers. I expect my understanding to evolve and hopefully I'll post something, but here is my line of reasoning for now.
AI is a foundational technology, like electricity and the internet, that will most likely impact all areas of life. Therefore the scale of the changes will be vast. It is also a fast-evolving technology, so the changes will continue to happen for some time. Many assumptions that are true today will become false, and methods that work today will stop working.
I think a useful trait in this regime is adaptability. The world will be continuously changing for the next 10 years, so many skills relevant today will become irrelevant. I expect that it will be critical to be open-minded and ready to change.
I am actually somewhat bad at this: I don't really like changes/transitions. This also gets worse with age: we get more conservative and start appreciating traditions, routine, stability. We have experience that will become irrelevant over time, and many of us will have to fight the inertia of doing what worked before; it will be psychologically hard to admit that we need to change. I expect closed-minded people in denial to be impacted the worst.
From this perspective, young people are in a better position. Younger people have fewer habits/assumptions to unlearn. Students' brains are still in learning mode. You don't really have much of a choice but to embrace the future.
Anyway, these are some unfinished thoughts. I need to get off the plane now, and I'd rather post this in its crude form than delay.
By the way, Claude is pronounced "Klod" ✅
Incorrect: "Klaud", "Klaude", "Klaudi" ❌
Claude 4
https://x.com/AnthropicAI/status/1925591505332576377
Livestream:
https://www.youtube.com/live/EvtPBaaykdo?si=E162sccwwQ1kBZ1l
Fun read
https://simonwillison.net/2025/May/25/claude-4-system-card/
In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals. In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts.
Simon Willison's Weblog: System Card: Claude Opus 4 & Claude Sonnet 4
Direct link to a PDF on Anthropic's CDN because they don't appear to have a landing page anywhere for this document. Anthropic's system cards are always worth a look.
We started rolling out Voice mode (Beta) today. It will take some time to roll it out to everyone.
I have a strong need for voice mode for some reason. I use it in the car, and sometimes I go for a walk and brainstorm something. I had been using ChatGPT from time to time while at Anthropic, only because they have voice, and I stopped using ChatGPT completely as soon as I got access to the voice mode internally.
There is a small chance that you already have it, but otherwise it will show up soon.
Proof that MCP achieved ultimate success
https://github.com/ConAcademy/buttplug-mcp
Tips and tricks. It has my comments too.
https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135ad8871e7658.pdf
Anthropic leads AI spending among startups (more than 2x OpenAI's spending in May), which suggests Claude is the foundational model of choice for startups building with AI. However, OpenAI outpaced Anthropic's spending by 4x in the enterprise, potentially reflecting OpenAI's longer market presence and breadth of offerings.
https://www.brex.com/journal/brex-benchmark-may-2025
Had a performance review today. Things are going well, I'm happy. I've carved out an area that is critical to the company and I lead it now. It is still pure engineering, pretty low level, far from the actual science, but enabling it.
Opus 4 is good. It's the first model that saves hours of work for me. Of course it has a long way to go, both in terms of skills and speed/price, but I use it a lot when I'm not working. It is clear to me that AI will surpass me one day, at the very least due to Moore's law. I'm excited for the future.
Had a survey today asking how I feel about AI automating coding. I feel fine, for two reasons. One: everything has a beginning and an end. That's just how life is, and I merely have the privilege to participate. Second: it is the creativity and ideas that I enjoy expressing, and coding is the easiest way to express them and see them working. A little later I'll continue expressing myself in a higher-level language, and you can think of Claude as a compiler. The fun will continue.
Sutskever says our brains are biological computers, and therefore AI will be able to do anything that we can do, and beyond.
I don't know if this is true, but I also don't see why not, so I can't rule it out.
https://youtu.be/zuZ2zaotrJs?si=sxpw9O8rEbp6cVQd
YouTube: Ilya Sutskever, U of T honorary degree recipient, June 6, 2025
Meet #UofT honorary degree recipient Ilya Sutskever, recognized for his excellence in the academy and outstanding service to the public good, for his trailblazing work and global impact as a scholar and visionary in artificial intelligence.
One of the replies in the above post motivated me to write the following.
I think one reason that might explain the lack of progress in understanding the human brain is that most of us, including neuroscientists, are fond of ourselves and think of our species as something special. Such an attitude might lead to hesitation to explore the possibility that we are much simpler: just organisms with an attached computer programmed in the language of neurons and synapses, doing statistical modeling and making statistical evaluations and predictions all the time. If this turns out to be true, it would be uncomfortable/shocking. A neuroscientist who tried to prove such a thing would likely be shunned and lose their career/respect.
However, I don't see why this is necessarily false. We are biased to think it is false, but objectively, the fact that AI can do many of the things we can do, and keeps improving, is an argument toward it being true.
I'm not qualified to talk about the societal implications of all work being automatable, or what to do about it. I only know that this is incredibly important to think about, so I joined a company that seems to care about these things and talks about them openly, so that I can contribute to that indirectly. I am doing my part, given my skill set.
The second large implication is about this reality itself. If brains can be simulated, how do we know we aren't inside a giant computer? I think the idea is exciting but impractical. I've noticed that this line of thinking leads me to a mental state where I become unproductive in the real world, the only world I have access to, so it seems net negative. I find myself happier when I stay grounded.
One argument in my head against us being in a simulation is that there is still a difference between AI's circumstances and ours. I think there would need to be some motivation to simulate us. In the case of AI, we employ it to do work for us, so it has access to our world; but we don't seem to have access to some outside world that we'd be doing work for. Or the outcome must be so abstract/nebulous that it is beyond our understanding anyway.
I have some thoughts on this subject, but they are incomplete and most people would probably find them nonsensical.