Benedict Evans has been calling tech shifts for decades. Now he says forget the hype: AI isn’t the new electricity. It’s the biggest change since the iPhone, and that’s plenty big enough.
Featured clips
- Your Most Controversial Take On AI?
- How To Learn Pattern Matching And Spot Trends
- Thinking By Writing
- Who Will Win The AI Race?
We talk about why everyone gets platform shifts wrong, where Google’s actually vulnerable, and what real people do with AI when nobody’s watching.
Available Now: Apple Podcasts | Spotify | YouTube | Transcript
Evans sees patterns others don’t. This conversation will change how you think about what’s actually happening versus what everyone says is happening.
Key Ideas
- AI is the biggest platform shift since the iPhone, but it’s only that. It’s not a civilization-altering technology like electricity, because platform shifts happen every 10-15 years and then become just software.
- The data advantage incumbents supposedly hold is illusory: LLMs need vast amounts of generalized text, which is readily available, so data is effectively a commodity.
- ChatGPT has captured the brand position like Google did for search, but the underlying models are becoming commodities with no clear product differentiation.
- Only about 10% of people use AI daily and another 15-20% weekly, while 20-30% tried it and didn't get it. That adoption gap persists despite free access, because people struggle to map AI's capabilities onto their actual tasks.
- Historical platform shifts show that incumbents tend to bolt the new technology on as a feature rather than accept a fundamental change. Kodak actually went all-in on digital cameras, yet still failed because the business model shifted out from under it.
- Regulation that treats AI like weapons creates explicit trade-offs: if you make it hard to build models and start companies, you can’t complain when innovation happens elsewhere.
- Current AI systems have “zero value for quantitative analysis” because error rates remain at dozens per page rather than approaching the near-zero threshold needed for reliable use.
- The feedback-loop problem means AI can generate endless variations but struggles with true originality, because variance is penalized in training. AlphaGo escaped this only because it had an external scoring system to optimize against.
- Meta and Amazon want LLMs to become commodity infrastructure sold at cost, so they can differentiate their platforms on top; OpenAI, by contrast, needs the models themselves to retain value.
- Writing is thinking, and delegating writing to AI means missing the chance to think clearly. Students using AI for homework avoid the mental work that builds reasoning skills.
Reminder: all opinions are those of the guest!

