The a16z Show
Andreessen Horowitz
Dwarkesh and Ilya Sutskever on What Comes After Scaling
1 hour 32 minutes Posted Dec 15, 2025 at 11:00 am.
Show notes
AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice?
In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, co-founder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human-style generalization remains far ahead.
Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI-driven economy could look like.