Why I don’t think AGI is right around the corner - Dwarkesh Podcast Recap
Podcast: Dwarkesh Podcast
Published: 2025-07-03
Duration: 14 min
Summary
The episode discusses skepticism regarding the imminent arrival of AGI, highlighting the limitations of current AI models, particularly their lack of continual learning capabilities. The host argues that while LLMs can handle specific tasks, they struggle to adapt and improve over time like humans do.
What Happened
In this episode, the host narrates a blog post titled 'Why I Don't Think AGI is Right Around the Corner,' written in June 2025. He reflects on his discussions with various podcast guests about the timelines for achieving AGI, with opinions varying from two to twenty years. While he acknowledges the impressive capabilities of today's LLMs, he argues that Fortune 500 companies are not using them for significant workflow transformations, primarily due to the inherent challenges in extracting human-like labor from these models.
The host emphasizes the critical issue of continual learning, noting that despite their strengths, LLMs do not improve over time in the same organic manner that humans do. He uses the analogy of teaching a child to play the saxophone, explaining how human learners adapt through experience, whereas LLMs remain fixed at whatever capabilities they acquired during training. He suggests that the lack of high-level feedback mechanisms and the inability to retain contextual understanding between sessions contribute to this limitation. Although he expresses optimism about the future potential of AI, he believes that meaningful advances in continual learning are still years away.
Key Insights
- LLMs are impressive but lack the ability to learn and adapt like humans.
- The current applications of LLMs in corporate settings are limited by their inability to provide high-level feedback.
- Future breakthroughs in AI will likely come from solving the problem of continual learning.
- While the host is skeptical about immediate AGI developments, he remains optimistic about long-term advancements in AI.
Key Questions Answered
What are the current limitations of large language models (LLMs)?
The host describes LLMs as quite capable of performing specific tasks, but they fall short in their ability to continuously learn and adapt over time. For example, he mentions that while he can co-write an essay with an LLM, the model often struggles to provide useful suggestions initially and fails to retain learned preferences beyond the session. This lack of adaptability is a significant barrier to using LLMs for more complex, human-like work.
How does the host compare human learning to AI learning?
The host draws a vivid analogy between teaching a child to play the saxophone and the limitations of LLMs. He explains that human learners improve through trial and error, adjusting their methods based on feedback and personal experience. In contrast, LLMs cannot learn from their mistakes in a lasting way; they rely on static prompts and lack the capacity for ongoing improvement, resulting in a fundamentally different learning process.
What is the host's perspective on the timeline for achieving AGI?
The host expresses skepticism about the immediate arrival of AGI, suggesting that many predictions are overly optimistic. He mentions discussions with podcast guests who predict AGI within two to twenty years, but he believes that the complexities of continual learning and the current limitations of AI models mean that significant breakthroughs are unlikely to occur within that timeframe. Instead, he posits that while AI may automate certain tasks, true AGI remains further off.
What role does continual learning play in the future of AI, according to the host?
Continual learning is portrayed as a critical factor for the future success of AI models. The host argues that once we solve the issue of continual learning, we could see a significant leap in the capabilities of AI, leading to what might resemble an intelligence explosion. He believes that AIs could learn across different applications and contexts if they achieved this capability, allowing them to become much more effective in various roles across the economy.
What are the host's thoughts on the predictions made by his podcast guests regarding AI?
The host reflects on the predictions made by researchers he interviewed, including expectations of reliable computer-use agents by the end of next year. He is skeptical of these forecasts, particularly regarding the practical application of AI to complex tasks like managing emails and invoices. While he acknowledges that current AI tools exist, he believes they are not yet capable of performing at the level his guests anticipate, highlighting a gap between optimistic predictions and the current technological reality.