Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet - Big Technology Podcast Recap

Podcast: Big Technology Podcast

Published: 2026-01-21

Duration: 34 min

Summary

Demis Hassabis discusses AI's significant advances over the past year, the challenges of achieving AGI, and Google's bet on AI glasses. He emphasizes that breakthroughs in continual learning and memory are needed to reach true general intelligence.

What Happened

In this episode, host Alex Kantrowitz welcomes Demis Hassabis, CEO of Google DeepMind, to discuss the trajectory of AI development and the path towards artificial general intelligence (AGI). Hassabis reflects on the skepticism surrounding AI progress a year ago, noting that internally, DeepMind remained confident in the advancements being made. He mentions that while there were concerns about data limitations, the team found ways to innovate within existing frameworks, suggesting that there is still significant potential for improvement in current AI models.

Hassabis elaborates on the capabilities needed for AGI, highlighting continual learning and better memory. He acknowledges that current large language models (LLMs) have limitations, particularly their inability to retain information beyond a single session. He considers learning, especially across diverse domains, essential to AGI, and sees potential in hybrid systems that combine deep learning with other methods. The conversation turns to the innovative breakthroughs still required, with Hassabis advocating a balance between scaling existing technologies and pursuing new ideas.

Key Questions Answered

What advancements have been made in AI over the past year?

Demis Hassabis emphasizes that DeepMind has seen steady improvements in AI despite external skepticism. He notes that fears of training data running out proved unfounded, as the team found ways to extract further value from existing architectures and data, suggesting substantial remaining headroom within current AI frameworks.

What are the key features needed for AGI?

Hassabis outlines that for a system to be considered AGI, it must exhibit the full range of human cognitive capabilities, including high levels of creativity and problem-solving. He stresses that learning is inseparable from intelligence: AGI should not only solve problems but also generate new theories and ideas, akin to groundbreaking scientific achievements.

How does Google DeepMind view the future of LLMs?

Hassabis expresses confidence in large foundation models as essential components of future AGI systems. He acknowledges that while scaling existing ideas is important, there may also be a need for significant innovations to overcome current limitations. This reflects a commitment to advancing both established paradigms and exploring new approaches in AI research.

What are the limitations of current AI models according to Hassabis?

Hassabis points out that existing AI models, such as LLMs, have limitations in their ability to retain learned information over time, often described as having a 'goldfish brain.' He argues for the necessity of continual learning and better memory systems to enhance AI's effectiveness and to create more personalized and adaptive technologies.

What is the significance of hybrid systems in AI research?

Hassabis mentions the potential of hybrid systems that combine neural networks with other methodologies, such as symbolic reasoning. He cites examples like AlphaGo and AlphaFold, which integrate different techniques to achieve impressive results. This approach may be crucial in advancing AI capabilities towards achieving AGI.