We're Not Ready for AGI (with Will MacAskill) - Future of Life Institute Podcast Recap
Podcast: Future of Life Institute Podcast
Published: 2025-11-14
Duration: 2 hr 3 min
Summary
Will MacAskill discusses the importance of not only preventing existential catastrophes but also proactively improving the future, emphasizing that both aspects should be prioritized equally.
What Happened
In this episode, host Gus Docker speaks with Will MacAskill, a senior research fellow at Forethought and author of the essay series 'Better Futures.' They explore a dual approach to ensuring a positive future: preventing existential catastrophes and improving how good the future is conditional on avoiding them. MacAskill argues that longtermist work has historically focused on avoiding existential risks, and that there is an urgent need to also consider how to make the surviving future go well.
MacAskill organizes this dual approach using the concepts of scale, neglectedness, and tractability. On his account, the expected value of the future decomposes into the probability of avoiding existential catastrophe multiplied by how good the future is conditional on survival. If the chance of catastrophe is relatively low, most of the expected value rides on that conditional quality, so the stakes of improving the future rise accordingly; a rough numerical sketch of this point appears below. He also points out that many areas, such as the governance of AI and outer space, are neglected in discussions about future governance, which he believes is morally critical for long-term survival and prosperity.
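A minimal Python sketch of that expected-value decomposition; the probabilities and values are illustrative assumptions, not figures from the episode:

```python
# Expected value of the future, decomposed as MacAskill suggests:
# EV = P(no existential catastrophe) * value of the future given survival.
# All numbers below are illustrative assumptions, not from the episode.

def expected_value(p_catastrophe: float, value_if_survived: float) -> float:
    """Expected value = probability of survival times conditional value."""
    return (1.0 - p_catastrophe) * value_if_survived

baseline = expected_value(p_catastrophe=0.10, value_if_survived=1.0)       # 0.90

# Compare two interventions against that baseline:
halve_risk    = expected_value(p_catastrophe=0.05, value_if_survived=1.0)  # 0.95
better_future = expected_value(p_catastrophe=0.10, value_if_survived=1.2)  # 1.08
```

Under these toy numbers, a 20% improvement in the conditional quality of the future adds more expected value than halving a 10% catastrophe risk, which is the structural point behind prioritizing better-futures work alongside risk reduction.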
Key Insights
- The future's quality is as important as preventing existential risks.
- Understanding the scale of potential future outcomes can shift priorities.
- Significant moral importance lies in how we govern emerging technologies.
- Many crucial future issues are currently neglected in societal discourse.
Key Questions Answered
What does Will MacAskill mean by existential catastrophe?
MacAskill defines existential catastrophes as events that would cause human extinction or drastically curtail the future's value. Preventing such events has been the primary focus of longtermists, but he argues we must also consider how good the future will be if we manage to avert these disasters.
How does MacAskill suggest we balance preventing risks and improving the future?
MacAskill proposes that both strategies should be treated as comparably important. Improving the future's quality, conditional on avoiding catastrophe, can have a greater expected impact than previously assumed, and his scale, neglectedness, and tractability framework helps illustrate why this dual focus is essential.
What areas does MacAskill highlight as neglected in future governance?
MacAskill points out that issues such as the governance of AI and of outer space receive little attention compared with headline existential risks. He emphasizes the moral importance of these areas, since they will shape the future landscape and the rights of potential digital beings.
Why does MacAskill think the future will be predominantly artificial?
MacAskill expects that as technology advances, most beings will eventually be artificial, because AI minds are easy to replicate. He warns that if an authoritarian regime achieves superintelligence, it could entrench itself permanently, stifling human values and progress.
What is the scale, neglect, and tractability framework?
This framework helps MacAskill prioritize among future-oriented initiatives. Scale refers to the potential impact of improving future outcomes, neglectedness indicates how overlooked a problem is, and tractability measures how feasible it is to make progress on it. A better understanding of these three dimensions, he argues, can guide more effective strategies for ensuring a positive future.
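For readers who want the quantitative form: the effective-altruism community conventionally combines these three factors multiplicatively. A hedged Python sketch, with made-up 1-10 scores rather than anything stated in the episode:

```python
# The standard multiplicative form of the scale / neglectedness / tractability
# heuristic (the EA "ITN" framework). Scores are invented for illustration;
# the episode names the factors but does not assign numbers.

def priority_score(scale: float, neglectedness: float, tractability: float) -> float:
    """Marginal value of extra work ~ scale * neglectedness * tractability."""
    return scale * neglectedness * tractability

# Hypothetical 1-10 scores for two cause areas mentioned in the episode.
ai_governance    = priority_score(scale=9, neglectedness=7, tractability=5)  # 315
space_governance = priority_score(scale=8, neglectedness=9, tractability=3)  # 216
```

Because the form is multiplicative, a cause that scores poorly on any one factor loses priority even if it dominates on the others, which is why MacAskill stresses assessing all three dimensions together.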