#232 - ChatGPT Ads, Thinking Machines Drama, STEM - Last Week in AI Recap

Podcast: Last Week in AI

Published: 2026-01-28

Duration: 1 hr 41 min

Summary

This episode dives into the latest in AI news, focusing on the implications of open-source model releases, the intersection of AI and national security, and ongoing discussions about the potential risks of authoritarian capture in AI development.

What Happened

In this episode, hosts Andrey Kurenkov and Jeremy Harris reflect on the evolving AI landscape, particularly open-source model releases and the ongoing drama surrounding Thinking Machines. They explore how recent developments in AI technology could impact society, especially with respect to national security and governance, and emphasize the importance of understanding these dynamics in light of historical authoritarian regimes and the unique challenges posed by AI.

Jeremy introduces a listener's question about authoritarian lockdowns and the role of AI in enabling oppressive regimes. He explains that the concern centers on AI's potential to enhance surveillance and control, making it harder for populations to resist authoritarian rule. The hosts survey the current state of AI development across various labs and express caution about the future: while competition among labs appears relatively even today, there is a looming risk that one entity could gain a decisive advantage, with potential authoritarian outcomes if superintelligence emerges.

Key Questions Answered

What are the risks of open-source AI models?

The hosts discuss how open-source AI model releases are a double-edged sword: they promote transparency and collaboration, but the same models could be misused for malicious purposes. Balancing accessibility against safety remains a central tension in these discussions.

How does AI influence national security?

Jeremy highlights that AI technologies are increasingly intertwined with national security frameworks. The advancement of AI could enable more effective surveillance and military applications, raising ethical and strategic concerns about their deployment and the potential for misuse.

What is authoritarian capture in the context of AI?

The concept of authoritarian capture refers to the risk that advanced AI technologies could empower oppressive regimes to maintain control over their populations. Jeremy explains that if a company develops superintelligent AI, it could wield power similar to that of a nation-state, complicating efforts to resist authoritarian governance.

How are AI labs currently competing?

The hosts note that competition among AI labs is currently relatively even, which may help mitigate the risks of monopolistic control over AI technologies. However, they caution that this balance may not last, and the possibility of one lab achieving a decisive breakthrough remains a concern.

What are the implications of AI for democratic societies?

Jeremy raises the question of how AI could reshape democratic societies. While there is potential for AI to enhance governance and public services, there is also a risk that it could be used to undermine democratic processes, as governments may leverage AI for surveillance and manipulation, challenging civil liberties.