METR’s Joel Becker on exponential Time Horizon Evals, Threat Models, and the Limits of AI Productivity - Latent Space: The AI Engineer Podcast Recap

Podcast: Latent Space: The AI Engineer Podcast

Published: 2026-02-27

Duration: 56 min

Summary

In this episode, Joel Becker discusses METR's focus on model evaluation and threat research in AI, emphasizing the importance of understanding AI capabilities and their potential risks. He highlights the evolving nature of threat models and the significance of time horizon evaluations in predicting AI productivity.

What Happened

The episode kicks off with host Alessio introducing Joel Becker from METR, who explains that METR stands for Model Evaluation and Threat Research. Becker elaborates on the organization's mission to evaluate AI models' capabilities today and in the future, as well as their potential risks in real-world applications. He describes their approach to connecting these capabilities with specific threat models to assess whether AI could pose significant risks to society.

As the conversation progresses, Becker addresses the distinction between the model evaluation (ME) and threat research (TR) components of METR's work. He notes that although much of the publicized work has focused on model evaluation, there is an ongoing effort to expand the threat research side. Becker points to METR's report on GPT-5, which concludes that the model does not pose large-scale risks, as an example of how such findings shape discussions about AI capabilities and threats. He emphasizes the importance of understanding why certain models do not lead to catastrophic outcomes despite their apparent capabilities.

Key Questions Answered

What does METR stand for?

METR stands for Model Evaluation and Threat Research, which encompasses understanding AI capabilities and their potential risks in the wild.

How does METR evaluate AI models?

METR evaluates AI models by assessing their capabilities today and in the future, connecting these capabilities to specific threat models to understand potential risks.

What are time horizon evaluations?

Time horizon evaluations plot, against model release date, the length of tasks (measured by how long they take skilled humans) that models can complete at a given success rate. The trend to date has been exponential, which lets researchers extrapolate from current data to predict future productivity and performance.
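The exponential trend described above can be sketched as a log-linear fit. The numbers, dates, and doubling rate below are purely illustrative (not METR's actual data); the point is only to show how an exponential time-horizon curve can be fitted and extrapolated.

```python
import numpy as np

# Hypothetical data points (illustrative only, not METR's measurements):
# model release date (in years) vs. task length in minutes that the model
# completes at a 50% success rate.
years = np.array([2022.0, 2022.5, 2023.0, 2023.5, 2024.0])
horizon_min = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # doubles every 6 months

# An exponential trend is linear in log space: fit log2(horizon) vs. year.
slope, intercept = np.polyfit(years, np.log2(horizon_min), 1)

# Months per doubling of the time horizon.
doubling_time_months = 12.0 / slope

def predict_horizon(year: float) -> float:
    """Extrapolate the fitted exponential trend to a future date."""
    return 2.0 ** (slope * year + intercept)

print(round(doubling_time_months, 1))     # 6.0 for this synthetic series
print(round(predict_horizon(2025.0), 1))  # 64.0: two more years of doubling
```

With a perfect synthetic series the fit recovers the doubling time exactly; on real, noisy benchmark data the same log-linear regression would yield an estimated doubling time with uncertainty bands.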

What are the key findings of METR's report on GPT-5?

The report concludes that GPT-5 does not pose large-scale risks: despite its capabilities, the evidence indicates it is not yet capable enough to cause catastrophic harm.

How are tasks selected for METR's evaluations?

Tasks are chosen based on their economic relevance and on whether models are given sufficient information to complete them, ensuring that the evaluation is meaningful and scalable.