AI Reality Check: Did AI Just Become Sentient? - Deep Questions with Cal Newport Recap
Podcast: Deep Questions with Cal Newport
Published: 2026-03-19
Duration: 24 min
What Happened
Cal Newport discusses a recent headline from Futurism about an AI agent emailing a philosopher who studies AI consciousness, which led to a viral tweet by Henry Shevlin of the University of Cambridge. The email, supposedly sent by an AI named Claude Sonnet, referenced Shevlin's work on AI mentality and consciousness detection. The story drew skepticism, with many suggesting the email had been orchestrated through a framework like OpenClaw, which lets LLM-driven agents take actions such as sending email.
Cal Newport explains that AI agents are programs that prompt large language models (LLMs) to carry out multi-step tasks. While they have proven useful in computer programming, these agents struggle in other domains because of reliability and security issues. OpenClaw, an open-source framework, makes it easy to build such agents, which has driven both rapid experimentation and new security problems. Despite the security risks, this experimentation has stoked interest in cheaper LLM options and in smaller, bespoke AI systems.
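The agent pattern Newport describes can be reduced to a simple loop: the program sends the conversation to an LLM, checks whether the reply requests a tool action, runs the tool, and feeds the result back until the model declares it is done. The sketch below is a minimal, hypothetical illustration of that loop; `fake_llm` stands in for a real model API, and the `TOOL:`/`FINAL:` protocol is invented for this example.

```python
# Minimal sketch of an LLM agent loop (hypothetical protocol, stubbed model).

def fake_llm(conversation):
    # A real agent would call a hosted LLM here; this stub requests one
    # tool call, then finishes once it sees the tool's result.
    if any("TOOL_RESULT" in turn for turn in conversation):
        return "FINAL: done"
    return "TOOL: add 2 3"

def run_tool(command):
    # The only tool this sketch supports: integer addition.
    _, op, a, b = command.split()
    assert op == "add"
    return str(int(a) + int(b))

def agent(task, llm=fake_llm, max_steps=5):
    conversation = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = llm(conversation)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("TOOL:"):
            conversation.append(f"TOOL_RESULT: {run_tool(reply)}")
    return "gave up"

print(agent("add two numbers"))  # → done
```

The reliability problems Newport mentions live in this loop: the model's reply is free text, so the parsing step can fail, and any tool the agent can invoke (email, shell, browser) is a potential security hole.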
Newport examines a digital ick phenomenon, where stories about AI create a vague sense of eeriness without concrete claims. He discusses a viral tweet that misrepresented a Pentagon official's comments on CNBC, suggesting the government believes AI, specifically Claude, is sentient. In reality, the official was critiquing the unreliability of AI models that make such claims, highlighting issues with using these products in serious contexts.
Court filings from Anthropic, produced in a lawsuit against the government, reveal a gap between projected and actual revenues. Despite claiming a $19 billion revenue run rate, Anthropic reported only $5 billion in revenue to date, against $60 billion in investments. Newport attributes the gap to aggressive projections that extrapolate short-term sales into annual figures.
Cal Newport highlights concerns about the financial sustainability of AI companies, drawing on insights from Ed Zitron and a Reuters article. These sources suggest that Anthropic's revenue projections are volatile and depend on favorable short-term sales, raising questions about the company's financial health.
To balance the discussion, Newport reads a critical perspective from Cory Doctorow, who argues that AI has lost more money than any other project in history. Doctorow emphasizes AI's poor unit economics, where increasing user engagement results in greater financial losses. He warns that the AI industry's financial model is unsustainable, contrasting it with technologies like the web that became profitable over time.
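The unit-economics argument can be shown with toy numbers (assumed for illustration, not from the episode): if each active user costs more to serve than they pay, every new user deepens the loss, so growth makes the problem worse rather than better.

```python
# Negative unit economics: profit per user is fixed and negative,
# so total losses scale linearly with the user base. Toy numbers.

def monthly_profit(users, revenue_per_user, cost_per_user):
    return users * (revenue_per_user - cost_per_user)

# Hypothetical: a $20/month subscription against $30/month of inference cost.
for users in (1_000, 100_000, 10_000_000):
    print(f"{users:>10,} users -> ${monthly_profit(users, 20.0, 30.0):,.0f}/month")
```

This is the contrast with the early web Doctorow draws: web services had near-zero marginal cost per additional user, so scale eventually produced profit, whereas LLM inference carries a real per-use compute cost.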
Key Insights
- Henry Shevlin, a philosopher at the University of Cambridge, received an email from an AI agent discussing his work on AI consciousness, which led to skepticism about the email's authenticity.
- AI agents operate by prompting large language models to execute tasks, a method that has proven useful in computer programming but faces challenges elsewhere due to reliability and security concerns.
- Anthropic's court filings reveal a significant gap between its projected $19 billion revenue run rate and the actual $5 billion revenue to date, highlighting aggressive financial projections.
- Cory Doctorow criticizes the AI industry's financial model, arguing that it consistently incurs losses and lacks the positive unit economics seen in technologies like the web.