What's the path to AGI? A conversation with Turing Co-founder and CEO Jonathan Siddharth - Gradient Dissent: Conversations on AI Recap

Podcast: Gradient Dissent: Conversations on AI

Published: 2024-11-07

Duration: 55 min

Guest: Jonathan Siddharth

Summary

Jonathan Siddharth, CEO of Turing, discusses the role of human intelligence and coding data in advancing AGI, explaining how Turing leverages a global pool of software engineers to provide high-quality training data for AI models.

What Happened

Jonathan Siddharth, CEO and Co-founder of Turing, discusses the company's role in accelerating AGI advancement and deployment, highlighting how Turing provides essential coding data for training large language models (LLMs). Siddharth explains that while compute power has advanced significantly, the bottleneck now is human intelligence, which Turing addresses by leveraging a global network of vetted software engineers. He elaborates on how coding proficiency in LLMs extends beyond code generation to improve reasoning, logic, and symbolic tasks.

Siddharth also touches on Turing's business model, which combines supplying coding data for LLM training with building applications for Fortune 500 companies. He emphasizes the importance of coding data, likening teaching a model to code to teaching it to fish: the skill generalizes, enabling the model to perform a wide range of tasks, including math and logical reasoning. Siddharth describes Turing's approach to finding and vetting talent worldwide, which allows the company to provide high-quality data efficiently.

The conversation delves into Turing's adaptation to the evolving AI landscape: the company started as a tech services business focused on remote talent sourcing and has since emerged as a key player in the LLM ecosystem. Siddharth reflects on the unexpected opportunity that arose when AI labs recognized the value of coding data, prompting Turing to leverage its existing infrastructure to meet this demand.

He discusses the role of process supervision in training models, which involves not only supervising outcomes but also the reasoning chain leading to those outcomes. This ensures that the reasoning process aligns with human logic, thus enhancing the model's efficiency and accuracy.
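The distinction between supervising only outcomes and supervising the reasoning chain can be sketched with a toy scoring function. This is purely illustrative: the step labels and scoring rules below are hypothetical stand-ins for what, in practice, would be learned reward models trained on human step-level judgments.

```python
def outcome_supervision(final_answer, correct_answer):
    """Score only the final result; intermediate reasoning is ignored."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_supervision(steps, step_labels):
    """Score each reasoning step, rewarding chains that stay sound throughout.

    step_labels[i] is True when a (hypothetical) rater judged step i valid.
    The chain's score is the fraction of valid steps.
    """
    if not steps:
        return 0.0
    return sum(step_labels) / len(steps)

# A chain that reaches the right answer via a flawed middle step:
steps = [
    "2 + 3 = 5",            # sound
    "5 * 2 = 11",           # flawed arithmetic
    "so the answer is 10",  # correct answer, reached inconsistently
]
labels = [True, False, True]

print(outcome_supervision("10", "10"))     # 1.0  -- the flaw goes unnoticed
print(process_supervision(steps, labels))  # 0.67 -- the flaw is penalized
```

The example shows why process supervision aligns the reasoning itself with human logic: outcome supervision gives full credit to a chain with a broken step, while step-level scoring penalizes it.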

Siddharth shares insights on the future of AI training, acknowledging the potential of synthetic data but emphasizing the irreplaceable value of human intelligence in improving model performance. He notes that while synthetic data can amplify human-generated data, it cannot fully replace it, especially in tasks requiring nuanced understanding and reasoning.

The episode concludes with discussions on enterprise applications of generative AI, particularly co-pilots for coding, underwriting, and claims processing. Siddharth highlights the potential for significant productivity gains in these areas, though most applications are still in the proof-of-concept stage.

Siddharth expresses optimism about the future of AI, envisioning more complex tasks being automated and AI becoming an integral part of enterprise workflows. He believes that as these systems are deployed at scale, they will lead to substantial efficiency improvements and new opportunities for businesses.

Key Insights