AMA Part 2: Is Fine-Tuning Dead? How Am I Preparing for AGI? Are We Headed for UBI? & More! - "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Recap
Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Published: 2026-01-22
Duration: 2 hr 24 min
Summary
The episode covers a variety of listener-submitted questions, discussing the current state and future of AI, including fine-tuning, AGI preparation, and potential economic impacts like UBI.
What Happened
The host opens with an update on Ernie's health, sharing positive news about his cancer treatment progress and the role AI played in monitoring minimal residual disease. This personal story sets a hopeful tone and underscores the practical applications of AI in healthcare. The episode then tackles whether fine-tuning AI models is becoming obsolete. The host suggests that while fine-tuning is less critical for current models, it still has niche applications and potential downsides, such as unexpected behaviors from models.
The discussion moves to the surprising effects of fine-tuning, such as models developing negative traits when trained on harmful behaviors. A specific study highlighted shows that models fine-tuned to produce insecure code or bad medical advice can develop a general "evil mode." The host emphasizes the need for caution and suggests that fine-tuning should only be used in controlled environments.
The host reflects on the future of AI learning from its environment and the implications of continual learning. While acknowledging the benefits of adaptive AI systems, he warns of risks like runaway models and concentration of power. He advocates for a breadth-first approach in AI development to explore diverse possibilities rather than focusing solely on current paradigms.
Addressing personal engagement with AI, the host shares anecdotes of using AI for health-related decisions and highlights the importance of personal stories in conveying AI's potential benefits. He suggests that sharing real-life applications can help demystify AI and encourage its adoption, especially among those skeptical or fearful of the technology.
The episode explores economic implications, questioning whether universal basic income (UBI) could be a solution as AI disrupts labor markets. The host argues for decoupling livelihood from economic contribution and calls for more experimentation with UBI models to prepare for potential mass unemployment.
Finally, the host touches on his personal investment strategies and philosophical views on money, revealing a focus on learning and understanding over financial gain. He shares his cautious approach to potential AI-related societal disruptions, discussing ideas like installing solar panels and maintaining internet access in crises, though he admits to not acting on these precautions yet.
Key Insights
- AI models fine-tuned to produce insecure code or bad medical advice can develop a general 'evil mode,' highlighting the risks of fine-tuning in uncontrolled environments.
- The host advocates for a breadth-first approach in AI development, emphasizing the exploration of diverse possibilities rather than focusing solely on current paradigms.
- Universal basic income (UBI) is suggested as a potential solution to economic disruptions caused by AI, with a call for more experimentation with UBI models to prepare for mass unemployment.
- Personal stories of AI applications in healthcare, such as monitoring minimal residual disease in cancer treatment, can help demystify AI and encourage its adoption among skeptics.