#326 Zuzanna Stamirowska: Inside Pathway's AI Systems That Work with Live, Real-Time Data - Eye On A.I. Recap

Podcast: Eye On A.I.

Published: 2026-03-11

Duration: 1 hr 8 min

Guest: Zuzanna Stamirowska

Summary

Zuzanna Stamirowska discusses Pathway's groundbreaking post-transformer AI architecture, Baby Dragon Hatchling (BDH), which incorporates memory and reasoning capabilities to overcome the limitations of traditional large language models (LLMs).

What Happened

Pathway CEO Zuzanna Stamirowska introduced the company's post-transformer architecture, Baby Dragon Hatchling (BDH), which is designed to address critical limitations of existing AI models, such as the lack of memory and reasoning capabilities. Unlike traditional LLMs, which operate without retaining context across tasks, BDH uses a graph structure in which the connections between neurons, or synapses, evolve based on relevance and activity. This design aims to mimic brain-like dynamics, allowing for better reasoning and adaptability over time.

One of BDH's key innovations is its ability to overcome the 'Groundhog Day' limitation of transformers, where models lose all memory after completing a task. By retaining memory and leveraging locality in neuron interactions, BDH ensures consistent reasoning and minimizes hallucinations during inference. This makes it particularly suited for tasks requiring long-term coherence and adaptability to changing inputs.

Stamirowska explained how BDH employs Hebbian learning principles, where connections strengthen based on activity and relevance, creating a dynamic and efficient topology. The architecture's sparsity also enables it to perform efficiently on GPUs, with computational advantages that allow for reasoning without requiring massive scale.
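To make the Hebbian idea concrete, here is a minimal, purely illustrative sketch of the principle described above: connections between co-active neurons strengthen, unused connections decay, and pruning keeps the resulting topology sparse. The learning rate, decay, and pruning threshold are invented for illustration; this is not Pathway's BDH implementation.

```python
import numpy as np

N = 8            # number of neurons (toy size)
ETA = 0.1        # Hebbian learning rate (illustrative value)
DECAY = 0.02     # passive decay of unused links
THRESH = 1e-3    # prune weights below this, keeping the graph sparse

# Synaptic weight matrix: W[i, j] is the link from neuron i to neuron j.
W = np.zeros((N, N))

def hebbian_step(W, activity):
    """One Hebbian update: co-active neurons strengthen their link,
    all links decay slightly, and near-zero links are pruned."""
    # Outer product: delta_w_ij is proportional to pre * post activity.
    W = W + ETA * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)          # no self-connections
    W = W * (1.0 - DECAY)             # unused links fade over time
    W[np.abs(W) < THRESH] = 0.0       # prune: topology stays sparse
    return W

# Repeatedly present an input in which neurons 0-2 fire together:
pattern = np.array([1., 1., 1., 0., 0., 0., 0., 0.])
for _ in range(20):
    W = hebbian_step(W, pattern)

# Links within the co-active group grew strong; all others stayed zero.
print(W[0, 1] > 0, W[0, 5] == 0)   # → True True
```

The sparsity that emerges here (most of W stays exactly zero) is the property the episode credits with making the architecture efficient on GPUs: only a small, relevant subset of connections participates in each computation.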

She highlighted BDH's potential applications in industries like healthcare claims resolution and nuclear research, where sparse, highly valuable data requires advanced reasoning capabilities. Additionally, BDH's memory-driven approach makes it ideal for real-time learning and tasks that evolve over time.

The episode delved into the technical aspects of BDH, including the distinction between fast weights (state) and slow weights (parameters), which enable the model to store knowledge effectively in synapses rather than nodes. This approach allows for organic growth and resilience, mirroring complex systems observed in nature.
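The fast-weight/slow-weight split can be sketched in a few lines: slow weights are learned offline and held fixed, while fast weights are updated Hebbian-style during inference so that recent context lives in the synapses themselves. All names and the specific update rule below are assumptions for illustration, not Pathway's actual design.

```python
import numpy as np

class FastSlowLayer:
    """Toy split between slow weights (parameters, fixed at inference)
    and fast weights (state, updated while the model runs).
    Illustrative only; not the BDH implementation."""

    def __init__(self, dim, eta=0.05, decay=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_slow = rng.normal(scale=0.1, size=(dim, dim))  # learned offline
        self.W_fast = np.zeros((dim, dim))                    # starts empty
        self.eta, self.decay = eta, decay

    def forward(self, x):
        # Output mixes long-term knowledge (slow) with recent context (fast).
        y = np.tanh((self.W_slow + self.W_fast) @ x)
        # Store the input/output correlation in the fast weights, so context
        # persists across calls instead of vanishing after each one.
        self.W_fast = (1 - self.decay) * self.W_fast + self.eta * np.outer(y, x)
        return y

layer = FastSlowLayer(dim=4)
x = np.array([1.0, 0.0, 0.0, 0.0])
layer.forward(x)
print(np.any(layer.W_fast != 0))   # → True: state persists between calls
```

Storing knowledge in the fast weights (synapses) rather than in node activations is what lets such a model carry context forward, in contrast to a transformer whose state is discarded once the context window is cleared.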

Stamirowska discussed the implications of such architectures for creativity and generalization, suggesting BDH could enable models to achieve eureka moments through emergent reasoning. Unlike traditional LLMs, BDH's design is aimed at facilitating exploration of concepts and ideas rather than merely recomposing existing knowledge.

Finally, Stamirowska outlined Pathway's partnership with Nvidia and AWS to productize BDH, making it accessible for enterprise use cases. She emphasized the importance of memory and reasoning as the next frontier in AI, with BDH providing a sustainable and efficient path forward.

Key Questions Answered

What is Pathway's Baby Dragon Hatchling architecture discussed on Eye On A.I.?

Pathway's Baby Dragon Hatchling (BDH) is a post-transformer AI architecture designed to incorporate memory and reasoning capabilities. Unlike traditional LLMs, BDH uses a graph-based structure where synaptic connections strengthen based on activity, enabling consistent reasoning and reducing hallucinations.

How does BDH differ from traditional transformer models?

BDH incorporates memory into its architecture, allowing it to retain context and adapt over time. It uses sparsity and Hebbian learning principles to optimize connections between neurons, enabling efficient reasoning without requiring massive scale like transformers.

Can BDH eliminate hallucinations in AI models?

BDH minimizes hallucinations by retaining memory and ensuring coherent reasoning over time. Its graph-based structure and local dynamics enable models to focus on relevant connections, reducing errors during inference.