AI at the Edge is a different operating environment

Practical AI Podcast Recap

Published:

Guests: Brandon Shibley

What Happened

Edge AI refers to deploying AI models outside of the cloud, close to where data is captured, such as on devices or sensors. Brandon Shibley from Edge Impulse, a Qualcomm company, explains that significant innovation in silicon is enabling the embedding of AI models at the edge with greater efficiency and capability.

Economic pressures are pushing AI toward productive outcomes and measurable return on investment. Large language models (LLMs) continue to grow in the cloud while shrinking for edge applications, opening up a range of problem-solving possibilities. Edge devices can run small to mid-size LLMs, from single-digit billions up to tens of billions of parameters.

Because edge deployments are constrained by size, power, and connectivity, edge AI often combines multiple lean models to solve a problem efficiently. The key constraints at the edge are size, power, cost, reliability, latency, and privacy; processing data locally lets sensitive data stay private.

Physical AI, such as that used in robotics or self-driving vehicles, requires real-time performance, influencing whether computation occurs at the sensor or in the cloud. Cascades or ensembles of models are employed at the edge to process data efficiently, often beginning with lightweight models for initial detection.
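
The cascade idea described above can be sketched as a two-stage pipeline: a cheap always-on model gates a more expensive one. This is an illustrative sketch only; the stand-in "models" and the thresholds are hypothetical, not an Edge Impulse or Qualcomm API.

```python
def lightweight_detector(frame):
    # Stage 1: a cheap always-on check (e.g. motion or keyword spotting).
    # Here a simple mean-energy threshold stands in for a tiny model.
    return sum(abs(x) for x in frame) / len(frame) > 0.5

def heavy_classifier(frame):
    # Stage 2: a larger, costlier model, run only when stage 1 fires.
    # A peak-value rule stands in for real inference.
    return "person" if max(frame) > 0.9 else "background"

def cascade(frame):
    if not lightweight_detector(frame):
        return None  # skip the expensive stage, saving power and latency
    return heavy_classifier(frame)
```

Running the heavy model only on frames that pass the first stage is what makes always-on sensing feasible within an edge power budget.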

Edge Impulse provides a platform for data handling, model training, and optimization for edge deployment. The edge environment is fragmented across diverse hardware, unlike the more unified cloud environment dominated by Nvidia. MLOps practices are crucial for continuously deploying and updating models as environments change.

Connectivity issues and distributed environments pose challenges for managing and updating edge AI deployments. Over-the-air update frameworks help manage software and model updates on edge devices. Knowledge distillation transfers knowledge from large models to smaller, specialized models suited for edge deployment.
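
At the core of knowledge distillation is training the small student model to match the teacher's softened output distribution. A minimal sketch of that loss, assuming temperature-scaled softmax over raw logits (the temperature value here is an illustrative choice):

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probability distribution; higher temperature flattens it,
    # exposing more of the teacher's "dark knowledge" about non-top classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's; minimizing it pulls the student toward the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is smallest when the student's distribution matches the teacher's, which is exactly the transfer of knowledge the episode describes.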

TinyML refers to small machine learning models used on very small devices, like wearable rings. Edge Impulse is a leading platform in the edge AI space, addressing the diversity and fragmentation of silicon and abstracting hardware differences for model deployment.
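
A common ingredient in fitting models onto such tiny devices is quantization, e.g. storing float32 weights as int8 for roughly a 4x memory reduction. A minimal sketch of symmetric int8 quantization (a generic technique, not a specific Edge Impulse implementation):

```python
def quantize_int8(weights):
    # Map float weights into [-127, 127] using a single scale factor.
    peak = max(abs(w) for w in weights)
    scale = peak / 127.0 if peak > 0 else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate recovery of the original floats; error <= scale / 2.
    return [qi * scale for qi in q]
```

Each weight then occupies one byte instead of four, at the cost of a small, bounded rounding error, which is often acceptable for the small models TinyML targets.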

Qualcomm's acquisition of Edge Impulse enhances power efficiency and access to accelerators like the Hexagon NPU. Edge AI is especially relevant for battery-powered devices, such as autonomous vehicles, where power efficiency is paramount. Developers can differentiate products using cost-efficient, power-efficient processors for AI model deployment.
