Powering the Edge: the evolution of AI from digital to neuromorphic systems for ultra-low power performance
All these digital and analog technologies are coming together to deliver lower power and higher performance. Image recognition and computer vision have long been stalwart applications of AI, reliably identifying defects on the production line at speeds beyond the human eye. This capability is moving further to the Edge of the network, down to sensors in the field. Embedded microcontrollers from STMicroelectronics, NXP Semiconductors, Renesas Electronics, Infineon Technologies and Analog Devices include a variety of digital AI accelerator blocks alongside their CPU cores, often dedicated to particular applications, whether that is refining sensor data, correcting for errors, or recognising patterns in images or spoken words. Embedded algorithms are moving from digital signal processing (DSP) to CNNs and on to transformers for object recognition, object detection and pose detection, whether that is fault inspection on a production line, shelf monitoring in
a warehouse or person monitoring in the smart home. One challenge is that data from sensors contains a lot of zeros. This sparsity is a major problem for digital AI chips, which previously had to process every value, whether it was zero or not. The latest designs tackle sparsity head on, reducing the amount of processing required and so the amount of energy used. Femtosense in the US, for example, has designed an AI accelerator optimised for sparse networks, both reducing the amount of data and operating directly on the compressed data flow. This allows the AI framework to fit into on-chip memory, with 1Mbit of SRAM available in the first-generation chip, slashing the power consumption of edge AI operations. Building the chip on a 22nm fully depleted silicon-on-insulator (FD-SOI) process reduces the power consumption even further. Rather than being placed on a PCB, the chip has been combined with a 40nm microcontroller in the same package, making it easier for engineers to use without adding too much cost. There are many other accelerators for a wide range of applications. Hailo and Ambarella are seeing success in driver safety systems and in self-driving cars and trucks, while Axelera has developed an architecture that handles the AI models in memory. This ‘in-memory
compute’ can dramatically cut the power consumption for edge applications. The chip, built on a 12nm process at TSMC, has been benchmarked at 480 frames/s running YOLO AI video analysis on 16 HD streams simultaneously, or 30 frames/s per stream, for embedded security camera applications. The Axelera Metis chip is now available on M.2 boards for easy integration with controllers, and Hailo has been working with Raspberry Pi on its Pi 5 AI Kit. This brings the Hailo-8 AI accelerator to both professional and enthusiast creators for home automation, security and robotics based on the Raspberry Pi 5 board. The AI Kit is designed for the Raspberry Pi 5 and uses the M.2 HAT+ connection to add the Hailo-8L M.2 AI acceleration module, providing 13 TOPS of edge AI inference for computer vision and other edge AI applications. The key for developers is that the accelerator is fully integrated with Raspberry Pi’s camera software stack and supports numerous out-of-the-box AI applications through Hailo’s software suite and model zoo. This enables Raspberry Pi’s industrial customers to integrate AI into high-performance solutions that are extremely cost-effective and power-efficient. For enthusiasts, the AI Kit provides an accessible way to enhance their creative projects with AI.
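The sparsity idea described above can be sketched in a few lines of Python. This is an illustrative model only, not Femtosense’s actual data format or hardware: the activations are stored in a compressed form that keeps just the non-zero values and their positions, and the multiply-accumulate then runs over the compressed entries alone, so every zero costs nothing.

```python
# Sketch of sparsity-aware compute: store only non-zero activations
# (a CSR-style compressed form) and multiply-accumulate over those
# entries alone. Hypothetical names and layout, for illustration only.

def compress(vector):
    """Return (indices, values) for the non-zero entries only."""
    pairs = [(i, v) for i, v in enumerate(vector) if v != 0]
    return [i for i, _ in pairs], [v for _, v in pairs]

def sparse_dot(indices, values, weights):
    """Dot product computed directly on the compressed activation."""
    return sum(v * weights[i] for i, v in zip(indices, values))

# ReLU-style activation vectors are typically mostly zeros.
activation = [0, 0, 3, 0, 0, 0, 2, 0]
weights = [1, 1, 4, 1, 1, 1, 5, 1]

idx, vals = compress(activation)
print(len(vals), "of", len(activation), "entries need processing")  # 2 of 8
print(sparse_dot(idx, vals, weights))  # 22 (3*4 + 2*5)
```

Here only two of the eight entries are touched, which is the source of both the compute and the energy saving: in a dense design all eight multiplies would be performed regardless.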
There are many types of machine learning. AI implemented at the Edge of the network and embedded into devices is bringing significant advantages in performance and power consumption, but many other kinds of AI and machine learning algorithms are being used in all sorts of places, not just the data centre. The technology has been evolving from digital deep neural networks (DNNs) and convolutional neural networks (CNNs) to transformer networks. At the same time, some of these embedded AI chips are using analog approaches for more performance at much lower power, particularly for processing signals from sensors locally without having to send the data to the Cloud.
Contributed by DigiKey’s European Editors
Figure 1: An Edge AI evaluation board from Infineon Technologies
we get technical