Powering the Edge: the evolution of AI from digital to neuromorphic systems for ultra-low power performance
Syntiant is also adding AI accelerators for handling sensor data. Its Neural Decision Processors (NDPs) are specifically designed to run deep learning models, providing 100x the efficiency and 10x to 30x the throughput of existing low-power microcontrollers. These NDPs can be used for applications ranging from acoustic event detection in security systems to video processing in teleconferencing, equipping almost any device with real-time data processing and decision making at near-zero latency, without the need for libraries or compilers.
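To see what an efficiency gain of that order means in practice, a back-of-envelope sketch helps. The MCU energy figure and daily budget below are purely illustrative assumptions, not vendor data; only the 100x factor comes from the text.

```python
# Hypothetical comparison of energy per inference between a
# general-purpose low-power MCU and a dedicated NDP.
# The MCU figure and daily budget are illustrative assumptions.
MCU_ENERGY_PER_INFERENCE_MJ = 5.0   # assumed MCU cost per inference (mJ)
NDP_EFFICIENCY_GAIN = 100           # the 100x efficiency claim from the text

ndp_energy_mj = MCU_ENERGY_PER_INFERENCE_MJ / NDP_EFFICIENCY_GAIN

# With a fixed energy budget, the efficiency gain translates directly
# into more inferences per day on the same battery.
budget_mj_per_day = 1000.0          # assumed daily energy budget for the AI block
mcu_inferences = budget_mj_per_day / MCU_ENERGY_PER_INFERENCE_MJ
ndp_inferences = budget_mj_per_day / ndp_energy_mj

print(f"MCU: {mcu_inferences:.0f} inferences/day, NDP: {ndp_inferences:.0f}")
```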
Neuromorphic AI replicates the structure of the brain, with interconnected neurons. When a signal is detected, a spike of data propagates through the network. These spiking networks use far less power, as only the neurons along the signal path are active. They can be used for always-on audio detectors in chips from companies such as POLYN, or in an image processor from Prophesee, which is teamed with the Akida spiking neural network from Brainchip as IP that can be integrated into other chips. POLYN, in Cambridge, UK, developed its Neuromorphic Analog Signal Processing (NASP) technology to handle any type of sensor and to support all kinds of edge AI algorithms. The neurons in the chip are physically implemented as analog circuit elements according to the mathematical model of a single neuron, and are optimised for TinyML algorithms. TinyML cuts down the amount of data that needs to be processed using techniques such as embeddings. An embedding is a function that maps a discrete list of values into a continuous vector that can be processed by an edge AI engine, and is easier to train. This allows AI computations to be performed directly on the device, without requiring users to send data to the cloud or a remote server.
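The embedding idea can be sketched in a few lines: discrete sensor event IDs index into a table of learned continuous vectors. The vocabulary size, dimension and random initialisation below are illustrative assumptions; in a real system the table would be learned during training.

```python
import numpy as np

# Minimal sketch of an embedding lookup: discrete sensor event IDs are
# mapped to continuous vectors a small edge model can process.
# Sizes and random initialisation are illustrative only.
VOCAB_SIZE = 16     # number of discrete event types (assumption)
EMBED_DIM = 4       # size of the continuous vector (assumption)

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype(np.float32)

def embed(event_ids):
    """Map a list of discrete event IDs to continuous vectors."""
    return embedding_table[np.asarray(event_ids)]

vectors = embed([3, 7, 3])
print(vectors.shape)   # three events, each a 4-dimensional vector
```

The same discrete ID always maps to the same vector, so downstream layers can learn smooth relationships between events instead of handling raw categorical codes.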
The TENNs are designed for time-continuous streaming data, such as video analytics, target tracking and audio classification. This can boost the analysis of MRI and CT medical scans for vital-signs prediction, as well as time-series analytics used in forecasting and predictive maintenance, highlighting when equipment is going to fail so that repairs can be scheduled. The TENNs allow for radically simpler implementations by consuming raw data directly from sensors. Like the POLYN and Innatera approaches, this drastically reduces model size and the number of operations performed, while maintaining very high accuracy. This can shrink design cycles and lower the cost of development for customers such as Renesas Electronics. Brainchip has also added support for digital Vision Transformer (ViT) acceleration for image classification, object detection and semantic segmentation. This allows the chip to self-manage the execution of complex networks such as ResNet-50 entirely in the neural processor, without CPU intervention, minimizing system load.
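Consuming raw streaming data sample by sample, rather than in buffered frames, can be illustrated with a causal temporal convolution over a short ring buffer. This is a generic sketch, not Brainchip's TENN implementation; the kernel is an arbitrary illustrative filter, where a TENN would use learned spatio-temporal filters.

```python
import numpy as np
from collections import deque

# Sketch of a causal temporal convolution that consumes one raw sample
# per step using a short ring buffer. The kernel is illustrative only.
KERNEL = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # hypothetical 5-tap filter

class StreamingConv1D:
    def __init__(self, kernel):
        self.kernel = kernel
        # ring buffer holds the most recent len(kernel) samples
        self.buffer = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, sample):
        """Push one raw sample, emit one filtered output (causal)."""
        self.buffer.append(sample)
        return float(np.dot(self.kernel, np.array(self.buffer)))

conv = StreamingConv1D(KERNEL)
# Feeding a unit impulse plays the kernel taps back out, one per step.
outputs = [conv.step(s) for s in [0.0, 0.0, 1.0, 0.0, 0.0]]
```

The memory footprint is just the kernel plus one small buffer, which is why streaming operation suits constrained edge devices.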
Figure 3: The Hailo 8 is the first AI accelerator to be added to the Raspberry Pi 5 single board computer
Figure 6: The Akida 1000 neuromorphic AI IP
Figure 4: The Syntiant neural decision processor
The NASP chips are true Tiny AI implementations that improve latency and power consumption, and enable inference computations directly on devices like wearables, IoT sensors and more, increasing their functionality while also improving users' privacy, as the data stays on the device. POLYN has various implementations of its analog AI, handling vibration data to extract useful information from a sensor, or watching for and recognising a wake word for voice control. Algorithm-based data compression does not work well for noisy signals because of the fundamentally linear nature of such algorithms. Neural networks, on the other hand, can extract useful information even from very noisy data, due to the non-linear way they process it. Some deep neural network architectures, such as NASP, prove exceptionally well suited to vibration monitoring challenges.

Dutch neuromorphic AI chipmaker Innatera has combined an ultra-low-power spiking neural network engine with a custom 32-bit microcontroller core using the open RISC-V instruction set architecture (ISA) and 384 KB of embedded SRAM. This creates a single chip that processes sensor data quickly and efficiently with power consumption under 1 mW. It is similarly being used for signal processing and pattern-recognition tasks, running spiking neural networks alongside DNNs and conventional processing in the same device. All of this fits into a 2.16 mm x 3 mm chip in a 35-pin wafer-scale package.

There is also a key trend towards combining analog neuromorphic AI and digital AI technologies. The first generation of the Akida spiking neural processor developed by Brainchip has been evaluated by NASA for handling sensors on space missions, and the second generation now includes Temporal Event-based Neural Network (TENN) spatial-temporal convolutions that supercharge the processing of raw, time-continuous streaming data.
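A sub-1 mW power budget puts coin-cell operation within reach, which a quick estimate makes concrete. The cell capacity and voltage below are nominal datasheet-style values used purely for illustration; only the 1 mW figure comes from the text.

```python
# Back-of-envelope battery-life estimate for an always-on chip drawing
# the sub-1 mW quoted in the text, running from a CR2032 coin cell.
# Cell capacity and voltage are nominal values, used for illustration.
CELL_CAPACITY_MAH = 225.0   # nominal CR2032 capacity (assumption)
CELL_VOLTAGE_V = 3.0        # nominal cell voltage
CHIP_POWER_MW = 1.0         # upper bound on chip power from the text

energy_mwh = CELL_CAPACITY_MAH * CELL_VOLTAGE_V   # total cell energy, mWh
runtime_hours = energy_mwh / CHIP_POWER_MW        # hours of operation
runtime_days = runtime_hours / 24.0

print(f"~{runtime_days:.0f} days of always-on operation")
```

Even this crude estimate shows roughly a month of always-on sensing from a coin cell, which is the practical payoff of the sub-milliwatt operating point.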
The Akida IP platform can also learn on the chip, allowing continuous improvement and data-less customization that improves security and privacy. This is being used in secure, small-form-factor devices such as hearables and wearables that take raw audio input, and in medical devices that monitor heart and respiratory rates and other vitals while consuming only microwatts of power. It can scale up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices, enabling a wide variety of applications, from surveillance systems and factory management to augmented reality, to scale effectively. All of this marks the combination of spiking and digital neural networks with a focus on ultra-low power. This combination of technologies is potentially a key step forward for scaling up the size and performance of all kinds of embedded AI systems without driving up power consumption.
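On-device learning without cloud retraining can be sketched with a generic incremental nearest-prototype classifier. This is not Akida's actual algorithm, just an illustration of the idea: per-class mean feature vectors are updated in place as labelled examples arrive, so the raw data never leaves the device.

```python
import numpy as np

# Generic sketch of incremental on-device learning (NOT Akida's
# algorithm): a nearest-prototype classifier whose per-class mean
# vectors are updated in place as labelled examples arrive.
class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}   # class label -> running mean feature vector
        self.counts = {}

    def learn(self, label, feature):
        """Incrementally fold one feature vector into the class prototype."""
        feature = np.asarray(feature, dtype=np.float64)
        if label not in self.prototypes:
            self.prototypes[label] = feature.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.prototypes[label] += (feature - self.prototypes[label]) / self.counts[label]

    def predict(self, feature):
        """Return the label of the nearest stored prototype."""
        feature = np.asarray(feature, dtype=np.float64)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c] - feature))

# Hypothetical audio-event features: learn two classes from one example each.
learner = PrototypeLearner()
learner.learn("cough", [1.0, 0.0])
learner.learn("speech", [0.0, 1.0])
print(learner.predict([0.9, 0.1]))
```

Because only compact prototype vectors are stored and updated, this style of learning fits a memory- and power-constrained device, and nothing about the user's recordings needs to be uploaded.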
Analog Edge AI
These are all digital AI implementations, but as every engineer knows, there is often another way. One increasingly popular approach is neuromorphic, or spiking, AI, which replicates the structure of the brain.
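The basic building block of a spiking network is the leaky integrate-and-fire (LIF) neuron, which can be sketched in a few lines. The leak and threshold values are illustrative assumptions; real neuromorphic hardware implements this dynamic in analog or digital silicon.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic building
# block of spiking neural networks. Parameter values are illustrative.
LEAK = 0.9          # membrane potential decay per time step (assumption)
THRESHOLD = 1.0     # firing threshold (assumption)

def lif_run(input_current, leak=LEAK, threshold=THRESHOLD):
    """Integrate a sequence of input currents, emitting 1 on each spike."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i            # leaky integration of the input
        if v >= threshold:
            spikes.append(1)        # spike propagates to downstream neurons
            v = 0.0                 # membrane potential resets after firing
        else:
            spikes.append(0)        # silent: no downstream work, no power
    return spikes

spikes = lif_run([0.0, 0.6, 0.6, 0.0, 0.0, 1.2])
```

Most time steps produce no spike, so downstream neurons do no work on them. That sparsity is why spiking networks draw so little power: computation happens only along the path a spike actually takes.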
Figure 2: Combining a sparse neural network accelerator with an ARM microcontroller in a single package
Figure 5: The NASP chip from POLYN
The NASP chips are true Tiny AI