3 uses for tinyML at the Edge

Written by Jacob Beningo

Figure 1: An input speech signal is digitally processed to create a spectrogram used to train a neural network (NN) to detect keywords. Image source: Arm

Machine learning (ML) is now well established in the cloud and has been making its way to the Edge on relatively powerful processors running Linux. The problem with traditional ML on these systems is that their power profiles are too large for battery-operated edge devices that need to 'disconnect' and keep working on their own. The trend, and the future of ML at the Edge, is tinyML, which brings ML algorithms to resource-constrained devices such as microcontrollers based on Arm Cortex-M processors. In this blog, we will explore the most popular use cases for tinyML on microcontroller-based devices at the Edge.
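Figure 1 summarizes the usual flow: audio is converted into a spectrogram 'image' that a small neural network learns to classify. The training side of that pipeline is typically prototyped on a workstation before the model is quantized for a Cortex-M target. The sketch below shows what that front end and a deliberately tiny model might look like; the 16 kHz, 1-second clip format, the keyword list, and the layer sizes are illustrative assumptions, not Arm's reference design.

```python
# Minimal sketch: waveform -> log-mel spectrogram -> tiny CNN classifier.
# Assumes 1-second, 16 kHz mono clips and a made-up keyword list.
import numpy as np
import tensorflow as tf

KEYWORDS = ["yes", "no", "stop", "go"]   # illustrative keyword set
SAMPLE_RATE = 16000                      # 1 s of audio per training clip

def log_mel_spectrogram(waveform: np.ndarray) -> np.ndarray:
    """Turn 16,000 raw samples into a 49 x 40 log-mel spectrogram."""
    stft = tf.signal.stft(waveform.astype(np.float32),
                          frame_length=640, frame_step=320)  # 40 ms / 20 ms
    magnitude = tf.abs(stft)
    mel_weights = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=40,
        num_spectrogram_bins=magnitude.shape[-1],
        sample_rate=SAMPLE_RATE)
    mel = tf.tensordot(magnitude, mel_weights, axes=1)
    return tf.math.log(mel + 1e-6).numpy()

# A deliberately small CNN: once quantized to 8 bits it weighs in at tens of
# kilobytes, which is what makes it plausible for a Cortex-M microcontroller.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(KEYWORDS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Usage: spec = log_mel_spectrogram(clip)[..., np.newaxis] yields one
# (49, 40, 1) training example; model.fit() then trains on batches of these.
```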

Use case #1: keyword spotting

The first use case where tinyML is becoming popular is keyword spotting: the ability of a device to recognize a spoken keyword such as 'Hey Siri', 'Alexa', or 'Hello'. Keyword spotting has many uses in edge devices. For example, one might use a low-power processor to listen for a keyword that wakes up a more powerful one. Another is to control an embedded system or a robot; I've seen examples where a microcontroller was used to decode keywords like 'forward', 'backward', 'stop', 'right', and 'left' to control a robot's movement.
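On the device itself the network would typically run through TensorFlow Lite for Microcontrollers in C or C++, but the control flow is easy to sketch on a workstation first. The example below is a hypothetical illustration of the robot case: it assumes a quantized model file named kws.tflite that takes a spectrogram and outputs scores for the five motion keywords, plus a placeholder drive() helper standing in for the real motor API.

```python
# Hypothetical sketch of keyword-driven robot control.
# "kws.tflite" and drive() are placeholders, not real project artifacts.
import numpy as np
import tensorflow as tf

COMMANDS = ["forward", "backward", "stop", "right", "left"]

interpreter = tf.lite.Interpreter(model_path="kws.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def drive(left: float, right: float) -> None:
    """Placeholder motor command; a real robot would set PWM duty cycles."""
    print(f"motors: left={left}, right={right}")

def classify(spectrogram: np.ndarray) -> str:
    """Run one inference and return the most likely keyword."""
    # A fully int8-quantized model would also need scale/zero-point handling.
    interpreter.set_tensor(input_details["index"],
                           spectrogram.astype(input_details["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    return COMMANDS[int(np.argmax(scores))]

def on_keyword(word: str) -> None:
    """Map a spotted keyword to a differential-drive motion."""
    motions = {
        "forward":  (1.0, 1.0),
        "backward": (-1.0, -1.0),
        "stop":     (0.0, 0.0),
        "right":    (1.0, -1.0),
        "left":     (-1.0, 1.0),
    }
    drive(*motions[word])
```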
