DigiKey-emag- Edge AI&ML-Vol-10

How to run a ‘Hello World’ machine learning model on STM32 microcontrollers

Here's a quick list:
■ Gesture classification
■ Anomaly detection
■ Analog meter reader
■ Guidance and control (GNC)
■ Package detection

No matter the use case, the best way to start getting familiar with tinyML is with a 'Hello World' application, which helps developers learn and understand the basic process they will follow to get a minimal system up and running. There are five necessary steps to run a tinyML model on an STM32 microcontroller:
1. Capture data
2. Label data
3. Train the neural network
4. Convert the model
5. Run the model on the microcontroller

Capturing, labelling, and training a 'Hello World' model

Developers generally have many options for capturing and labelling the data needed to train their model. First, there are many online training databases where developers can search for data that someone else has already collected and labelled. For example, for basic image detection there is CIFAR-10 or ImageNet, and for training a model to detect smiles in photos, there is an image collection for that too. Online data repositories are clearly a great place to start.

If the required data hasn't already been made publicly available on the Internet, another option is for developers to generate their own data. Matlab or some other tool can be used to generate the datasets; if automatic data generation is not an option, it can be done manually. Finally, if this all seems too time-consuming, there are some datasets available for purchase, also on the Internet. Collecting the data is often the most exciting and interesting option, but it is also the most work.

The 'Hello World' example being explored here shows how to train a model to generate a sine wave and deploy it to an STM32. The example was put together by Pete Warden and Daniel Situnayake as part of their work at Google on TensorFlow Lite for Microcontrollers. This makes the job easier because they have put together a simple, public tutorial on capturing, labelling, and training the model. It can be found on Github here; once there, developers should click the 'Run in Google Colab' button. Google Colab, short for Google Colaboratory, allows developers to write and execute Python in their browser with zero configuration and provides free access to Google GPUs.

The output from walking through the training example will include two different model files: a model.tflite TensorFlow model that is quantized for microcontrollers, and a model_no_quant.tflite model that is not quantized. The quantization indicates how the model activations and biases are stored numerically; the quantized version produces a smaller model that is better suited to a microcontroller. For those curious readers, the trained model results versus the actual sine wave results can be seen in Figure 2. The output of the model is in red. The sine wave output isn't perfect, but it works well enough for a 'Hello World' program.

Figure 2. A comparison between TensorFlow model predictions for a sine wave versus the actual values. Image source: Beningo Embedded Group

Selecting a development board

Before looking at how to convert the TensorFlow model to run on a microcontroller, a microcontroller needs to be selected for deploying the model. This article focuses on STM32 microcontrollers because STMicroelectronics has many tinyML/ML tools that work well for converting and running models. In addition, STMicroelectronics has a wide variety of parts compatible with its ML tools (Figure 3). If one of these boards is lying around the office, it's perfect for getting the 'Hello World' application up and running. However, for those interested in going beyond this example and getting into gesture control or keyword spotting, opt for the STM32 B-L4S5I-IOT01A Discovery IoT Node (Figure 4).

Figure 3. Shown are the microcontrollers and the microprocessor unit (MPU) currently supported by the STMicroelectronics AI ecosystem. Image source: STMicroelectronics

Figure 4. The STM32 B-L4S5I-IOT01A Discovery IoT Node is an adaptable experimentation platform for tinyML due to its onboard Arm Cortex-M4 processor, MEMS microphone, and three-axis accelerometer. Image source: STMicroelectronics

This board has an Arm Cortex-M4 processor based on the STM32L4+ series. The processor has 2 megabytes (Mbytes) of flash memory and 640 kilobytes (Kbytes) of RAM, providing plenty of space for tinyML models. The board is adaptable for tinyML use case experiments because it also has STMicroelectronics' MP34DT01 microelectromechanical systems (MEMS) microphone, which can be used for keyword spotting application development. In addition, the onboard LIS3MDLTR three-axis accelerometer, also from STMicroelectronics, can be used for tinyML-based gesture detection.

Converting and running the TensorFlow Lite model using STM32Cube.AI

Armed with a development board that can be used to run the tinyML model, developers can now start to convert the TensorFlow Lite model into something that can run on the microcontroller. The TensorFlow Lite model can run directly on the microcontroller, but it needs a runtime environment to process it.

When the model is run, a series of functions need to be performed. These functions start with collecting the sensor data, then filtering it, extracting

Figure 5. How data flows from sensors to the runtime and then to the output in a tinyML application. Image source: Beningo Embedded Group
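The capture, label, and train steps above can be sketched without any ML framework at all. The actual tutorial trains a small Keras network in Colab; as a dependency-light stand-in, the snippet below "captures" sample points, "labels" them with noisy sine values, and "trains" a least-squares polynomial fit. The seed, sample counts, and polynomial degree are illustrative choices, not values from the article.

```python
import numpy as np

# Step 1: "capture" data - sample x uniformly over one sine period.
rng = np.random.default_rng(seed=0)
x = rng.uniform(0.0, 2.0 * np.pi, 1000)

# Step 2: "label" data - the ground-truth output is sin(x),
# plus a little noise to mimic real-world captures.
y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

# Step 3: "train" - a degree-7 least-squares polynomial fit stands
# in for the tutorial's small neural network.
coeffs = np.polyfit(x, y, deg=7)
model = np.poly1d(coeffs)

# Evaluate: mean absolute error against the clean sine wave.
x_test = np.linspace(0.0, 2.0 * np.pi, 200)
mae = np.mean(np.abs(model(x_test) - np.sin(x_test)))
print(f"mean absolute error: {mae:.4f}")
```

As in Figure 2, the fitted output tracks the sine wave imperfectly but closely enough for a 'Hello World' exercise.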
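The difference between model.tflite and model_no_quant.tflite comes down to how values are stored. The sketch below shows the basic idea behind affine int8 quantization — mapping a float32 range onto 8-bit integers with a scale and zero point — in simplified form; it is not the exact TensorFlow Lite implementation, and the weight values are illustrative.

```python
import numpy as np

# A float32 "weight tensor" to quantize (illustrative values).
weights = np.linspace(-1.0, 1.0, 101, dtype=np.float32)

# Affine int8 quantization: map [min, max] onto [-128, 127].
scale = (weights.max() - weights.min()) / 255.0
zero_point = int(np.round(-128 - weights.min() / scale))
q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantize to measure the precision that was traded away.
dequant = (q.astype(np.float32) - zero_point) * scale
max_err = np.max(np.abs(dequant - weights))

print(f"storage: {weights.nbytes} bytes -> {q.nbytes} bytes")
print(f"max round-trip error: {max_err:.5f}")
```

The 4x storage reduction, at the cost of a small round-trip error bounded by roughly half the scale, is why the quantized model is the better fit for a flash- and RAM-constrained microcontroller.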
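The sensor-to-output flow in Figure 5 — collect, filter, extract features, run inference — can be sketched as a pipeline. Everything below is a hypothetical illustration: the simulated accelerometer signal, the moving-average filter, the feature set, and the threshold "model" are stand-ins chosen for clarity, not components of the article's example.

```python
import numpy as np

def collect(n=256, seed=1):
    """Stand-in for reading a window of accelerometer samples."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n)
    # A 5 Hz "gesture" component buried in sensor noise.
    return np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(n)

def moving_average(samples, width=8):
    """Simple low-pass filter stage."""
    kernel = np.ones(width) / width
    return np.convolve(samples, kernel, mode="valid")

def extract_features(samples):
    """Reduce the window to the features the model consumes."""
    return np.array([samples.mean(), samples.std(), np.abs(samples).max()])

def run_inference(features, threshold=0.5):
    """Stand-in for invoking the tinyML runtime on the features."""
    return bool(features[1] > threshold)  # 'motion' if variation is high

raw = collect()
filtered = moving_average(raw)
features = extract_features(filtered)
print("motion detected:", run_inference(features))
```

On the STM32 the same stages run in C against real sensor reads, with the inference step handled by the model runtime rather than a threshold, but the shape of the pipeline is the same.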



