DigiKey eMag: Edge AI & ML, Vol. 10

How to run a ‘Hello World’ machine learning model on STM32 microcontrollers

the necessary features, and feeding it to the model. The model will produce a result, which can then be further filtered before, usually, some action is taken. Figure 5 provides an overview of what this process looks like.
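This capture, process, infer, and act loop can be sketched in a few lines of C. The helper names below (read_sensor_sample, extract_features, run_inference, take_action) are hypothetical stand-ins for the application's sensor driver, feature-extraction code, and the generated inference call; they are not part of X-CUBE-AI or any board support package.

```c
#include <stdbool.h>
#include <stdio.h>

#define FEATURE_COUNT 4

/* Hypothetical stand-ins for the sensor driver, the feature-extraction
 * step, the model runtime call, and the resulting action. */
static float read_sensor_sample(void)
{
    return 0.42f;                      /* placeholder raw sample            */
}

static void extract_features(float raw, float *features)
{
    for (int i = 0; i < FEATURE_COUNT; i++) {
        features[i] = raw;             /* placeholder feature extraction    */
    }
}

static float run_inference(const float *features)
{
    return features[0];                /* placeholder for the model runtime */
}

static void take_action(float result)
{
    printf("result = %.3f\r\n", result);
}

int main(void)
{
    float features[FEATURE_COUNT];

    while (true) {
        /* 1. Collect raw data from the sensor.              */
        float raw = read_sensor_sample();

        /* 2. Extract the features the model was trained on. */
        extract_features(raw, features);

        /* 3. Feed the features to the model and run it.     */
        float result = run_inference(features);

        /* 4. Filter / threshold the output and act on it.   */
        take_action(result);
    }
}
```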

The X-CUBE-AI plug-in to STM32CubeMX provides the runtime environment to interpret the TensorFlow Lite model and offers alternative runtimes and conversion tools that developers can leverage. The X-CUBE-AI plug-in is not enabled by default in a project. After creating a new project and initializing the board, there is an option to enable the AI runtime under Software Packs -> Select Components. There are several options here; make sure that the Application template is used for this example, as shown in Figure 6. Once X-CUBE-AI is enabled, an STMicroelectronics X-CUBE-AI category will appear in the toolchain.

Figure 6. The X-CUBE-AI plug-in needs to be enabled using the application template for this example. Image source: Beningo Embedded Group

Clicking on the category will give the developer the ability to select the model file they created and set the model parameters, as shown in Figure 7. An analyze button will also analyze the model and provide developers with RAM, ROM, and execution cycle information. It's highly recommended that developers compare the Keras and TFLite model options. On the sine wave model example, which is small, there won't be a huge difference, but it is noticeable.

Figure 7. The analyze button will provide developers with RAM, ROM, and execution cycle information. Image source: Beningo Embedded Group

The project can then be generated by clicking 'Generate code'. The code generator will initialize the project and build in the runtime environment for the tinyML model. By default, however, nothing is feeding the model. Developers need to add code to provide the model input values (x values), which the model will then interpret and use to generate the sine y values. A few pieces of code need to be added to the acquire_and_process_data and post_process functions, as shown in Figure 8. At this point, the example is ready to run. Note: add some printf statements to get the model output for quick verification. A fast compile and deployment results in the 'Hello World' tinyML model running.
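A minimal sketch of the two functions referenced above is shown below, assuming the generated template passes the model's input and output buffers as byte-pointer arrays and that the 'Hello World' model has a single float input (x) and a single float output (approximately sin(x)). The exact signatures and buffer handling generated by X-CUBE-AI vary between versions, so treat this as an illustration rather than a drop-in replacement for the code in Figure 8.

```c
#include <stdio.h>

/* The generated code defines ai_i8 itself; it is redefined here only so
 * that the sketch stands alone. */
typedef signed char ai_i8;

#define TWO_PI 6.2831853f

/* x sweeps 0..2*pi; the step size sets how fast one sine period is traced. */
static float x_value = 0.0f;

int acquire_and_process_data(ai_i8 *data[])
{
    float *input = (float *)data[0];   /* assumed: single float input tensor */

    x_value += 0.05f;
    if (x_value > TWO_PI) {
        x_value = 0.0f;
    }
    input[0] = x_value;
    return 0;
}

int post_process(ai_i8 *data[])
{
    float *output = (float *)data[0];  /* assumed: single float output tensor */

    /* printf over a retargeted UART gives a quick sanity check of the model. */
    printf("x = %.3f, y = %.3f\r\n", x_value, output[0]);
    return 0;
}
```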

Pulling the model output for a full cycle results in the sine wave shown in Figure 9. It's not perfect, but it is excellent for a first tinyML application. From here, developers could tie the output to a pulse width modulator (PWM) and generate the sine wave.

Figure 9. The 'Hello World' sine wave model output when running on the STM32. Image source: Beningo Embedded Group

Tips and tricks for ML on embedded systems

Developers looking to get started with ML on microcontroller-based systems will have quite a bit on their plate to get their first tinyML application up and running. However, there are several 'tips and tricks' to keep in mind that can simplify and speed up their development:

■ Walk through the TensorFlow Lite for Microcontrollers 'Hello World' example, including the Google Colab file. Take some time to adjust parameters and understand how they affect the trained model
■ Use quantized models for microcontroller applications. The quantized model is compressed to work with uint8_t rather than 32-bit floating-point numbers. As a result, the model will be smaller and execute faster
■ Explore the additional examples in the TensorFlow Lite for Microcontrollers repository. Other examples include gesture detection and keyword detection
■ Take the 'Hello World' example further by connecting the model output to a PWM and a low-pass filter to see the resultant sine wave. Experiment with the runtime to increase and decrease the sine wave frequency (see the sketch after this list)
■ Select a development board that includes 'extra' sensors that will allow for a wide range of ML applications to be tried
■ As much fun as collecting data can be, it's generally easier to purchase or use an open-source database to train the model

Developers who follow these 'tips and tricks' will save quite a bit of time and grief when developing their application.
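The PWM tip and the quantization tip above can be combined in a short sketch: dequantize the model's quantized output using its reported scale and zero point, then map the roughly -1 to +1 sine value onto a timer compare value. The names OUTPUT_SCALE, OUTPUT_ZERO_POINT, PWM_PERIOD, and set_pwm_duty below are hypothetical placeholders for the values reported by the conversion tool and for the board's timer driver; this is a minimal sketch, not production code.

```c
#include <stdint.h>

/* Hypothetical values: the conversion tool reports the output tensor's
 * scale and zero point, and PWM_PERIOD matches the timer configuration. */
#define OUTPUT_SCALE       0.0078125f   /* example scale                 */
#define OUTPUT_ZERO_POINT  128          /* example zero point            */
#define PWM_PERIOD         1000u        /* timer auto-reload value       */

/* Placeholder for the board's timer driver (e.g., writing a compare register). */
static void set_pwm_duty(uint32_t compare_value)
{
    (void)compare_value;
}

/* Convert a quantized model output into a PWM compare value. */
void update_pwm_from_model(uint8_t quantized_output)
{
    /* Dequantize: y = scale * (q - zero_point), giving roughly -1..+1. */
    float y = OUTPUT_SCALE * ((int32_t)quantized_output - OUTPUT_ZERO_POINT);

    /* Map -1..+1 onto 0..PWM_PERIOD; a low-pass filter on the PWM pin
     * then reconstructs the analog sine wave. */
    float duty = (y + 1.0f) * 0.5f * (float)PWM_PERIOD;
    if (duty < 0.0f) {
        duty = 0.0f;
    }
    if (duty > (float)PWM_PERIOD) {
        duty = (float)PWM_PERIOD;
    }

    set_pwm_duty((uint32_t)duty);
}
```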

Conclusion

ML has come to the network edge, and resource-constrained microcontroller-based systems are a prime target. The latest tools allow ML models to be converted and optimized to run on real-time systems. As shown, getting a model up and running on an STM32 development board is relatively easy, despite the complexities involved. While the discussion examined a simple model that generates a sine wave, far more complex models like gesture detection and keyword spotting are possible.

