DigiKey-eMag-EdgeAI-Vol 18

We get technical

Edge AI | Volume 18

3 uses for tinyML at the Edge
How to build an AI-powered toaster
Road-tested GMSL cameras drive into new markets
How machine vision is advancing automation now


Editor’s note

Welcome to the DigiKey eMagazine Volume 18 – Edge AI. As technology continues to evolve at a rapid pace, we’re constantly exploring innovative solutions that shape the future of industries from IoT to AI and machine vision. In this issue, we’ve curated a selection of articles that offer valuable insights into these game-changing technologies and how they’re transforming the way we approach engineering challenges.

In our first feature, we delve into predictive maintenance through AI-powered data acquisition, highlighting how current sensors can play a pivotal role in optimizing efficiency and minimizing downtime. Staying on the topic of AI, we also explore tinyML at the Edge – examining three unique use cases that demonstrate how machine learning can be deployed directly within resource-constrained devices for smarter, more efficient systems.

For those venturing into the world of multicore microcontrollers, we break down why they’re essential for IoT devices at the Edge and provide practical advice on getting started with these powerful, parallel-processing units. We also take a deep dive into the crucial, yet often overlooked, aspect of data preparation in machine learning – offering clarity on why clean, structured data is the foundation of successful ML projects.

On the hardware front, we explore how to design and deploy smart machine vision systems rapidly, empowering you with the tools needed to integrate visual intelligence into applications across industries. And lastly, we turn our focus to GMSL cameras, which have been road-tested and are driving innovation into new markets, presenting opportunities that are redefining how we capture and process visual data.

This issue is packed with cutting-edge information and practical tips to keep you ahead of the curve in the world of technology. We hope these articles inspire fresh ideas and new possibilities as you navigate the exciting developments in your field.

■ Use a current sensor to efficiently acquire data for predictive maintenance with AI
■ 3 uses for tinyML at the Edge
■ Why and how to get started with multicore microcontrollers for IoT devices at the Edge
■ How to build an AI-powered toaster
■ Special feature – retroelectro: Programming a calculator to form concepts: the organizers of the Dartmouth Summer Research Project
■ What is data preparation in ML, and why is it crucial for success?
■ How to rapidly design and deploy smart machine vision systems
■ Road-tested GMSL cameras drive into new markets
■ Understanding computer and machine vision
■ How machine vision is advancing automation now


Use a current sensor to efficiently acquire data for predictive maintenance with AI Written by Clive ‘Max’ Maxfield

or greater current values). All members of the family support a frequency range of 20 hertz (Hz) to 1 kilohertz (kHz), covering the majority of industrial applications. Also, all CR31xx devices employ a hinge and locking snap that allows them to be attached without interrupting the current-carrying wire.

The Arduino Nano 33 IoT

One example of a low-cost microcontroller development platform suitable for prototyping simple AI/ML applications is the ABX00032 Arduino Nano 33 IoT from Arduino (Figure 2). Featuring an Arm Cortex-M0+ 32-bit ATSAMD21G18A processor running at 48 megahertz (MHz) with 256 kilobytes (Kbytes) of flash memory and 32 Kbytes of SRAM, the Arduino Nano 33 IoT also comes equipped with both Wi-Fi and Bluetooth connectivity.

Data capture circuit

The circuit used for the purpose of this discussion is shown below in Figure 3. The CR3111-3000 transforms the measured current driving the machine into a much smaller one using a 1000:1 ratio. Resistor R3, which is connected across the CR3111-3000’s secondary (output) coil, acts as a burden resistor, producing an output voltage proportional to the resistor value, based on the amount of current flowing through it. Resistors R1 and R2 act as a voltage divider, forming a ‘virtual ground’ with a value of 1.65 volts. This allows the values from the CR3111-3000 to swing positive and negative and still not hit a rail, since the microcontroller cannot accept negative voltages. Capacitor C1 forms part of an RC noise filter that reduces noise from the 3.3 volt

supply and nearby stray fields from getting into the measurements, thereby helping the voltage divider act as a better ground. A vacuum pump with an integrated filter was used to provide a demonstration test bench. For the purposes of this prototype, Tripp Lite’s P006-001 1 foot (ft.) extension power cord was inserted between the power supply and the vacuum pump (Figure 4).

Figure 2: The Arduino ABX00032 Nano 33 IoT provides a low-cost platform upon which to build AI/ML applications to enhance existing devices (and create new ones) to be part of the IoT. Image source: Arduino
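As a rough sanity check, the signal chain described above can be modeled in a few lines of Python. The burden and divider resistor values and the 12-bit ADC assumption here are illustrative, not a statement of the exact prototype parts:

```python
# Illustrative model of the data-capture circuit: a 1000:1 current
# transformer, a burden resistor across its secondary, an equal-resistor
# divider forming the 1.65 V 'virtual ground', and a 3.3 V, 12-bit ADC.
# Resistor and ADC values are assumptions for this sketch.

TURNS_RATIO = 1000.0   # CR3111-3000 primary:secondary current ratio
R_BURDEN = 150.0       # ohms, burden resistor across the secondary (R3)
V_SUPPLY = 3.3         # volts
R1 = R2 = 10_000.0     # ohms, divider resistors (R1 and R2)
ADC_MAX = 4095         # 12-bit ADC full scale

def virtual_ground():
    """Midpoint of the R1/R2 divider: the bias the signal swings around."""
    return V_SUPPLY * R2 / (R1 + R2)

def sensor_voltage(primary_amps):
    """Secondary current (primary / turns ratio) times the burden resistance."""
    return (primary_amps / TURNS_RATIO) * R_BURDEN

def counts_to_primary_amps(counts):
    """Invert the chain: ADC counts -> pin volts -> remove the virtual-ground
    bias -> burden-resistor current -> primary current."""
    v_pin = counts * V_SUPPLY / ADC_MAX
    v_signal = v_pin - virtual_ground()
    return (v_signal / R_BURDEN) * TURNS_RATIO

print(virtual_ground())      # 1.65 V bias point
print(sensor_voltage(10.0))  # 10 A primary -> 10 mA secondary -> 1.5 V
```

With these assumed values, a 10 A primary current produces a 1.5 V swing around the 1.65 V bias, which stays comfortably inside the microcontroller's 0 to 3.3 V input range.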

The Internet of Things (IoT) has brought about tremendous interest in using artificial intelligence (AI) and machine learning (ML) technologies to monitor the health of machines including motors, generators, and pumps, and to alert maintenance engineers as to any looming problems. One difficulty for the designers of AI/ML systems looking to implement this type of predictive maintenance is selecting the best sensor for the application. Another issue is that relatively few designers have any experience creating AI/ML applications. To obtain the data for the AI/ML system to act upon, designers often opt for sophisticated sensors like three-axis accelerometers coupled with high-powered microcontroller development platforms. In many cases, however, it’s possible to achieve the desired goal using a simple current sensor in conjunction with a more modest and less costly microcontroller development platform.

This article introduces the idea of using a current sense transformer to obtain the data required to simply and cost-effectively implement AI/ML applications. Using a low-cost Arduino IoT microcontroller development platform and a current sense transformer from CR Magnetics, the article also presents a simple circuit that employs the current sensor to monitor the health of a vacuum pump with an integrated filter, alerting the user when the filter has become clogged. Finally, the article presents an overview of the process of creating the associated AI/ML application.

avoid this complexity, it’s worth remembering that everything is interrelated. Just as an injury to one part of a person’s body can cause referred pain that is perceived elsewhere in the body, a failing bearing in a motor can modify the current being used to drive that motor. Similarly, in addition to causing overheating, a blocked air intake can also modify the current being used to drive the motor. Consequently, monitoring one aspect of a machine’s operation may cast light on other facets of its workings. As a result, it’s possible to achieve the desired monitoring and sensing goal by observing a related parameter using a substantially simpler sensor, such as the low-cost, small-size CR3111-3000 split-core current sense transformer from CR Magnetics (Figure 1). The CR3111-3000 can be used to detect current up to 100 amperes (A) (other members of the CR31xx family can be employed for lesser


Figure 3: The circuit used to convert the output from the CR3111-3000 into a form that can be used by the Arduino Nano 33 IoT with its 3.3 volt inputs. Image source: Max Maxfield

Simple sensors for AI/ML

In order to acquire the data for an AI/ML application to act upon, designers often opt for sophisticated sensors like three-axis accelerometers; but this type of sensor can generate vast amounts of data that are difficult to manipulate and understand. To

Figure 1: The CR3111-3000 split-core current sense transformer provides a low-cost, easy-to-use current detector that can be employed as the primary sensor in an AI/ML predictive maintenance application. Image source: CR Magnetics



The prototype circuit was implemented using components from the author’s treasure chest of spare parts (Figure 5). Readily available equivalents would be as follows:
■ Adafruit 64 breadboard
■ Twin Industries TW-E012-000 pre-formed wire kit for use with breadboards
■ Stackpole Electronics RNMF14FTC150R 150 ohm (Ω) ±1% 0.25 watt (W) through-hole resistor
■ Stackpole Electronics RNF14FTD10K0 10 kiloohm (kΩ) ±1% 0.25 W through-hole resistor
■ KEMET ESK106M063AC3FA 10 microfarad (µF) 63 volt aluminum electrolytic capacitor

With regard to the leads from the current sensor, 1931 22-28 AWG crimp pins from Pololu Corp. were crimped onto the ends. These pins were subsequently inserted into a 1904 5 x 1 black rectangular housing with a 0.1 inch (in.) (2.54 millimeter (mm)) pitch, also from Pololu.

Figure 4: The 1-foot extension power cord that was modified to accept the current sensor. Image source: Max Maxfield

Figure 6: Comparison of good/normal data (top) and bad/abnormal data (bottom). Apart from the differences in color, these don’t seem terribly different to the human eye, but an appropriate AI/ML model can distinguish between them. Image source: Max Maxfield

occur. However, the hardware used to implement the predictive maintenance system needs to be as simple and cost-effective as possible; also, designers need ready access to the required software to perform the analysis. As shown, instead of opting for a complex multi-axis accelerometer and associated hardware, a simple, low-cost, small-size CR3111-3000 split-core current transformer connected to a low-cost microcontroller platform can perform the required sensing and data gathering. Coupled with advances in AI/ML tools and algorithms, it’s now possible for non-AI/ML experts to create sophisticated AI/ML models that can be deployed in a wide range

the beginning and end of the run), and then loaded into NanoEdge AI Studio. The good data was collected with the vacuum pump running in its normal mode. In order to gather the bad data, the pump’s air filter was obstructed with a disk of paper. Using the good and bad data, NanoEdge AI Studio generates the best AI/ML library solution out of 500 million possible combinations. Its ongoing progress is displayed in a variety of different ways, including a scatter chart showing how well the normal signals (blue) are being distinguished from the abnormal signals (red) with regard to a threshold value, which was set to 90% in this example (Figure 7). The early models typically find it difficult to distinguish between the normal and abnormal data,

processor being used (an Arm Cortex-M0+ in the case of the Arduino Nano 33 IoT development board), the type(s) of sensor being used (a current sensor in this case), and the maximum amount of memory that is to be devoted to this AI/ML model (6 Kbytes was selected for this demonstration). In order to create the AI/ML model, it is first necessary to capture representative samples of good and bad data (Figure 6). A simple Arduino sketch (program) was created to read values from the current sensor. This data can be directly loaded into NanoEdge AI Studio ‘on-the-fly’ from the microcontroller’s USB port. Alternatively, the data can be captured into a text file, edited (to remove spurious samples at

but the system evaluates different combinations of algorithmic elements, iterating on increasingly accurate solutions. In this case, the process was halted after 58,252 libraries had been evaluated. The resulting library (model) was only 2 Kbytes in size. It’s important to note that, at this stage, the model is in its untrained form. Many different factors may affect the ways in which the machines run. For example, two seemingly identical vacuum pumps could be mounted in different locations – for example, one on a concrete slab and the other on a suspended floor. Or one of the machines could be located in a hot, humid environment, while the other may be in a cold, dry setting. Furthermore, one could be connected to long lengths of metal pipe, while the other could be attached to short lengths of plastic pipe. Thus, the next step is to incorporate the library into the applications running on the microcontrollers and sensors that are attached to machines that are deployed in the

real world. The AI/ML models on the different machines will then train themselves using good data from these real-world installations. Following this self-training period, the AI/ML models can be left to monitor the health of the machines, looking for anomalies and trends, and reporting their findings and predictions to human supervisors.

Conclusion

Predictive maintenance using AI/ML allows engineers to address problems before failures actually

of simple and complex sensing applications.

Creating the AI/ML application

Figure 7: NanoEdge AI Studio evaluates up to 500 million different AI/ML models to determine the optimal configuration for the normal and abnormal data. The initial models are rarely successful (top), but the tool automatically iterates on better and better solutions until the developer decides to call a halt (bottom). Image source: Max Maxfield
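The thresholding idea behind the scatter chart can be sketched as a toy classifier. This is illustrative only – it is not NanoEdge AI Studio's API – but it shows the decision being made for each signal's similarity score against the 90% threshold described above:

```python
# Toy sketch (not NanoEdge AI Studio's actual API): classify a signal's
# similarity score against the 90% threshold used in the example above.
THRESHOLD = 0.90

def classify(similarity):
    """Label a signal 'normal' if its similarity score meets the threshold."""
    return "normal" if similarity >= THRESHOLD else "abnormal"

print(classify(0.97))  # normal
print(classify(0.42))  # abnormal
```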

In order to create the AI/ML application, a free trial version of NanoEdge AI Studio was accessed from Cartesiam’s website (see also ‘Easily Bring Artificial Intelligence to Any Industrial System’). When NanoEdge AI Studio is launched, the user is invited to create and name a new project. The user is then queried as to the

Figure 5: The prototype circuit was implemented using a small breadboard and components from the author’s treasure chest of spare parts. Image source: Max Maxfield


Figure 1: An input speech signal is digitally processed to create a spectrogram used to train an NN to detect keywords. Image source: Arm

Machine learning (ML) has found its way into many areas of the Cloud and has been finding its way to the Edge on relatively powerful processors running Linux. The problem with traditional ML running on these systems is that their power profiles are too large for them to ‘disconnect’ and perform work as battery-operated Edge devices. The trend, and the future of ML at the Edge, is to use tinyML. TinyML aims to bring ML algorithms to resource-constrained devices, such as microcontrollers based on Arm Cortex-M processors. In this blog, we will explore the most popular use cases for leveraging tinyML on microcontroller-based devices for use at the Edge.

Use case #1: keyword spotting

The first use case that tinyML is becoming popular for is keyword spotting. Keyword spotting is the ability of a device to recognize a keyword like ‘Hey Siri’, ‘Alexa’, ‘Hello’, and so forth. Keyword spotting has many uses for edge devices. For example, one might want to use a low-power processor to watch for a keyword that will wake up a more powerful one. Another use case might be to control an embedded system or a robot. I’ve seen examples where a microcontroller was used to decode keywords like ‘forward’, ‘backward’, ‘stop’, ‘right’, and ‘left’ to control a robot’s movement.
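A keyword-to-command mapping like the robot example above can be sketched in a few lines. The keyword names and confidence threshold here are hypothetical, chosen only to illustrate the pattern of acting on confident detections and ignoring everything else:

```python
# Toy sketch (not from the article): map recognized keywords to robot
# motion commands, acting only on detections above a confidence threshold.
ACTIONS = {
    "forward":  (1, 0),
    "backward": (-1, 0),
    "left":     (0, -1),
    "right":    (0, 1),
    "stop":     (0, 0),
}

def handle_keyword(word, confidence, threshold=0.8):
    """Return a (speed, turn) command, or None for low-confidence or
    unrecognized words."""
    if confidence < threshold or word not in ACTIONS:
        return None
    return ACTIONS[word]

print(handle_keyword("forward", 0.95))  # (1, 0)
print(handle_keyword("forward", 0.50))  # None - ignored, not confident
```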

Keyword spotting with tinyML

3 uses for tinyML at the Edge


Written by Jacob Beningo



run off a battery and can even have the camera module swapped out. A good getting-started example that you may find interesting is how to use the CIFAR-10 dataset with the Arm CMSIS-NN library for image recognition. The example can be found on YouTube.

Use case #3: predictive maintenance

The last use case that we will discuss for tinyML is predictive maintenance. Predictive maintenance uses tools such as statistical analysis and ML to predict equipment state based on:
■ Abnormality detection
■ Classification algorithms
■ Predictive models

For example, a factory might have a series of motors, fans, and robotic equipment that are used to produce a product. A company would want to minimize downtime to maximize the number of products that it can produce. If the equipment has sensors whose outputs can be interpreted using ML and the other techniques mentioned above, the system can detect when the equipment is close to failure. Such a setup might look something like that shown in Figure 3. Connecting a smart sensor to a low-power microcontroller leveraging tinyML can result in a wide variety of useful applications. For example, HVAC units could be monitored, air filters checked, and irregular motor vibration could

be detected, among many others. Preventive maintenance can become more organized, hopefully saving a company from costly reactive measures and ensuring a more optimized maintenance schedule.

Conclusion

TinyML has so many potential applications and use cases at the Edge. We’ve explored what’s popular now, but the use cases are nearly unlimited. TinyML can be used for gesture detection, guidance and control, and so much more. As Edge devices start to leverage the capabilities of tinyML, the question really becomes, what are you using tinyML for at the Edge?

is typically done by using a microphone to capture an input speech signal. The speech signal is recorded as a voltage over time and then converted into a spectrogram using digital signal processing. The spectrogram shows how the frequency content of the input signal evolves over time. The spectrogram can be fed into a neural network (NN) to train the tinyML algorithm to recognize specific words. The process is shown in Figure 1. A typical implementation would feed fixed windows of speech into the NN. The network would then evaluate the probability of one of the desired keywords having been spoken. For example, if someone

Figure 2: The OpenMV camera module can be used for image recognition, and development can be done with a simple IDE using Python. Image source: Beningo Embedded Group

or nothing at your door. There are certainly plenty of other applications, ranging from monitoring old analog meters to detecting lawn health or even counting birds. Image recognition can seem like a complex field in which to get involved. However, there are several low-cost platforms available that can help developers get up and running. One of my favorites, and one that I use to get things done quickly, is the OpenMV. OpenMV is an open machine vision platform that includes an integrated development environment (IDE), a library framework written in Python, and a camera module from Seeed Technology that helps developers create their machine vision applications (Figure 2). The camera module is based on an STMicroelectronics STM32H7 Cortex-M7 processor. The hardware can be expanded through its onboard expansion headers. It can

said, ‘Yes’, the NN may report that it was 91% sure it was ‘Yes’, with a 2% chance it’s ‘No’, and a 1% chance it’s ‘On’.
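The two stages described above – turning windowed audio into a spectrogram, then turning NN output scores into per-keyword probabilities – can be sketched in plain Python. This is a minimal illustration, not a production pipeline; real keyword spotters typically use mel-scaled features and an FFT:

```python
# Minimal sketch of the two DSP/NN stages described above. The windowed
# DFT here stands in for a real FFT-based front end; the softmax turns
# raw NN scores into probabilities like the 91%/2%/1% example.
import cmath
import math

def spectrogram(signal, win_len, hop):
    """Slide a window across the signal and take the magnitude of a DFT of
    each frame: rows are time steps, columns are frequency bins."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len]
        frames.append([
            abs(sum(x * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n, x in enumerate(frame)))
            for k in range(win_len // 2 + 1)   # non-negative bins only
        ])
    return frames

def softmax(logits):
    """Turn raw NN output scores into probabilities that sum to 1."""
    m = max(logits)                            # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    return [e / sum(exps) for e in exps]

# A tone at 4 cycles per window should peak in bin 4 of every frame
sig = [math.sin(2 * math.pi * 4 * n / 32) for n in range(128)]
spec = spectrogram(sig, win_len=32, hop=16)
print(all(max(range(len(f)), key=f.__getitem__) == 4 for f in spec))  # True

# Hypothetical logits for the classes ('yes', 'no', 'on'); 'yes' dominates
probs = softmax([4.0, 0.3, -0.2])
print(round(sum(probs), 6))  # 1.0
```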

The ability to use speech to control machines is a use case that many device manufacturers are carefully reviewing, hoping to enhance their devices with it in the coming years.

Use case #2: image recognition

The second use case that tinyML is finding its way into is image recognition. There are quite a few use cases for Edge devices that can perform image recognition. One use case that you might already be familiar with is the ability to detect whether there is a person, package,

Figure 3: The third popular use case for tinyML is smart sensors that are used for predictive maintenance. Image source: STMicroelectronics


Why and how to get started with multicore microcontrollers for IoT devices at the Edge Written by Jacob Beningo

Developers of Internet of Things (IoT) devices at the Edge are being asked to incorporate an increasingly diverse and processing-intensive range of functions, from communications and sampling sensors to executing machine learning (ML) inferences. At the same time, developers are being asked to maintain or reduce power consumption. What’s needed is a more flexible architectural approach to a core element of their design – the microcontroller – that will allow developers to add features while achieving the optimal balance of performance, functionality, and power consumption. This architectural approach comes in the form of multicore microcontrollers. These have, as their name suggests, multiple processing cores built into a single package. However, just throwing more cores at the problem won’t solve the issues. Developers need to understand the differences between symmetric and asymmetric multicore processors, how to approach functional partitioning, and how to program them effectively. This article will introduce the concept of multicore microcontrollers before discussing how developers can leverage multicore microcontrollers to balance performance and



IoT developers are interested in multicore microcontrollers because they allow them to separate their application into multiple execution domains. Separate execution domains allow precise control of the application’s performance, features, and power needs. For example, one core may be used to interact with a user through a high-resolution display and touch panel, while the second core is used to manage the real-time requirements of the system, such as controlling a motor, driving relays, and sampling sensors. There are many ways that a developer can partition their application, but the two biggest paradigms are to separate the application into:
■ Feature rich/real-time
■ Real-time/secure

In the first paradigm, feature rich/real-time, the system is exactly like the one described in the paragraph above. Feature rich application components, such as the display, ML inferences, audio playback, and memory storage, among others, are all handled by one core. The second core then handles real-time functions such as motor control, sensing, and communication stacks (Figure 1). The second paradigm separates the application into real-time and secure functionality. In the first core, the application may handle things like the display, memory access, and real-time audio

multicore microcontrollers, it’s best to select a development board that has the following characteristics:
■ Includes an LCD for feature rich application exploration
■ Provides expansion I/O
■ Is low cost
■ Has a well-proven ecosystem behind it, including example code, community forums, and access to knowledgeable FAEs

Let’s look at several examples from STMicroelectronics, starting with the STM32H745I-DISCO (Figure 3). This board is based on the STM32H745ZIT6 dual core microcontroller that comprises an Arm Cortex-M7 core running at 480 megahertz (MHz) and a second Arm Cortex-M4 processor running at 240 MHz. The part includes a double-precision floating point unit and an L1 cache with 16 kilobytes (Kbytes) of data and 16 Kbytes of instruction cache. The discovery board is particularly interesting because it includes additional capabilities such as:
■ An SAI audio codec
■ A microelectromechanical systems (MEMS) microphone
■ On-board QUAD SPI flash
■ 4 gigabyte (Gbyte) eMMC
■ Daughterboard expansion
■ Ethernet
■ Headers for audio and headphones

start experimenting with multicore microcontrollers and really scale up an application. For developers who are looking for a development board that has additional capabilities and far more expansion I/O, the STM32H757I-EVAL may be a better fit (Figure 4). The STM32H757I-EVAL includes additional capabilities over the discovery board, such as:
■ 8 M x 32-bit SRAM
■ 1 Gbit twin quad SPI NOR flash
■ Embedded trace macrocell (ETM) for instruction tracing
■ Potentiometer
■ LEDs
■ Buttons (tamper, joystick, wake-up)

These extra capabilities, especially the I/O expansion, can be extremely useful to developers looking to get started.

Figure 1: One paradigm for application design with multicore microcontrollers is to place the feature rich application components in one core and the real-time components in the second core. Image source: STMicroelectronics

Figure 3: The STM32H745I-DISCO board integrates a wide range of on-board sensors and memory capabilities that allow developers to test out the dual core microcontrollers running at 480 MHz and 240 MHz. Image source: STMicroelectronics

playback. The second core, on the other hand, may do nothing more than act as a security processor. As such, the second core would handle storage of critical data like device and network keys, handle encryption, run the secure bootloader, and provide any other features deemed to fall within the secure software category (Figure 2).

energy constraints. Several multicore microcontrollers from STMicroelectronics’ STM32H7 line will be introduced by way of example. The article will also examine several use cases where developers can leverage multicore processing and split the workload between multiple cores.

Introduction to multicore microcontrollers

As mentioned, multicore microcontrollers have more than one processing core. There are

two types of configurations that are often used: symmetric and asymmetric processing. Symmetric core configurations contain two or more of the exact same processing cores. For example, they might both be Arm Cortex-M4 processors. Asymmetric cores, on the other hand, may contain an Arm Cortex-M7 processor and an Arm Cortex-M4 processor. They could also contain an Arm Cortex-M4 and an Arm Cortex-M0+ processor. The combinations are many and depend upon application and design requirements.

Having looked at several development boards, the next step is to outline some recommendations for getting started with a multicore microcontroller application.

There are other potential ways to parse up a multicore microcontroller’s application space, but these two paradigms seem to be the most popular among IoT developers.

Figure 4: The STM32H757I-EVAL board provides developers with lots of expansion space, easy access to peripherals, and an LCD screen to get started with multicore applications. Image source: STMicroelectronics

Selecting a multicore microcontroller development board

While multicore microcontrollers are becoming very popular, they are still not quite mainstream and selecting one can be tricky. For a developer looking to work with

The development board has a lot of built-in capabilities that make it extremely easy to

Figure 2: Another paradigm for application design with multicore microcontrollers is to place the real-time application components in one core and all the security components in a second core. Image source: STMicroelectronics



Figure 6: The STM32Cube_FW_H7 provides several examples that demonstrate how to configure an operating system with multicore processors. Image source: Beningo Embedded Group

board and then follow these ‘tips and tricks’ will find that they save quite a bit of time and grief

How to start that first multicore application

For developers of IoT systems at the network Edge, multicore microcontrollers provide the ability to better match and balance functionality, performance, and power consumption.

No matter which of the two STM32H7 development boards is selected, there are two main tools that are needed to get started. The first is STMicroelectronics’ STM32CubeIDE, a free integrated development environment (IDE) that lets developers compile their application code and deploy it to the development board. STM32CubeIDE also provides the resources necessary to step through and debug an application, and is available for major operating systems including Windows, Linux, and macOS. The second tool is STMicroelectronics’ STM32H7 firmware package. This includes examples for the STM32H7

how those capabilities can be leveraged by the application

Tips and tricks for working with multicore microcontrollers

■ Download the application examples for the STM32H7 processors and run the multicore application examples for the selected development board. The H747 includes two: one for FreeRTOS and one for OpenAMP
■ When debugging an application, don’t forget that there are now two cores running! Make sure to select the correct thread within the debug environment to examine its call history
■ Leverage internal hardware resources, such as a hardware semaphore, to synchronize application execution on the cores
■ Developers that start with a well-supported development

Getting started with multicore microcontrollers is not difficult, but it does require that developers start to think about their application’s design a bit differently. Here are a few ‘tips and tricks’ for getting started with multicore microcontrollers:
■ Carefully evaluate the application to determine which application domain separation makes the most sense. It is possible to mix domains on a single processor, but performance can be affected if not done carefully
■ Take the time to explore the capabilities that are built into the OpenAMP framework and

when working with multicore microcontrollers for the first time.

Specifically, there are two folders that developers will want to pay attention to. The first is the applications folder, which has two examples that show how to use OpenAMP (Figure 5). These examples show how to transmit data back and forth between the microcontroller cores: one core sends data to the other core, which then retransmits it back to the first core. The two examples perform this in different ways: one is baremetal, without an operating system, while the other uses FreeRTOS. The second set of examples demonstrates how to configure both cores with and without an RTOS (Figure 6). One example shows how to run FreeRTOS on each core, while the other shows how to use an RTOS on one core and run the second core baremetal. There are several other examples

throughout the firmware package that demonstrate other capabilities, but these are good choices to get started. Loading an example project will result in a developer seeing a project layout similar to that shown in Figure 7. As illustrated, the project is broken up into application code for each core. The build configuration can also be set up such that a developer is working with only one core at a time. This can be seen in Figure 7, through the grayed-out files. A full description of the example code is beyond the scope of this article, but the reader can examine the readme.txt file that is associated with any of the examples to get a detailed description of how it works, and then examine the source code to see how the inter-processor communication (IPC) is actually performed.
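The ping-pong exchange described above can be modeled in host-side Python using threads and queues standing in for the two cores and their mailboxes. This is a rough illustration of the pattern only, not the OpenAMP API or code that runs on the microcontroller:

```python
# Rough model (Python threads, not MCU code) of the OpenAMP 'ping-pong'
# pattern: core 0 sends a message to core 1, which echoes it back.
import queue
import threading

to_core1 = queue.Queue()  # stands in for the core0 -> core1 mailbox
to_core0 = queue.Queue()  # stands in for the core1 -> core0 mailbox

def core1_task():
    """Core 1 simply retransmits whatever it receives back to core 0."""
    msg = to_core1.get()
    to_core0.put(msg)

t = threading.Thread(target=core1_task)
t.start()
to_core1.put("ping")      # core 0 sends...
reply = to_core0.get()    # ...and blocks until core 1 echoes it back
t.join()
print(reply)  # ping
```

On the real hardware, the queues are replaced by shared-memory buffers and interrupt-driven mailboxes, with a hardware semaphore guarding the shared state – but the request/echo structure is the same.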

Conclusion For developers of IoT systems at the network Edge, multicore microcontrollers provide the ability to better match and balance functionality, performance, and power consumption per the application’s requirements. Such microcontrollers allow a developer to partition their application into domains such as feature rich/ real-time or real-time/secure processing. This ability to separate an application into different domains allows a developer to disable a core to conserve energy when the processing domain is no longer needed or turn it on in order to enhance application performance. As shown, there are several different development boards that can be used to start exploring multicore microcontroller application design and take full control over its performance and energy profile.

development boards for:
■ Multicore processing
■ Using FreeRTOS
■ Peripheral drivers
■ FatFS (file system)

Developers will want to download the firmware application package and become familiar with the examples that are supported by the chosen development board.

Figure 7: An example OpenAMP Ping-Pong project demonstrates to developers how to create a communication channel between the two CPU cores. Image source: Beningo Embedded Group

Figure 5: The STM32Cube_FW_H7 provides several examples that demonstrate how to get started with multicore processing using OpenAMP. Image source: Beningo Embedded Group


2.5 V (still considered logic HIGH for 3.3V pins). IMPORTANT: the SAMD51 GPIO pins are NOT 5V tolerant! Make sure you use a divider, diode, etc. to drop the voltage if you’re trying to sense something from 5V logic.

nodes. In the photo below, the black wire goes to ground, the green wire goes to the high side of the limiting resistor for the ‘toasting’ LED (i.e. 5 V during ‘toasting’ and 0 V otherwise), and the yellow wire goes to the ‘cancel’ button node opposite GND. Attach all of the sensors and the fan to the mounting plate. You’ll want to position the fan to blow air steadily over the sensors. Use the I2C hub to connect all of the sensors together, and use the long Grove cable to connect the hub to the Wio Terminal. You’ll also want long wires running from the Wio Terminal to the ammonia sensor (as it is an analog sensor, not I2C). You need the MOSFET in an open-drain configuration to successfully control the ‘cancel’ button. The Wio Terminal might support open-drain GPIO, but I was too lazy to dig through the SAMD51 datasheet to figure out how to do this in code. The voltage divider is needed to convert the 5 V ‘toasting’ node to

Data collection

The model I created worked in my environment. It may or may not work for you, which means you’ll likely need to collect data in your environment. Head to github.com/ShawnHymel/perfect-toast-machine to view all of the code for this project. Upload toast-odor-data-collection to the Wio Terminal. Read the comments in the code to determine which libraries you need to install before running it. Make sure the Wio Terminal is plugged into a computer for the data collection process. I recommend waiting 15-30 minutes to let the gas sensors warm up. Use Python (v3+) to run serial-data-collect-csv.py, which listens for serial data from the Wio Terminal and logs each toasting instance to a CSV file on your computer. See this readme to learn how to use serial-data-collect-csv.py. Start the toasting process with a piece of bread. Press button C (on the top of the Wio Terminal) to tag the data in one of three states: background (not toasting), toasting,

How to build an AI-powered toaster

We can treat the toasting process

Screw/bolt everything to the aluminum plate (or some other mounting device).

Written by Shawn Hymel. License: Attribution Arduino

Mechanical build Construct a cage or arm that suspends the collection of sensors above the toaster. The microcontroller (Wio Terminal) should not be placed with the sensors to avoid letting it get too hot.

■ Grove SGP30 VOC and eCO2 gas sensor
■ Grove BME680 temperature, pressure, and humidity sensor
■ Grove I2C Hub (6 port)
■ Grove cable (100 cm)
■ Ammonia gas sensor
■ Pololu Carrier for MQ Gas Sensors
■ Fan (40 mm, 5 V)
■ 2x 10 kΩ resistors
■ N-channel MOSFET
■ Mounting plate (e.g. a small piece of aluminum)
■ Various wire, screws, nuts, standoffs

Hardware connections

First, we need to hack the toaster. Open the toaster and find the circuit board that controls the toasting process. Use a multimeter to identify the following three nodes:
■ Ground (GND)
■ A node that becomes 3.3 or 5 V during the toasting process (for example, an LED that turns on when you press the lever down)
■ A node that connects to GND when the ‘cancel’ button is pressed

like a predictive maintenance problem: how do we stop the toasting before the bread in question becomes irrevocably damaged (i.e. burnt)? We’ll use a variety of gas sensors and machine learning to accomplish this task.

Required hardware

You will need the following components:
■ Wio Terminal
■ Grove Multichannel Gas Sensor v2

Tack-solder 3 wires to each of these



Machine learning model training

Clone this Edge Impulse project as a starting point: studio.edgeimpulse.com/public/129477/latest. If you wish to use your own data, delete all of the data in the project and upload the data from the unzipped out.zip file. Note that you should upload files in the training/ directory to training in Edge Impulse and files in testing/ to testing. Go to Raw data. Click Save parameters and then Generate features. When that’s done, go to Regression. Feel free to modify the model if you’d like. Click Start training and wait for training to finish. Go to Model testing and click Classify all. When that’s done, you should see some estimates. Expected outcome is the ground-truth label that expresses the number of seconds until the toast is burned, calculated from when the state label transitioned from ‘toasting’ to ‘burnt’ (0 being the moment we believe the toast went from ‘toast’ to ‘burnt toast’). The result is the output of our model given the test input data. For the most part, the model is capable of predicting ‘time till burnt’ to within a few seconds. Interestingly, the model seems to be more accurate the closer the prediction gets to 0.

or burnt. Use your senses (sight, smell) to determine when you think the toast is ‘burnt’. See the CSV files in the datasets/ directory to see how I labeled the raw data. You’re welcome to use that data as a starting point (I just can’t promise it will give you a model that works in your environment). Each folder names the brand and describes the bread used. The prefix of each CSV file is where I pulled the bread from (e.g. room temperature, refrigerator, freezer).

Data curation

Recommended reading

All code in this project can be found in this GitHub repository. The Edge Impulse project used for training the machine learning model is found here. Interestingly enough, creating the perfect toast is an exercise in predictive maintenance. Instead of ‘toast’, imagine we’re talking about an expensive piece of machinery. Is it possible to create a system that predicts when the machinery will fail (analogous to the toast burning) and notifies us before it does (e.g. stopping the toasting process just before burning)? Performing routine maintenance might help limit or prevent machine failure, but it is often done more than necessary, which increases equipment downtime. Predictive maintenance systems help limit this downtime by notifying us before a machine breaks so we can repair it at the appropriate time. You can read more about predictive maintenance in this blog post.

Run this notebook in Google Colab, following all of the directions: github.com/ShawnHymel/perfect-toast-machine/blob/main/ptm_dataset_curation.ipynb. Note that the script will automatically download the dataset from the GitHub repository. If you wish to use your own data, you can skip those cells and manually upload it to the dataset/ folder in Colab. At the end of Step 2, you can view plots of your captured raw gas samples (one at a time). Blue is ‘background’, green is ‘toasting’, and red is ‘burnt’. At the end of Step 3, you should see means, standard deviations, minimums, and ranges printed out. Copy those down – you’ll need the means and standard deviations for your inference code. Step 4 will produce an out.zip file containing your curated dataset. Download this file and unzip it.

Copy the means and standard deviations from the Colab script to the means and std_devs arrays, respectively. The first time you try the project, pay attention to the number displayed on the Wio Terminal: it shows the predicted number of seconds until the toast is burned. If your toast comes out too light, lower CANCEL_THRESHOLD in the code (i.e. wait until the toast is closer to being burned before popping it up). If the toast is too dark, increase CANCEL_THRESHOLD (i.e. stop the toasting process sooner). With a little tweaking, you should be able to make perfect toast! Try different types of bread, different thicknesses, different starting temperatures, etc. It should also work for two slices of bread and even bagels!

Deployment

Go to the Deployment page, select Arduino library, and click Build. When the library has been downloaded (do not unzip it!), open the Arduino IDE, click Sketch > Include Library > Add .ZIP Library…, and select the Arduino library that you just downloaded from Edge Impulse. Note the name of the library! If it is different from ei-perfect-toast-machine-arduino-x.x.x.zip, you will need to change the .h include file name in your inference code. Copy the perfect-toast-machine Arduino code found here to a new Arduino project. Read the comments at the beginning to see which libraries you need to install. Rename perfect-toast-machine_inferencing.h if you used a different project name in Edge Impulse. Upload the code to your Wio Terminal.


retroelectro

Programming a calculator to form concepts: the organizers of the Dartmouth Summer Research Project
Written by David Ray, Cyber City Circuits

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence

In the summer of 1956, a groundbreaking proposal came to fruition in what would become a milestone event in technological history: the Dartmouth Summer Research Project on Artificial Intelligence. This initiative, conceived by a group of visionary scientists, aimed to explore the nascent field of AI, which at the time was more a concept of science fiction than a tangible reality. The proposal was simple yet ambitious: assemble a group of mathematicians, logicians, and computer scientists for two months to delve into creating machines capable of simulating human intelligence. The goal was not just to mimic human thought but to surpass it, automating processes that until then had been the exclusive domain of the human mind. This project laid the groundwork for what we now recognize as AI/ML, influencing everything from the development of expert systems to the neural networks that power today’s AI applications. The Dartmouth conference became a beacon of innovation, igniting a revolution that would reshape technology, business, and everyday life in ways its creators could hardly envision.

Minsky. Finishing his doctorate, Dr. McCarthy worked as a junior professor at Princeton before joining Dartmouth College’s faculty in the summer of 1955. While at Dartmouth, McCarthy introduced the term ‘Artificial Intelligence’ to describe the scope of topics outlined in the 1956 Summer Research Project proposal. One of McCarthy’s more significant works was the development of the

Retro Electro fun fact: while enrolled at Caltech, McCarthy was suspended for not attending any Physical Education classes. He enlisted in the US Army in 1945, joining shortly before the war ended.

John McCarthy, Dartmouth College John McCarthy (1927-2011) is most famous for coining the term Artificial Intelligence. After completing his undergraduate degree at the California Institute of Technology (Caltech) in 1948, McCarthy pursued a PhD in mathematics from Princeton University. At the time, computers were just beginning to emerge as powerful tools for scientific and engineering tasks, and McCarthy saw their potential to model the human thought process. As part of his PhD program, he spent at least one summer working at Bell Labs. This is where he met Claude Shannon and Marvin

Message to the reader: this article complements a previous article about the proposal for the Dartmouth Summer Research Project on Artificial Intelligence. If you would like to learn more, please read ‘Programming a Calculator to Form Concepts: The Birth of Artificial Intelligence’.

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” – J. McCarthy

This is the story of those creators.



LISP programming language. Due to its flexible memory management and its ability to process symbolic expressions quickly, LISP became the language of choice for AI research and development. It introduced several pioneering concepts, including tree data structures, automatic storage management, and a self-hosting compiler. In 1959, he published a paper titled ‘Programs with Common Sense,’ in which he worked with Marvin Minsky and explained the need to find a way to teach common sense and natural law, aiming to equip AI systems with the everyday knowledge that humans take for granted. He later moved to Stanford University, where he stayed until retiring at the beginning of 2001. McCarthy was an early pioneer of time-sharing computing, which allowed multiple users to interact with a single computer simultaneously. This idea was instrumental in the development of the modern internet and cloud computing. For more on time-sharing, read the Retro Electro article on ‘The Aloha System: Task II’.

“(The) main reason the 1956 Dartmouth workshop did not live up to my expectations is that AI is harder than we thought.” – Marvin Minsky

laying the groundwork for the future of digital design. For formally bringing Boolean logic to electrical engineering, he was awarded the Alfred Noble Prize (not to be confused with the Nobel Prize) in 1939. If there were a proper beginning of the ‘digital age,’ this document would likely be it. Immediately following his master’s program, he started a PhD in mathematics at MIT, where he worked on describing genetics using algebra and Boolean operators. After school, Dr. Shannon took a position at Bell Laboratories, where he solved problems ranging from ‘color coding’ to encryption. This was during the beginning of the United States’ active involvement in World War II. While at Bell Labs, Shannon seemed to compulsively solve highly complex problems that others couldn’t; he was described as ‘finding answers to important questions nobody else was asking’. He was not ‘cleared’ to work in the area of encryption, but that did not stop him. In his spare time, he worked on the problems surrounding encryption and then explained his results to the engineers in

that department while having lunch in the cafeteria. It was later discovered that his work was instrumental in the encryption of communications used in the Manhattan Project and between Winston Churchill and Franklin Roosevelt. In 1952, Shannon built ‘Theseus,’ an electromechanical ‘mouse in a maze’ that could solve a maze automatically and in a very short time. It was made up of a couple of motors, several dozen relays, and a bar magnet dressed up like a mouse. It could navigate a customizable maze to a goal, and after it initially solved the maze, it could be lifted and placed anywhere; it would then move straight to the goal without any false moves. “The real significance of this mouse and maze lies in the four rather unusual operations it is able to perform. It has the ability to solve a problem by trial and error means, remember a solution and apply it when necessary at a later date, add new information to the solution already remembered, and forget one solution and learn a new one when the problem is changed.” – C.E. Shannon

Analyzer’, which allowed him to fund his master’s degree in electrical engineering. The Differential Analyzer could solve differential equations up to the sixth degree. Research scientists presented him with equations each day, and Shannon configured the machine to solve them. The machine used around a hundred relays to control its operations, and much of his time was spent returning it to working order and repairing malfunctions. He said he would think of new ways to design each circuit as he worked on it, and he found that the symbolic logic he had learned at the University of Michigan could describe what happens in a switching relay circuit. His master’s thesis, ‘A Symbolic Analysis of Relay and Switching Circuits,’ is one of the most important foundational works in computer science. In it, he shows how logic operators, like ‘and’ and ‘or,’ can be used to solve and simplify problems with the relays used in telephone switching systems,

Claude E. Shannon, Bell Labs Claude Elwood Shannon (1916-2001) was an electrical engineer and unicycle enthusiast. His father was an attorney and judge, while his mother was the principal of the local high school. As a child, he was a hobbyist mechanic who built model planes and a radio-controlled boat. He even built a small telegraph between his house and his childhood friend’s house. As a young man, he earned money repairing radios at the local store. An overachiever, he graduated from the University of Michigan in 1936 with bachelor’s degrees in mathematics and electrical engineering. Afterward, he found a position as a research assistant running MIT’s ‘Differential

Marvin Minsky, Harvard University

If McCarthy was the ‘Father of AI,’ then Minsky was the Architect. He grew up in New York City, attended the Bronx High School of Science, and was a Navy veteran. In an interview, Minsky explained that when he graduated from high school in 1944, the military draft was still active in support of World War II. To avoid being drafted into the Army, he enlisted in the Navy, where he was trained in electronics, radio, RADAR, and related fields. Concerning his time in the Navy, he recounts that he was in boot camp when Japan surrendered, which was a relief. The Bronx High School of Science taught many of the world’s visionaries, including Carl Sagan.

Retro Electro fun fact: legend has it that during his tenure at Bell Labs, Minsky invented the ‘Useless Machine’ novelty toy, which is now on office desks worldwide.

“That’s the story of my life, the interplay between mathematics and electrical engineering.” – C.E. Shannon


