DigiKey-emag-Sensors-Vol-7

The sample application (dnn.py) shows each step of the process required to acquire data and prepare it for classification by the inference model (Listing 1). Here, the process begins by using OpenCV's DNN method cv.dnn.readNetFromCaffe to read the network and associated weights for an existing inference model. In this case, the model is a Caffe implementation of the Google MobileNet Single Shot Detector (SSD) detection network, known for achieving high accuracy with relatively small model sizes. After loading the model's supported class identifiers and class labels, the sample application identifies the available cameras and executes a series of initialization routines (not shown in Listing 1).

The bulk of the sample code deals with preparing the depth map (depth_map) and IR map (ir_map) before combining them with cv.addWeighted into a single array to enhance accuracy. Finally, the code calls another OpenCV DNN method, cv.dnn.blobFromImage, which converts the combined image into the four-dimensional blob data type required for inference. The next line of code sets the resulting blob as the input to the inference model (net.setInput(blob)), and the call to net.forward() invokes the inference model, which returns the classification results.

Listing 1: This snippet from a sample application in the Analog Devices 3D ToF SDK distribution demonstrates the few steps required to acquire depth and IR images and classify them with an inference model. (Code source: Analog Devices)
