How to Optimize Intralogistics to Streamline and Speed Industry 4.0 Supply Chains
A range of sensors, along with sensor fusion, AI, ML, and wireless connectivity, is needed to support autonomous operation and safety in AMRs. While the performance demands for AGVs are lower, they still rely on multiple sensors to support safe and efficient operation. There are two overarching categories of sensors:

- Proprioceptive sensors measure values internal to the robot, like wheel speed, loading, battery charge, and so on.
- Exteroceptive sensors provide information about the robot's environment, like distance measurements, landmark locations, and obstacle identification, such as people entering the robot's path.

Sensor fusion in AGVs and AMRs relies on combinations of proprioceptive and exteroceptive sensors. Examples of sensors in AMRs include (Figure 1):

- Laser scanner for object detection with a 20+ meter (m) range
- Two forward-looking 3D cameras with a 4 m range
- IMU with a 6-axis gyroscope and accelerometer, sometimes also including a magnetometer
- Encoders with millimeter (mm) resolution on the wheels
- Contact sensor, like a microswitch in the bumper, to immediately stop motion if an unexpected object is contacted
- Downward-looking sensor to detect the edge of a platform (called cliff detection)
- Communications modules that provide connectivity and can optionally offer Bluetooth angle of arrival (AoA) and angle of departure (AoD) sensing for real-time location services (RTLS), or use 5G transmission points/reception points (TRPs) to plot a grid with centimeter-level accuracy
- 2D LiDAR to calculate the proximity of obstacles ahead of the vehicle
- Wide-angle 3D depth vision system suitable for object identification and localization
- High-performance onboard compute processor for sensor fusion, AI, and ML
Figure 1: Exemplary AMR showing the diversity and positions of the embedded sensors. (Image Source: Qualcomm)

Robot pose and sensor fusion

AMR navigation is a complex process. One of the first steps is for the AMR to know where it is and what direction it's facing. That combination of data is called the robot's pose. The concept of pose can also be applied to the arms and end effectors of multi-axis stationary robots. Sensor fusion combines inputs from the IMU, encoders, and other sensors to determine the pose. The pose algorithm estimates the (x, y) position of the robot and the orientation angle θ with respect to the coordinate axes; the function q = (x, y, θ) defines the robot's pose. For AMRs, pose information has a variety of uses (a minimal sketch follows this list), including:

- The pose of an intruder, like a person entering close to the robot, relative to an external reference frame or relative to the robot
- The estimated pose of the robot after moving at a given velocity for a predetermined time
- Calculating the velocity profile needed for the robot to move from its current pose to a second pose
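To make the pose function concrete, the following Python sketch estimates the pose q = (x, y, θ) after the robot moves at a given velocity for a predetermined time, corresponding to the second use above. The unicycle motion model and all names here are illustrative assumptions, not taken from any specific robot platform:

import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Planar robot pose q = (x, y, theta)."""
    x: float      # meters
    y: float      # meters
    theta: float  # radians, measured from the x-axis

def propagate(pose, v, omega, dt):
    """Estimate the pose after moving at linear velocity v (m/s) and
    angular velocity omega (rad/s) for dt seconds (unicycle model)."""
    if abs(omega) < 1e-9:
        # Straight-line motion: simple dead reckoning
        return Pose(pose.x + v * dt * math.cos(pose.theta),
                    pose.y + v * dt * math.sin(pose.theta),
                    pose.theta)
    # Constant-curvature arc with radius r = v / omega
    r = v / omega
    new_theta = pose.theta + omega * dt
    return Pose(pose.x + r * (math.sin(new_theta) - math.sin(pose.theta)),
                pose.y - r * (math.cos(new_theta) - math.cos(pose.theta)),
                new_theta)

# Example: from the origin facing +x, drive at 0.5 m/s while turning
# at 0.1 rad/s for 2 seconds
print(propagate(Pose(0.0, 0.0, 0.0), v=0.5, omega=0.1, dt=2.0))

For straight-line motion the update reduces to dead reckoning; for a turning robot, the closed-form arc update avoids the drift a crude Euler step would introduce.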
Pose is a predefined function in several robot software development environments. For example, the robot_pose_ekf package is included in the Robot Operating System (ROS), an open-source development platform. The robot_pose_ekf package can be used to estimate the 3D pose of a robot based on (partial) pose measurements from various sensors. It uses an extended Kalman filter with a 6D model (3D position and 3D orientation) to combine measurements from the wheel-odometry encoders, a camera for visual odometry, and the IMU. Since the various sensors operate at different rates and with different latencies, robot_pose_ekf does not require all sensor data to be continuously or simultaneously available. Each sensor provides a pose estimate with a covariance; robot_pose_ekf identifies the sensor information available at any point in time and adjusts accordingly.
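Per the package's ROS wiki documentation, robot_pose_ekf publishes its fused estimate as a geometry_msgs/PoseWithCovarianceStamped message on the robot_pose_ekf/odom_combined topic. A minimal sketch of a ROS 1 node consuming that estimate might look as follows; the callback body is illustrative:

import math
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_pose(msg):
    # Planar pose q = (x, y, theta) from the fused 3D estimate
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    # Yaw angle from the quaternion (rotation about the z-axis)
    theta = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                       1.0 - 2.0 * (q.y * q.y + q.z * q.z))
    rospy.loginfo("pose: x=%.3f m, y=%.3f m, theta=%.3f rad", p.x, p.y, theta)

if __name__ == "__main__":
    rospy.init_node("pose_listener")
    rospy.Subscriber("robot_pose_ekf/odom_combined",
                     PoseWithCovarianceStamped, on_pose)
    rospy.spin()

The covariance field of the same message carries the filter's uncertainty, which downstream consumers such as motion planners can use to decide how much to trust the estimate.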
Sensor fusion and SLAM

Successful navigation relies on an AMR's ability to adapt to changing environmental conditions. Many environments where AMRs operate include variable obstacles that can move from time to time, so while a basic map of the facility is useful, more is needed. When moving around an industrial facility, AMRs need more than pose information; they also employ simultaneous localization and mapping (SLAM) to ensure efficient operation. SLAM adds real-time environment mapping to support navigation (a minimal mapping sketch follows this list). The two basic approaches to SLAM are:

- Visual SLAM, which pairs a camera with an IMU
- LiDAR SLAM, which combines a laser sensor, like 2D or 3D LiDAR, with an IMU
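The mapping half of SLAM can be illustrated with an occupancy grid: cells that LiDAR beams pass through accumulate evidence of being free, and cells where beams terminate accumulate evidence of being occupied. The sketch below is a simplified illustration with assumed names and parameters; it takes the robot pose as given, whereas real SLAM estimates the pose and the map simultaneously:

import math
import numpy as np

GRID = 400   # cells per side
RES = 0.05   # meters per cell (5 cm)
# Log-odds occupancy grid: 0 = unknown, > 0 = occupied, < 0 = free
grid = np.zeros((GRID, GRID))

def mark(x, y, delta):
    """Add log-odds evidence at world point (x, y); origin at grid center."""
    cx, cy = int(GRID / 2 + x / RES), int(GRID / 2 + y / RES)
    if 0 <= cx < GRID and 0 <= cy < GRID:
        grid[cx, cy] += delta

def update(px, py, ptheta, angles, ranges, max_range=20.0):
    """Fold one 2D LiDAR scan into the grid for a known pose (px, py, ptheta)."""
    for a, r in zip(angles, ranges):
        hit = r < max_range          # beams at max range hit nothing
        r = min(r, max_range)
        heading = ptheta + a
        # Cells traversed by the beam are evidence of free space
        for s in range(int(r / RES)):
            mark(px + s * RES * math.cos(heading),
                 py + s * RES * math.sin(heading), -0.4)
        if hit:
            # The cell at the beam endpoint is evidence of an obstacle
            mark(px + r * math.cos(heading),
                 py + r * math.sin(heading), +0.9)

# Example: one 3-beam scan taken from the origin, facing +x
update(0.0, 0.0, 0.0, angles=[-0.5, 0.0, 0.5], ranges=[3.2, 1.8, 20.0])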
LiDAR SLAM can be more accurate than visual SLAM, but it is generally more expensive to implement. Alternatively, 5G can be used to provide localization information that enhances visual SLAM estimates; private 5G networks in warehouses and factories can augment the SLAM sensors embedded in the robot. Some AMRs implement precise indoor positioning using 5G transmission points/reception points (TRPs) to plot a grid with centimeter-level accuracy on the x-, y-, and z-axes.
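Positioning against a grid of TRPs can be framed as multilateration: given range measurements from the robot to several TRPs at known positions, solve for the robot's coordinates. The sketch below uses idealized ranges and assumed TRP placements; real 5G positioning works from standardized time-of-arrival and angle measurements rather than directly supplied distances:

import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 3D position from ranges to anchor points at known positions.

    Linearizes the sphere equations |x - a_i|^2 = d_i^2 by subtracting the
    first equation, then solves the linear system in a least-squares sense.
    Needs at least four non-coplanar anchors.
    """
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: four ceiling-mounted TRPs at known (x, y, z) positions in meters
trps = np.array([[0.0, 0.0, 6.0],
                 [30.0, 0.0, 6.0],
                 [0.0, 20.0, 6.0],
                 [30.0, 20.0, 6.5]])
true_pos = np.array([12.0, 7.5, 0.4])            # robot antenna position
dists = np.linalg.norm(trps - true_pos, axis=1)  # idealized range measurements
print(multilaterate(trps, dists))                # ~ [12.0, 7.5, 0.4]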