Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene Applications

High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene... applied sciences Article High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene Applications 1 1 , 2 1 1 Kui Xiao , Wentao Yu *, Weirong Liu , Feng Qu and Zhenyan Ma College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China; 20191100304@csuft.edu.cn (K.X.); qufeng0817@csuft.edu.cn (F.Q.); t20070552@csuft.edu.cn (Z.M.) School of Computer Science and Engineering, Central South University, Changsha 410083, China; frat@csu.edu.cn * Correspondence: wentaoyu@csuft.edu.cn; Tel.: +86-0731-134-6758-9376 Featured Application: This paper fuses different sensors to form a general high-precision SLAM framework for multi-scene applications. The algorithm framework in this paper can be extended to the fields of autonomous driving, robot navigation, and 3D reconstruction. Abstract: Simultaneous Localization and Mapping (SLAM) is an essential feature in many applica- tions of mobile vehicles. To solve the problem of poor positioning accuracy, single use of mapping scene, and unclear structural characteristics in indoor and outdoor SLAM, a new framework of tight coupling of dual lidar inertial odometry is proposed in this paper. Firstly, through external calibration and an adaptive timestamp synchronization algorithm, the horizontal and vertical lidar data are fused, which compensates for the narrow vertical field of view (FOV) of the lidar and makes the characteristics of vertical direction more complete in the mapping process. Secondly, the dual lidar data is tightly coupled with an Inertial Measurement Unit (IMU) to eliminate the motion distortion of Citation: Xiao, K.; Yu, W.; Liu, W.; the dual lidar odometry. Then, the value of the lidar odometry after correcting distortion and the Qu, F.; Ma, Z. High-Precision SLAM pre-integrated value of IMU are used as constraints to establish a non-linear least-squares objective Based on the Tight Coupling of Dual Lidar Inertial Odometry for function. Joint optimization is then performed to obtain the best value of the IMU state values, Multi-Scene Applications. Appl. Sci. which will be used to predict the state of IMU at the next time step. Finally, experimental results are 2022, 12, 939. https://doi.org/ presented to verify the effectiveness of the proposed method. 10.3390/app12030939 Keywords: simultaneous localization and mapping; dual lidar inertial odometry; IMU; time Academic Editors: António synchronization; tight coupling Paulo Moreira, Pedro Neto and Félix Vilariño Received: 30 November 2021 Accepted: 13 January 2022 1. Introduction Published: 18 January 2022 Simultaneous localization and mapping (SLAM) require building a map of an un- Publisher’s Note: MDPI stays neutral known environment by a mobile vehicle and simultaneously localizing the vehicle in such a with regard to jurisdictional claims in map [1–3]. SLAM is essential for vehicles to fulfill many tasks, including vehicle rescue [4] published maps and institutional affil- and exploration [5]. The perception of the unknown external environment by various iations. onboard sensors provides vital information for SLAM. Thus, integrating different sensors to develop a practical SLAM framework that can be applied in multiple scenes is essential. Generally, both vision-based and lidar-based SLAM [6–10] are used. Although the vision-based SLAM can obtain high-precision positioning [11–13], the vision sensors are Copyright: © 2022 by the authors. vulnerable to light change in the environment and cannot work in dark or untextured Licensee MDPI, Basel, Switzerland. scenes. In comparison, lidar is not affected by light and can usually measure the angle and This article is an open access article distance of obstacles with higher accuracy. Therefore, this paper focuses on the design of distributed under the terms and lidar-based SLAM to adapt the multi-scene applications. conditions of the Creative Commons In recent years, many different lidar-based SLAM schemes have been proposed. Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ Among them, the Lidar Odometry and Mapping in Real-time (LOAM) method, where a 4.0/). single-line lidar and a motor are used to form a multiline lidar to realize low-drift and Appl. Sci. 2022, 12, 939. https://doi.org/10.3390/app12030939 https://www.mdpi.com/journal/applsci Appl. Sci. 2022, 12, 939 2 of 18 low-calculation real-time positioning and mapping, has been studied extensively [14]. In the LOAM method, SLAM is divided into two parts: lidar odometry and lidar mapping. In the lidar odometry part, to reduce the computation, the plane smoothness of the lidar point cloud, which is utilized to distinguish the edge points, is calculated according to the curvature and then invalid point clouds are discarded to improve the feature of point cloud. After the point cloud information is obtained through feature extraction, a scan-to-scan method [15] is proposed to realize feature matching between frames, including edge points and planar points matching. To eliminate the motion distortion of lidar, linear interpolation is utilized. After the distortion is eliminated, the pose transformation of the lidar point cloud data of two adjacent frames is acquired to obtain the lidar odometry. Then, after accumulating a certain number of point cloud data, lidar mapping is performed through a map-to-map matching method [16]. Although LOAM can create a high-precision point cloud map, it cannot remove moving people or objects during feature extraction. Further- more, in an environment with fewer feature points, it is easy for the lidar odometry to fail, resulting in positioning and mapping with worse accuracy. To solve the above problems, a Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain (LeGO-LOAM) [17] is proposed. Its core comprises four modules: point cloud segmentation and denoising, feature extraction, lidar odometry, and lidar mapping. Firstly, the point cloud segmentation technology [18] solves the defects of moving people or objects in LOAM mapping and filters out noise. Then, feature extraction is applied to obtain planar and edge features from the segmented point cloud. The lidar odometry module uses the features to find out the optimal pose transform between two consecutive scans with the help of a two-step Levenberg–Marquardt (LM) optimization method [19]. The extracted features are further processed by a scan-to-map [20] matching method to obtain a global point cloud map in lidar mapping. LeGO-LOAM optimizes LOAM to a certain extent, but in large scenes (e.g., long corridors) or environments with few feature points, the low frequency of lidar and less characteristic information can lead to significant errors in positioning and mapping. Some recent research has used low-cost IMU [21–23] to assist lidar for SLAM. The simplest way to integrate the IMU with lidar is loose coupling [24], in which IMU is regarded as an independent module to assist the lidar. The authors of [25] loosely couple IMU and optional GPS measurement with lidar through the Extended Kalman Filter (EKF) to improve computational efficiency and accuracy. However, the IMU error will continue to accumulate in the case of long distances, so the localization error is still increasing in such cases. To solve this problem, IMU and lidar are generally coupled via a tight coupling method that usually offers improved accuracy. A tightly coupled lidar inertial odometry and mapping framework (LIO-Mapping) is introduced in the literature [26]. It comprises the state optimization for the odometry and the refinement with rotational constraints. Results showed that this method outperformed the state-of-the-art lidar-only and loosely coupled methods. Since LIO-Mapping is designed to process all sensor measurements, real-time performance is not achieved. The authors of [27] proposed a tightly coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM) based on LeGO-LOAM. LIO-SAM performs highly accurate, real-time mobile vehicle trajectory estimation and map building, suitable for multi-sensor fusion and global optimization. Although the tight coupling of lidar and IMU can improve the localization accuracy, lidar sensors have certain drawbacks. Conventional lidar can only be used to scan the environment in a narrow range of vertical angles and the point cloud feature information obtained is limited. For example, in an indoor corridor or staircase environment, lidar can receive the point cloud information of the sidewall, but only a tiny part of the point cloud information is obtained from the floor and ceiling. In such cases, the matched lidar features can easily cause ill-constrained pose estimation and incomplete mapping. Aiming to broaden the FOV of lidar, lidars can be actuated in a number of ways. The periodic nodding and continuous rotation can be adopted and the resulting configurations can potentially enlarge the FOV. While the implementation of this method depends on Appl. Sci. 2022, 12, 939 3 of 18 a complex mechanical structure, the localization accuracy is relatively low [28,29]. More recently, a new design for a 3D sensor system was introduced [30], involving construction from a 2D range scanner coupled with a passive linkage mechanism, such as a spring. However, the system requires reasonably continual excitation to induce motion of the sen- sor head. Therefore, the system is not considered appropriate for electric ground vehicles operating with infrequent accelerations on smooth terrain. In order to solve the above problems, some researchers have started investigating the use of multiple 3D lidars for better coverage of the environment. In [31], a high-precision lidar odometry system was proposed to achieve robust and real-time operation under challenging perceptual condi- tions. In this system, the two lidars are mounted at 180 degrees to each other to make up for the self-similar areas with low lidar observability. The authors of [32] proposed a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple lidars, while [33] proposed a scheme to combine multiple lidars with complementary FOV for feature-based lidar-inertia odometry and mapping. While the above methods can work well in single-scene applications, their adaptability to different environments was not considered, especially the environments that need to perceive height information. This paper proposes a high-precision SLAM framework based on tight coupling of dual lidar inertial to overcome the above problems. In this framework, the horizontal lidar and the vertical lidar are integrated to broaden the FOV. Then the dual lidar and IMU are tightly coupled to improve the localization accuracy. In the outdoor environment, this paper introduces GPS measurements to further improve positioning accuracy. The main contributions of this paper include: 1. A general high-precision SLAM framework is provided by fusing different sensors. It can adapt to multi-scene applications, such as a corridor with fewer features, stairs with height, and complex outdoor environments. 2. The horizontal and vertical lidars are fused by external calibration and adaptive time synchronization algorithms to solve the narrow vertical FOV of the lidar. 3. To improve the positioning accuracy of SLAM in the environment with height infor- mation (e.g., stairs), the dual lidar odometry and IMU are tightly coupled. The dual lidar odometry measurement and IMU pre-integration are jointly optimized to obtain more accurate IMU state values, which will be used to eliminate the motion distortion of the dual lidar to improve the localization accuracy. 4. In addition, several practical experiments are carried out to verify the feasibility and effectiveness of the proposed method. 2. Prerequisites 2.1. IMU State Prediction and Pre-Integration Usually, the frequency of IMU is much higher than lidar. IMU can obtain the accelera- tion and angular velocity at each time step, which can be used to predict the next state of IMU through integration. In general, the IMU states at time t can be modeled as: X = [p , v , R , b ] (1) t t t where X represents the state in the world frame W at time t; p , v , and R represent t t the position, velocity, and rotation matrix, respectively, at time t; and a ˆ and w ˆ are the t t acceleration and angular velocity of the raw IMU measurements (a ˆ and w ˆ are affected by t t a slowly varying bias, b ). The next state, X , is predicted through the integration of IMU, where Dt is the t+Dt interval between two consecutive IMU measurements. The state prediction value of the Appl. Sci. 2022, 12, 939 4 of 18 IMU is then used to infer the motion of the vehicle. The velocity, position, and rotation of the vehicle at time t + Dt can be computed by Equation (2) [27]: a a v = v + gDt + R (a ˆ b n )Dt t+Dt t t t t t 1 2 1 a a 2 p = p + v Dt + gDt + R (a ˆ b n )Dt t t t t t+Dt t 2 2 t (2) 1 w w Dt w ˆ b n Dt 2 Dt Dt q = q t+Dt t where g is gravitational acceleration; b is the varying bias of the acceleration a ˆ ; and n is t t the white noise of the acceleration a ˆ . The quaternion q under Hamilton notation (which corresponds to R ) is used; is used for the multiplication of two quaternions; b is the varying bias of the angular velocity w ˆ ; and n is the white noise of the angular velocity w ˆ . t t The motion state of IMU between the i-th and the j-th time steps can be represented by the IMU pre-integrated measurement value Dp , Dv , Dq , which can be computed by: ij ij ij Dv = R v v gDt ij j i ij T 1 2 Dp = R p p v Dt gDt i ij ij i j i 2 ij (3) j1 1 w w Dt w ˆ b n 1 ij k 2 k k Dq = q q = ij j k=i where Dp , Dv , Dq is the position, velocity, and rotation matrix of the IMU, respectively, ij ij ij which has the covariance C in the error-state model. Due to space limitations, we invite readers to refer to the descriptions in [26,34] to understand the detailed derivation of IMU pre-integration. 2.2. Segmentation and Feature Extraction Before extracting features, to filter out the noise, the point cloud is segmented to reduce noise interference. D = fd , d , . . . , d g is the point cloud information acquired at time t; t 1 2 n d is a point in D ; D is projected onto a range image; the point d represents a pixel of the t t t t image; and the pixel value r represents the Euclidean distance from the point d to the t t lidar. An image-based segmentation method [17] is applied to the range image to group points into many clusters. Points of the same category have the same label (the ground is a particular category). Features are then extracted from ground points and segmented points, with the corresponding edge points and plane points obtained by calculating the curvature E of each point. To evenly extract features from all directions, the range image is divided horizontally into several equal sub-images. Then, the points are sorted in each row of the sub-image based on their curvature values, E. The curvature E is computed by Equation (4). E = k (r r )k (4) å l t jSjkr k l2S,l6=t where S is the set of continuous points d from the same row of the range image; l is a point in the set S; E is a threshold which is set to distinguish different types of features; th the points are called with E > E edge features; and the points are called with E < E th th planar features. 3. Multi-Sensor Fusion 3.1. Hardware System Description The configuration of our system is shown in Figure 1, with the left model obtained through Unigraphics NX and the designed equipment in our paper shown on the right. The designed equipment is a scanning arm composed of various sensors that can be installed on a backpack or a vehicle. The sensor suite used in the scanning arm includes two 10 HZ Velodyne VLP-16 lidars, a 200 HZ YIS510A IMU, and a 5 HZ HY-269 GPS. The panoramic Appl. Sci. 2022, 12, x FOR PEER REVIEW 5 of 19 where S is the set of continuous points d from the same row of the range image; λ is a point in the set S ; E is a threshold which is set to distinguish different types of th features; the points are called with E > E edge features; and the points are called with th E < E planar features. th 3. Multi-Sensor Fusion 3.1. Hardware System Description The configuration of our system is shown in Figure 1, with the left model obtained through Unigraphics NX and the designed equipment in our paper shown on the right. The designed equipment is a scanning arm composed of various sensors that can be in- Appl. Sci. 2022, 12, 939 5 of 18 stalled on a backpack or a vehicle. The sensor suite used in the scanning arm includes two 10 HZ Velodyne VLP-16 lidars, a 200 HZ YIS510A IMU, and a 5 HZ HY-269 GPS. The panoramic camera prepares for the follow-up research. The lidar has a hor  izont al FOV of camera prepares for the follow-up research. The lidar has a horizontal FOV of 360 with 0.4 360° with 0.4° resolution, a vertical FOV 15° with 2° resolution, and an accuracy of ±3 cm. resolution, a vertical FOV 15 with 2 resolution, and an accuracy of3 cm. The Roll/Pitch of Th IMU e Ro accuracy ll/Pitch o is 0.25 f IMU , HY ac -269 cura GPS cy is is 0. a small-volume 25°, HY-269 G positioning PS is a sma andldir l-vo ectional lume pos receiver ition , ing and and the positioning accuracy of GPS is 1.2 m. directional receiver, and the positioning accuracy of GPS is 1.2 m. Figure 1. A CAD (Computer Aided Design) model of the scanning arm is shown on the left, with a Figure 1. A CAD (Computer Aided Design) model of the scanning arm is shown on the left, with a photograph of the scanning arm shown on the right. photograph of the scanning arm shown on the right. 3.2. System Overview 3.2. System Overview In this paper, horizontal and vertical lidars are fused to make up for the shortcomings In this paper, horizontal and vertical lidars are fused to make up for the shortcomings of lidar ’s narrow FOV. When indoors, accurate positioning information is obtained through of lidar’s narrow FOV. When indoors, accurate positioning information is obtained the tight coupling of dual lidar inertial odometry. When outdoors, GPS measurements through the tight coupling of dual lidar inertial odometry. When outdoors, GPS measure- are added, providing a relatively accurate initial position for positioning, thereby further ments are added, providing a relatively accurate initial position for positioning, thereby improving outdoor positioning accuracy. Figure 2 provides a brief overview of our proposed framework. Firstly, after obtaining further improving outdoor positioning accuracy. the IMU data, the IMU state prediction and pre-integration are updated, as detailed in Figure 2 provides a brief overview of our proposed framework. Firstly, after obtain- Section 2.1. IMU pre-integration is used to construct a factor graph constraint, which will ing the IMU data, the IMU state prediction and pre-integration are updated, as detailed be used for joint optimization. Secondly, the dual lidar point clouds are obtained through in Section 2.1. IMU pre-integration is used to construct a factor graph constraint, which external calibration and an adaptive timestamp synchronization algorithm, as detailed in will be used for joint optimization. Secondly, the dual lidar point clouds are obtained Section 3.3. Since lidar has continuous measurement, the lidar measurement is obtained in through external calibration and an adaptive timestamp synchronization algorithm, as continuous motion and it is almost certain that motion distortion will occur. When dual lidar detailed in Section 3.3. Since lidar has continuous measurement, the lidar measurement is point cloud is received, deskewing is applied on the dual lidar raw point cloud to obtain obtained in continuous motion and it is almost certain that motion distortion will occur. deskewed dual lidar point cloud [14]. To be ground-optimized and reduce the amount of calculation, point cloud segmentation is applied to filter out noise (e.g., moving people or When dual lidar point cloud is received, deskewing is applied on the dual lidar raw point objects) and the planar points and edge points are distinguished by calculating the curvature cloud to obtain deskewed dual lidar point cloud [14]. To be ground-optimized and reduce of the lidar point, as detailed in Section 2.2. After feature extraction, the dual lidar point the amount of calculation, point cloud segmentation is applied to filter out noise (e.g., cloud is registered to construct the lidar odometer factor. Then, the sliding window factor moving people or objects) and the planar points and edge points are distinguished by graph optimization method of local finite frame is used, with the IMU pre-integration factor and lidar odometer factor jointly optimized to realize the mutual parameter optimization of the dual lidar and IMU. Joint optimization is taken to obtain a maximum a posteriori (MAP) estimation of the IMU states, as detailed in Section 3.4, avoiding the drift from IMU propagation. To further improve outdoor positioning accuracy, GPS is introduced in this framework. When GPS is loosely coupled with IMU through an Unscented Kalman Filter (UKF), GPS measurements are used to further improve the positioning accuracy of IMU, with the optimized IMU state then used for the state estimation of IMU at the next time step. GPS measurements are added, providing a relatively accurate initial position for localization, thereby further improving outdoor localization accuracy [25]. However, the Appl. Sci. 2022, 12, x FOR PEER REVIEW 6 of 19 calculating the curvature of the lidar point, as detailed in Section 2.2. After feature extrac- tion, the dual lidar point cloud is registered to construct the lidar odometer factor. Then, the sliding window factor graph optimization method of local finite frame is used, with the IMU pre-integration factor and lidar odometer factor jointly optimized to realize the mutual parameter optimization of the dual lidar and IMU. Joint optimization is taken to obtain a maximum a posteriori (MAP) estimation of the IMU states, as detailed in Section 3.4, avoiding the drift from IMU propagation. To further improve outdoor positioning accuracy, GPS is introduced in this framework. When GPS is loosely coupled with IMU through an Unscented Kalman Filter (UKF), GPS measurements are used to further im- prove the positioning accuracy of IMU, with the optimized IMU state then used for the state estimation of IMU at the next time step. GPS measurements are added, providing a Appl. Sci. 2022, 12, 939 6 of 18 relatively accurate initial position for localization, thereby further improving outdoor lo- calization accuracy [25]. However, the drift of lidar inertial odometry grows very slowly; in practice, GPS measurements are used when the estimated position covariance is larger drift of lidar inertial odometry grows very slowly; in practice, GPS measurements are used than the received GPS position covariance. Finally, the output of the figure is the global when the estimated position covariance is larger than the received GPS position covariance. map and localization. Finally, the output of the figure is the global map and localization. Figure Figure 2. 2. An An o overview verview o of our f our prproposed fram oposed framework. ework. 3.3. Fusion of Horizontal Lidar and Vertical Lidar 3.3. Fusion of Horizontal Lidar and Vertical Lidar To accurately fuse the horizontal and vertical lidar data, their individual coordinate To accurately fuse the horizontal and vertical lidar data, their individual coordinate systems must be transformed into a unified coordinate system, which is termed “frame systems must be transformed into a unified coordinate system, which is termed “frame coordinate system”. The horizontal lidar coordinate system is used as the frame coordinate coordinate system”. The horizontal lidar coordinate system is used as the frame coordi- system; the vertical lidar coordinate system is registered in this frame coordinate system nate system; the vertical lidar coordinate system is registered in this frame coordinate sys- through external parameter calibration, with the external parameters obtained by the CAD tem through external parameter calibration, with the external parameters obtained by the model of the scanning arm. This paper adopts the adaptive time synchronization algorithm CAD model of the scanning arm. This paper adopts the adaptive time synchronization to ensure that the timestamps of the horizontal and vertical lidars can be simultaneously algorithm to ensure that the timestamps of the horizontal and vertical lidars can be sim- output. The fusion process of dual lidar is divided into two parts: (1) external parameter ultaneously output. The fusion process of dual lidar is divided into two parts: (1) external calibration of the horizontal and vertical lidars and (2) the adaptive time synchronization algorithm. Both parts are described as follows. parameter calibration of the horizontal and vertical lidars and (2) the adaptive time syn- chronization algorithm. Both parts are described as follows. 3.3.1. External Parameter Calibration of Horizontal Lidar and Vertical Lidar Assuming that L is the horizontal lidar coordinate system and L is the vertical lidar 3.3.1. External Param 1 eter Calibration of Horizontal Lidar and Vert 2ical Lidar coordinate system, the horizontal lidar coordinate system is used as the frame coordinate Assuming that L is the horizontal lidar coordinate system and L is the vertical li- 1 2 system and the vertical lidar coordinate system L is registered in this frame coordinate L L dar coordinate system, the horizontal lidar coordinate system is used as the frame coordi- 1 1 system L through the rotation matrix R and translation matrix H , which can be L L 2 2 computed by Equation (5): L L 1 1 L = R  L + H (5) 1 2 L L 2 2 L L 1 1 where R is the rotation matrix and H is the translation matrix which can be obtained by L L 2 2 external parameters. From the hardware structure of the scanning arm, the vertical lidar coordinate system first rotates around the y-axis and then translates to the horizontal lidar coordinate system, where 2 3 cos q 0 sin q 4 5 R = 0 1 0 (6) sin q 0 cos q Appl. Sci. 2022, 12, x FOR PEER REVIEW 7 of 19 nate system and the vertical lidar coordinate system L is registered in this frame coor- L L 1 1 dinate system L through the rotation matrix  and translation matrix H , which L L 2 2 can be computed by Equation (5): LL LL =×  +H (5) Appl. Sci. 2022, 12, x FOR PEER REVIEW 12 7 of 19 LL L L 1 1 where  is the rotation matrix and H is the translation matrix which can be ob- L L 2 2 nate system and the vertical lidar coordinate system L is registered in this frame coor- tained by external parameters. From the hardware structure of the scanning arm, the ver- L L 1 1 dinate system L through the rotation matrix  and translation matrix H , which L L 2 2 tical lidar coordinate system first rotates around the y -axis and then translates to the hor- can be computed by Equation (5): izontal lidar coordinate system, LL LL =×  +H (5) LL where 22 L L 1 1 where  is the rotation matrix and H is the translation matrix which can be ob- L L cosθθ 0 sin  2 2 L tained by external parameters. From the hardware structure of the scanning arm, the ver-  = 01 0 (6)  tical lidar coordinate system first rotates around the y -axis and then translates to the hor-  −sinθθ 0 cos izontal lidar coordinate system,  where Appl. Sci. 2022, 12, 939 7 of 18 where the angle of the rotation matrix θ =−77.94 and the translation matrix cosθθ 0 sin  1  Hx =(,0,z) , with  = and01 0 . 1 xm =−0.177 zm =−0 .213 (6) wher Le the angle of the rotation matrixLq = 77.94 and the translation matrix H = (x, 0, z), 2 2 L  −sinθθ 0 cos with x = 0.177 m and z = 0.213 m.  The point cloud of the lidar after external parameter calibration in the frame coordi- The point cloud of the lidar after external parameter calibration in the frame coordinate nate system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars where the angle of the rotation matrix θ =−77.94 and the translation matrix system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars not af notHx fecting af =(, fect each0,z ing) other , with each ot . her. and . xm =−0.177 zm =−0.213 The point cloud of the lidar after external parameter calibration in the frame coordi- nate system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars not affecting each other. Figure 3. Dual lidar point clouds’ information after external parameter calibration. Figure 3. Dual lidar point clouds’ information after external parameter calibration. Figure 3. Dual lidar point clouds’ information after external parameter calibration. 3.3.2. The Adaptive Time Synchronization Algorithm 3.3.2. The Adaptive Time Synchronization Algorithm 3.3.2. The Adaptive Time Synchronization Algorithm In this paper, the lidar time synchronization is achieved through hardware time syn- In this paper, the lidar time synchronization is achieved through hardware time syn- chronization and GNSS is used as the master clock. However, there is a time offset between chronization and In this paGNSS is used as per, the lidar ti the master me synchroniz clock. However, th ation is aere is chiev a time ed thoffset be rough ha - rdware time syn- the two lidars. As the time offset is constant, an adaptive timestamp synchronization tween the two lidars. As the time offset is constant, an adaptive timestamp synchroniza- chronization and GNSS is used as the master clock. However, there is a time offset be- algorithm is used to solve it. The adaptive time synchronization algorithm is used to tion algorithm is used to solve it. The adaptive time synchronization algorithm is used to tween the two lidars. As the time offset is constant, an adaptive timestamp synchroniza- match its timestamp and realize the simultaneous localization and mapping of the lidars. match its timestamp and realize the simultaneous localization and mapping of the lidars. The tion a timestamps lgorithm of i the s used dual lidar to sol isv shown e it. The ada in Figurep 4ti , wher ve time e lidar_201/scan synchronization algorithm represents is used to The timestamps of the dual lidar is shown in Figure 4, where lidar_201/scan represents the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the match its timestamp and realize the simultaneous localization and mapping of the lidars. vertical lidar. vertical lidar. The timestamps of the dual lidar is shown in Figure 4, where lidar_201/scan represents the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the vertical lidar. Figure 4. Timestamps of dual lidar. Figure 4. Timestamps of dual lidar. The output of the adaptive time synchronization algorithm only depends on the The output of the adaptive time synchronization algorithm only depends on the timestamp, not on the arrival time of dual lidar point cloud messages. It means that the timestamp, not on the arrival time of dual lidar point cloud messages. It means that the adaptive time synchronization algorithm can be safely used on dual lidar point cloud Figure 4. Timestamps of dual lidar. messages that have suffered arbitrary processing delays. As shown in Figure 5, time goes from left to right, with the first row representing the horizontal lidar timestamp and the The output of the adaptive time synchronization algorithm only depends on the second row representing the vertical lidar timestamp. Each dot represents the lidar point cloud with the timestamp. The blue dot represents the pivot of the lidar point clouds, timestamp, not on the arrival time of dual lidar point cloud messages. It means that the with the broken line linking the dual lidar point clouds in a set. Suppose the horizontal lidar point clouds’ queue is PL and the vertical lidar point clouds’ queue is PV , with P P lidar point clouds inserted in a topic-specific queue (PL or PV ) as they arrive. Once P P each topic-specific queue contains at least one message, the latest message is found at the head of the queues, known as the pivot. Assume that the queue with pivots is PL , which matches PV , being the smallest difference between the timestamps, T. The two queues are combined into a set; finally, this set is synchronous output. Appl. Sci. 2022, 12, x FOR PEER REVIEW 8 of 19 adaptive time synchronization algorithm can be safely used on dual lidar point cloud messages that have suffered arbitrary processing delays. As shown in Figure 5, time goes from left to right, with the first row representing the horizontal lidar timestamp and the second row representing the vertical lidar timestamp. Each dot represents the lidar point cloud with the timestamp. The blue dot represents the pivot of the lidar point clouds, with the broken line linking the dual lidar point clouds in a set. Suppose the horizontal lidar point clouds’ queue is PL and the vertical lidar point clouds’ queue is PV , with lidar P P point clouds inserted in a topic-specific queue ( PL or PV ) as they arrive. Once each P P topic-specific queue contains at least one message, the latest message is found at the head of the queues, known as the pivot. Assume that the queue with pivots is PL , which matches PV , being the smallest difference between the timestamps, . The two queues Appl. Sci. 2022, 12, 939 8 of 18 are combined into a set; finally, this set is synchronous output. Figure 5. Timestamp matching of dual lidar. Figure 5. Timestamp matching of dual lidar. 3.4. Tight Coupling of Dual Lidar and IMU 3.4. Tight Coupling of Dual Lidar and IMU Although the fusion of dual lidar makes up for the shortcomings of lidar ’s narrow Although the fusion of dual lidar makes up for the shortcomings of lidar’s narrow FOV, the localization accuracy is improved. However, lidars mounted on moving vehicles FOV, the localization accuracy is improved. However, lidars mounted on moving vehicles suffer from motion distortion, which directly affects the localization accuracy. IMU can suffer from motion distortion, which directly affects the localization accuracy. IMU can accurately measure the three-axis acceleration and three-axis angular velocity of moving accurately measure the three-axis acceleration and three-axis angular velocity of moving vehicles, and provide a priori information for lidar odometry. However, in the case of vehicles, and provide a priori information for lidar odometry. However, in the case of long or short distances, the IMU error will continue to accumulate, which directly affects long or short distances, the IMU error will continue to accumulate, which directly affects positioning accuracy. In this paper, the tight coupling of dual lidar and IMU is proposed to posi solve tioni the ng above accura pr cy. In thi oblems.sThe paper, the ti process is gh divided t couplin into g of d two ual parts: lidar (1) and external IMU is p parameter roposed to solve the above problems. The process is divided into two parts: (1) external parameter calibration of horizontal lidar and IMU and time synchronization, and (2) joint optimization, cal which ibratar ion e o described f horizont as alfollows: lidar and IMU and time synchronization, and (2) joint optimiza- tion, which are described as follows: 3.4.1. External Parameter Calibration of Horizontal Lidar and IMU and Time Synchronization 3.4.1. Similar Externato l PSection aramete3.3 r C , a the librexternal ation of parameters Horizontal L of idar theand lidar IMU andand T IMU ar im eealso Sync obtained hroni- zat based ion on the CAD model. To accurately fuse data from the horizontal lidar, the vertical lidar, and the IMU, the IMU coordinate system I is also registered in this frame coordinate Similar to Section 3.3, the external parameters of the lidar and IMU are also obtained L L 1 1 system L through the rotation matrix R and the translation matrix H . It can be seen based on the 1 CAD model. To accurately fuse data from the horizontal lidar, the vertical I I from the structure of the scanning arm in Figure 1 that the IMU is located directly under the lidar, and the IMU, the IMU coordinate system I is also registered in this frame coordi- horizontal lidar. Therefore, the IMU coordinate system can be registered in the horizontal L L 1 1 L  Η 1 I I nate system through the rotation matrix and the translation matrix . It can lidar coordinate system only by translation, where be seen from the structure of the scanning arm in Figure 1 that the IMU is located directly 2 3 1 0 0 under the horizontal lidar. Therefore, the IMU coordinate system can be registered in the 4 5 R = 0 1 0 (7) horizontal lidar coordinate system only by translation, 0 0 1 where 1 10 0  where the translation matrix H = [0.62.66, 0,0.125]. L   = 01 0 In this paper, YIS510A IMU is used, which supports trigger correction of its internal (7)  clock by PPS signal. Lidar-IMU time synchronization is achieved through hardware time  00 1  synchronization, with GNSS used as the master clock. GNSS can output PPS signals to unify the clock source of lidar-IMU and achieve time synchronization. where the translation matrix Η=[−0.62.66,0,−0.125] . 3.4.2. Joint Optimization In this paper, a fixed-lag smoother and marginalization are introduced to obtain the optimal state. The fixed-lag smoother maintains a certain amount of IMU state in the sliding window, and the sliding window can effectively control the amount of calculation. When the new state enters the sliding window, the past state is marginalized. The state W W W variable to be estimated for the whole window is X = [X , . . . , X ,Z ], where X is the h k I h state of IMU at the starting point h of the sliding window; X is the state of IMU at the end L L L 1 1 1 of the sliding window; and Z = [R , H ] is the external parameter between lidar and I I I IMU. Then, the following cost function with a Mahalanobis norm is minimized to obtain the MAP estimation of the states X, < = 2 2 V X = min ku (X)k + ku (c, X)k c + ku (Z , X)k (8) o å f å g V C V+1 X 2> L ; : V+1 c2Q V2fh,...,k1g b Appl. Sci. 2022, 12, 939 9 of 18 where u (X) is the prior information from marginalization [26]. u (c, X) is the residual o f of the relative lidar constraints that can be represented as point-to-plane distance [26], where c 2 Q is the residual for each relative lidar measurement with the previous correspondences. b 2 fh + 1, . . . , kg, h + 1 and k are the timestamps of the lidar sweep next to the starting one and the current lidar sweep in the window, with the covariance c V matrix C determined by the lidar accuracy [35]. u (Z , X) is the residual of the IMU V+1 constraints, where 2 3 T 1 2 R p p v Dt gDt Dp V+1 V 2 V,V+1 V,V+1 6 7 6 7 6 R v v gDt Dv 7 V V+1 V,V+1 V,V+1 6 7 h i V 6 7 1 1 u (Z , X) = 6 7 (9) g 2 Dq V+1 V,V+1 V V+1 6 7 xyz 6 7 a a 6 7 b b 4 V+1 V 5 w w b b V+1 V u (Z , X) can be obtained by IMU state prediction (Equation (2)) and IMU pre-integration V+1 h i 1 1 (Equation (3)), and Dq q stands for the vector part of a quaternion. V,V+1 V V+1 xyz With the continuous-time linearized propagation of the error states and the IMU noise parameters, the covariances C of the pre-integration measurements and biases can be V+1 estimated. The cost function, in the form of a non-linear least-square, can be solved by the Gauss–Newton algorithm. Ceres Solver [36] is used for solving this nonlinear problem. It can be seen from Section 3.2 that the new states X are obtained by the joint optimiza- Appl. Sci. 2022, 12, x FOR PEER REVIEW 10 of 19 tion, which is used as the next state of the IMU, avoiding the drift from IMU propagation. The state of the IMU is applied to deskewing, thereby eliminating the motion distortion of the dual lidar. 4. Experiment 4. Experiment 4.1. Data Acquisition Equipment 4.1. Data Acquisition Equipment To verify the performance of the proposed high-precision SLAM framework based To verify the performance of the proposed high-precision SLAM framework based on on the tight coupling of Dual Lidar Inertial Odometry (HSDLIO) for multi-scene applica- the tight coupling of Dual Lidar Inertial Odometry (HSDLIO) for multi-scene applications, tions, this paper describes a series of experiments to qualitatively and quantitatively ana- this paper describes a series of experiments to qualitatively and quantitatively analyze our lyze our proposed framework. The device runs on an industrial computer with a proces- proposed framework. The device runs on an industrial computer with a processor of i7-9700 sor of i7-9700 through the port and network port, and the system configured of the indus- through the port and network port, and the system configured of the industrial computer is trial computer is ubuntu16.04. The scanning arm is installed on the backpack, with the ubuntu16.04. The scanning arm is installed on the backpack, with the industrial computer industrial computer and battery installed in the backpack. The small display screen of the and battery installed in the backpack. The small display screen of the backpack shows the backpack shows the localization and mapping in real-time, with the backpack equipment localization and mapping in real-time, with the backpack equipment shown in Figure 6. shown in Figure 6. Data of three representative environments are collected, including the Data of three representative environments are collected, including the featureless corridor, featureless corridor, stairs with height, and complex outdoor environments. To illustrate stairs with height, and complex outdoor environments. To illustrate the effectiveness of the the effectiveness of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. compared with it. Figure 6. Backpack collection equipment. Figure 6. Backpack collection equipment. 4.2. Indoor Experiment 1 Indoor experiment 1 was carried out in a corridor with few feature points. Since there was no GPS signal indoors, SLAM was mainly carried out through the tight coupling of dual lidar odometry and IMU. To highlight the performance of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. Figure 7a shows the environment map of the corridor, while Figure 7b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the corridor scene, with the white points representing backpack motion tra- jectories. Compared with the HSDLIO algorithm in Figure 7d, the overall mapping of LeGO-LOAM (Figure 7b) has a significant deviation and the structure is unclear. Figure 8a–b shows that the mapping of LeGO-LOAM is incomplete because the lidar has a con- siderable drift in a scene with fewer feature points. The mapping results of LIO-SAM (Fig- ure 7c) is close to the real environment, with its structure complete. The main reason is that LIO-SAM tightly couples the horizontal lidar and IMU to make up for the lidar drift. However, Figure 9a–b indicates the mapping result of HSDLIO is more abundant than LIO-SAM in the top surface and ground information. The reason is that HSDLIO fuses horizontal and vertical lidars so that its angle of FOV in both horizontal and vertical di- rections is 360°. The dual lidars and IMU are tightly coupled to make up for the lidar drift. Appl. Sci. 2022, 12, 939 10 of 18 4.2. Indoor Experiment 1 Indoor experiment 1 was carried out in a corridor with few feature points. Since there was no GPS signal indoors, SLAM was mainly carried out through the tight coupling of dual lidar odometry and IMU. To highlight the performance of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. Figure 7a shows the environment map of the corridor, while Figure 7b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the corridor scene, with the white points representing backpack motion trajectories. Compared with the HSDLIO algorithm in Figure 7d, the overall mapping of LeGO-LOAM (Figure 7b) has a significant deviation and the structure is unclear. Figure 8a,b shows that the mapping of LeGO-LOAM is incomplete because the lidar has a considerable drift in a scene with fewer feature points. The mapping results of LIO-SAM (Figure 7c) is close to the real environment, with its structure complete. The main reason is that LIO-SAM tightly couples the horizontal lidar and IMU to make up for the lidar drift. However, Figure 9a,b indicates the mapping result of HSDLIO is more abundant than LIO-SAM in the top surface and ground information. The reason is that HSDLIO fuses horizontal and Appl. Sci. 2022, 12, x FOR PEER REVIEW 11 of 19 vertical lidars so that its angle of FOV in both horizontal and vertical directions is 360 . The dual lidars and IMU are tightly coupled to make up for the lidar drift. (a) (b) (c) (d) Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incom- scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incomplete. plete. (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete map- (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete mapping ping structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in the the corridor scene, with HSDLIO having a complete mapping structure. corridor scene, with HSDLIO having a complete mapping structure. (a) (b) Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer and more complete than LeGO-LOAM. Appl. Sci. 2022, 12, x FOR PEER REVIEW 11 of 19 (a) (b) (c) (d) Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incom- plete. (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete map- ping structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in Appl. Sci. 2022, 12, 939 11 of 18 the corridor scene, with HSDLIO having a complete mapping structure. (a) (b) Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of Appl. Sci. 2022, 12, x FOR PEER REVIEW Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of 12 of 19 LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer and LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer more complete than LeGO-LOAM. and more complete than LeGO-LOAM. (a) (b) Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of LIO- LIO-SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of the the environment. environment. In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure of HSDLIO mapping is more complete and the point cloud information on the ground and In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories of HSDLIO mapping is more complete and the point cloud information on the ground of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed and top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories in Figure 10 that the LeGO-LOAM trajectory has drifted. The main reason is that in the of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor ienvir n Figure 10 onment with tha fewer t the LeGO- feature points. LOAM tra Both LIO-SAM jectory ha ands drifted. HSDLIO use The main reason is tha the tight coupling t in the of IMU and lidar for positioning, which improves positioning accuracy. HSDLIO further LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor improves the accuracy of positioning through the tight coupling of dual lidars and IMU. environment with fewer feature points. Both LIO-SAM and HSDLIO use the tight cou- To further improve the positioning accuracy of HSDLIO, the relative translational error pling of IMU and lidar for positioning, which improves positioning accuracy. HSDLIO (RTE)—the distance from the start point to the end point—is introduced. In all experiments, further improves the accuracy of positioning through the tight coupling of dual lidars and the data collection started and ended at the same point; when the positioning accuracy of IMU. HSDLIO LeGO− LOAM LIO SAM -5 -10 -15 start point -20 -25 -30 15 20 -5 0 Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of LeGO-LOAM, with the latter having drifted. To further improve the positioning accuracy of HSDLIO, the relative translational error (RTE)—the distance from the start point to the end point—is introduced. In all ex- periments, the data collection started and ended at the same point; when the positioning accuracy of the device is very high, the start and end point will coincide. The RTE, when the backpack returns to the start point, is shown in Table 1. The RTE can be computed by Equation (10): 22 2 (10) RTE=+ x y + z oo o Appl. Sci. 2022, 12, x FOR PEER REVIEW 12 of 19 (a) (b) Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of LIO- SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of the environment. Appl. Sci. 2022, 12, 939 12 of 18 In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure of HSDLIO mapping is more complete and the point cloud information on the ground the device is very high, the start and end point will coincide. The RTE, when the backpack and top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories returns to the start point, is shown in Table 1. The RTE can be computed by Equation (10): of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed in Figure 10 that the LeGO-LOAM trajectory has drifted. The main reason is that in the 2 2 2 RTE = x + y + z (10) o o LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor where x = x x , y = y y , z = z z , x , y , and z are the coordinates of the environment with fewer feature points. Both LIO-SAM and HSDLIO use the tight cou- o s e o s e s s o s e s starting point and x , y , and z are the coordinates of the end point. The result of LeGO- e e pling of IMU and lid e ar for positioning, which improves positioning accuracy. HSDLIO LOAM is not shown because its trajectory has severely shifted. Table 1 shows that the further improves the accuracy of positioning through the tight coupling of dual lidars and error of HSDLIO in the x and y direction is similar to LIO-SAM, but the accuracy in the z IMU. direction is higher than LIO-SAM and the RTE of HSDLIO is smaller than LIO-SAM. HSDLIO LeGO− LOAM LIO− SAM -5 -10 -15 start point -20 -25 -30 5 10 -5 Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory LeGO-LOAM, with the latter having drifted. of LeGO-LOAM, with the latter having drifted. Table 1. Relative translational error when the backpack returns to the starting point (meters). To further improve the positioning accuracy of HSDLIO, the relative translational Algorithm LeGO-LOAM LIO-SAM HSDLIO error (RTE)—the distance from the start point to the end point—is introduced. In all ex- x Fail 0.079 0.077 periments, the data collection started and ended at the same point; when the positioning y Fail 0.092 0.090 accuracy of the device is very high, the start and end point will coincide. The RTE, when z Fail 0.121 0.001 the backpack returns to the start point, is shown in Table 1. The RTE can be computed by RTE Fail 0.171 0.118 Equation (10): 4.3. Indoor Experiment 2 22 2 (10) RTE=+ x y + z oo o Indoor experiment 2 aimed to show the effectiveness of HSDLIO in the indoor envi- ronment with a certain height information. In this experiment, the backpack was carried up and down four flights of stairs. Figure 11a shows the stairs scene, while Figure 11b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms. The white points are backpack localization trajectories. LeGO-LOAM (Figure 11b) cannot obtain the structure of the stairs. Because the vertical FOV of the horizontal lidar is relatively narrow, the mapping result of LIO-SAM (Figure 11c) does not show the height information. With the help of the vertical lidar and the tight coupling of the IMU, the mapping of HSDLIO (Figure 11d) is the most complete and accurate, which is closer to the real stairs’ scene. Figure 12 shows the trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO. The red, black, and green lines represent the trajectories of HSDLIO, LeGO-LOAM, and LIO-SAM, respectively, with the trajectory of HSDLIO approximating the real trajectory. LeGO-LOAM has no IMU correction, which causes the drift of the trajectory. Although the trajectory of LIO-SAM does not drift, due to the narrow vertical FOV of the horizontal lidar, a significant error occurs in the z direction information. Comparing the height Appl. Sci. 2022, 12, x FOR PEER REVIEW 13 of 19 where xx=−x , yy=−y , zz=−z , x , y , and z are the coordinates of the os e os e os e s s s x y z starting point and , , and are the coordinates of the end point. The result of e e e LeGO-LOAM is not shown because its trajectory has severely shifted. Table 1 shows that the error of HSDLIO in the and direction is similar to LIO-SAM, but the accuracy in the x y z direction is higher than LIO-SAM and the RTE of HSDLIO is smaller than LIO-SAM. Table 1. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Fail 0.079 0.077 Fail 0.092 0.090 Fail 0.121 0.001 RTE Fail 0.171 0.118 4.3. Indoor Experiment 2 Appl. Sci. 2022, 12, 939 Indoor experiment 2 aimed to show the effectiveness of HSDLIO in the indoor e 13n of vi- 18 ronment with a certain height information. In this experiment, the backpack was carried up and down four flights of stairs. Figure 11a shows the stairs scene, while Figure 11b–d information of LIO-SAM and HSDLIO in the z direction in Figure 13, it can be seen shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms. The that when going up and down four floors of stairs, the localization results of LIO-SAM white points are backpack localization trajectories. LeGO-LOAM (Figure 11b) cannot ob- provide no height information and also indicate a great drift, while HSDLIO provides tain the structure of the stairs. Because the vertical FOV of the horizontal lidar is relatively accurate height information. Therefore, the tight coupling of dual lidars and IMU makes narrow, the mapping result of LIO-SAM (Figure 11c) does not show the height infor- the advantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in mation. With the help of the vertical lidar and the tight coupling of the IMU, the mapping the x and y direction is smaller than LIO-SAM, but the accuracy in the z direction is much of HSDLIO (Figure 11d) is the most complete and accurate, which is closer to the real higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. stairs’ scene. Appl. Sci. 2022, 12, x FOR PEER REVIEW 14 of 19 (a) (b) (c) (d) height information. Therefore, the tight coupling of dual lidars and IMU makes the ad- Figure 11. (a) Stairs scene for experiment 2. (b) Mapping results of LeGO-LOAM in the stairs scene, Figure 11. (a) Stairs scene for experiment 2. (b) Mapping results of LeGO-LOAM in the stairs scene, vantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in the x with LeGO-LOAM’s mapping result having failed. (c) Mapping results of LIO-SAM in the stairs with LeGO-LOAM’s mapping result having failed. (c) Mapping results of LIO-SAM in the stairs and y direction is smaller than LIO-SAM, but the accuracy in the z direction is much scene, with LIO-SAM’s mapping results lacking height information. (d) Mapping results of HSDLIO scene, with LIO-SAM’s mapping results lacking height information. (d) Mapping results of HSDLIO in the stairs scene, with the mapping of HSDLIO the most complete and accurate. higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. in the stairs scene, with the mapping of HSDLIO the most complete and accurate. Figure 12 shows the trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO. The red, black, and green lines represent the trajectories of HSDLIO, LeGO-LOAM, and HSDLIO LIO-SAM, respectively, with the trajectory of HSDLIO approximating the real trajectory. LeGO− LOAM LIO− SAM LeGO-LOAM has no IMU correction, which causes the drift of the trajectory. Although the trajectory of LIO-SAM does not drift, due to the narrow vertical FOV of the horizontal lidar, a significant error occurs in the z direction information. Comparing the height in- formation of LIO-SAM and HSDLIO in the z direction in Figure 13, it can be seen that when going up and down four floors of stairs, the localization results of LIO-SAM provide no height information and also indicate a great drift, while HSDLIO provides accurate -2 -2 -1 -2 Figure Figure 12. 12. Comparison Comparison of Le of LeGO-LOAM, GO-LOA LIO-SAM, M, LIO-S and AM, HSDLIO and HSDLIO trajectories. trajectories. The red The red line is the line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajec- trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory tory of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced cu- cumulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. mulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. 4.4. Outdoor Experiment The outdoor experiment was carried out on a city street. This scene has rich feature HSDLIO down four floors of stair points, so the drift of lidar odometry grows very slowly. The localization accuracy of LeGO-LOAM, LIO-SAM, and HSDLIO have very little difference, but the mapping of HSDLIO can provide finer structural details. go up four floors of stair 0 1020 304050 607080 90 100 LIO− SAM down four floors of stair go up four floors of stair -1 0 1020 304050 607080 90 100 Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The trajectory of HSDLIO provides height information from going up and down four floors of stairs, while LIO-SAM failed to position in the direction. Table 2. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Fail 0.033 0.011 Fail 0.036 0.024 Fail 0.062 0.004 RTE Fail 0.078 0.026 Appl. Sci. 2022, 12, x FOR PEER REVIEW 14 of 19 height information. Therefore, the tight coupling of dual lidars and IMU makes the ad- vantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in the x and direction is smaller than LIO-SAM, but the accuracy in the z direction is much higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. HSDLIO LeGO LOAM LIO− SAM -2 -2 -1 -2 Figure 12. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced cu- Appl. Sci. 2022, 12, 939 14 of 18 mulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. HSDLIO down four floors of stair go up four floors of stair 0 1020 304050 607080 90 100 LIO− SAM down four floors of stair go up four floors of stair -1 0 1020 304050 607080 90 100 Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The trajectory of HSDLIO provides height information from going up and down four floors of stairs, trajectory of HSDLIO provides height information from going up and down four floors of stairs, while LIO-SAM failed to position in the z direction. while LIO-SAM failed to position in the direction. Appl. Sci. 2022, 12, x FOR PEER REVIEW 15 of 19 Table 2. Relative translational error when the backpack returns to the starting point (meters). Table 2. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Algorithm LeGO-LOAM LIO-SAM HSDLIO x Fail 0.033 0.011 Fail 0.033 0.011 4.4. Outdoor Experi o ment y Fail 0.036 0.024 o Fail 0.036 0.024 The outdooro experiment was carried out on a city street. This scene has rich feature z Fail 0.062 0.004 points, so the drift of o lidar odometry grows very slowly. The localization accuracy of Fail 0.062 0.004 LeGO-LOAM, LIO-SAM, and HSDLIO have very little difference, but the mapping of RTE Fail 0.078 0.026 RTE Fail 0.078 0.026 HSDLIO can provide finer structural details. The trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO are shown in Fig- The trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO are shown in ure 14. Because there is more feature information outdoors, the drift of lidar odometry Figure 14. Because there is more feature information outdoors, the drift of lidar odometry grows very slowly. LeGO-LOAM can obtain relatively accurate localization only through grows very slowly. LeGO-LOAM can obtain relatively accurate localization only through horizontal lidar. In LIO-SAM and HSDLIO, GPS measurement is introduced by loosely horizontal lidar. In LIO-SAM and HSDLIO, GPS measurement is introduced by loosely coupling with IMU through UKF, which can provide relatively accurate initial position- coupling with IMU through UKF, which can provide relatively accurate initial positioning. ing. The localization accuracy of HSDLIO and LIO-SAM is higher than LeGO-LOAM. The The localization accuracy of HSDLIO and LIO-SAM is higher than LeGO-LOAM. The available data in Table 3 further supports the performance of the HSDLIO algorithm. available data in Table 3 further supports the performance of the HSDLIO algorithm. HSDLIO LeGO− LOAM LIO− SAM -5 -10 -15 -20 0 204060 80 100 Figure 14. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red dashed line is Figure 14. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red dashed line the trajectory of HSDLIO, the blue dashed line is the trajectory of LIO-SAM, and the black dashed is the trajectory of HSDLIO, the blue dashed line is the trajectory of LIO-SAM, and the black dashed line is the line is the traje trajectory ctory of LeGO-LOAM. of LeGO-LOAM. Table 3. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO RTE 0.082 0.036 0.031 The 3D city street scene is shown in Figure 15a. In Figure 15a–d, the marks in the white circle represent trees and the marks in the yellow circle represent the building struc- ture. Figure 15b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the city street scene, with the white points being their localization trajecto- ries. Figure 15b–c shows that the overall effect of the mapping lacks building height infor- mation, while the structure of buildings and trees is blurred. The mapping details are an- alyzed at the marker, compared with the mapping results of LeGO-LOAM and LIO-SAM (Figure 15b–c), while the HSDLIO mapping result is shown in Figure 15d. It can be seen in Figure 15d that the structure of the trees and buildings is complete and has finer details. Appl. Sci. 2022, 12, 939 15 of 18 Table 3. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO RTE 0.082 0.036 0.031 The 3D city street scene is shown in Figure 15a. In Figure 15a–d, the marks in the white circle represent trees and the marks in the yellow circle represent the building structure. Figure 15b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the city street scene, with the white points being their localization trajectories. Figure 15b,c shows that the overall effect of the mapping lacks building height information, while the structure of buildings and trees is blurred. The mapping details are analyzed at the marker, compared with the mapping results of LeGO-LOAM and LIO-SAM Appl. Sci. 2022, 12, x FOR PEER REVIEW 16 of 19 (Figure 15b,c), while the HSDLIO mapping result is shown in Figure 15d. It can be seen in Figure 15d that the structure of the trees and buildings is complete and has finer details. (a) (b) (c) (d) Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM in in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM the city street scene provide a lack of building height information and the structure of buildings and in the city street scene provide a lack of building height information and the structure of buildings trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of build- and trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of ing height information, though the mapping of trees is relatively clear. (d) The mapping results of building height information, though the mapping of trees is relatively clear. (d) The mapping results HSDLIO in the city street scene show that the point cloud features of the trees and the building of HSDLIO in the city street scene show that the point cloud features of the trees and the building structure are finer and more complete, including finer structural details. structure are finer and more complete, including finer structural details. To further demonstrate the advantages of HSDLIO mapping, the mapping details of To further demonstrate the advantages of HSDLIO mapping, the mapping details the building are shown in Figure 16, with the mapping results of LeGO-LOAM and LIO- of the building are shown in Figure 16, with the mapping results of LeGO-LOAM and SAM lacking the structural details and height information of the building. However, the LIO-SAM lacking the structural details and height information of the building. However, mapping result of HSDLIO show finer structural details and more height information of the mapping result of HSDLIO show finer structural details and more height information the building. of the building. (a) (b) Appl. Sci. 2022, 12, x FOR PEER REVIEW 16 of 19 (a) (b) (c) (d) Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM in the city street scene provide a lack of building height information and the structure of buildings and trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of build- ing height information, though the mapping of trees is relatively clear. (d) The mapping results of HSDLIO in the city street scene show that the point cloud features of the trees and the building structure are finer and more complete, including finer structural details. To further demonstrate the advantages of HSDLIO mapping, the mapping details of the building are shown in Figure 16, with the mapping results of LeGO-LOAM and LIO- SAM lacking the structural details and height information of the building. However, the Appl. Sci. 2022, 12, 939 16 of 18 mapping result of HSDLIO show finer structural details and more height information of the building. (a) Appl. Sci. 2022, 12, x FOR PEER REVIEW 17 of 19 (b) (c) Figure 16. Comparison of mapping details in the city street scene showing that (a) the mapping of Figure 16. Comparison of mapping details in the city street scene showing that (a) the mapping LeGO-LOAM lacks structural details of the building, (b) the mapping of LIO-SAM lacks the height of LeGO-LOAM lacks structural details of the building, (b) the mapping of LIO-SAM lacks the of the building structure, and (c) the mapping of HSDLIO provides finer structural details of the height of the building structure, and (c) the mapping of HSDLIO provides finer structural details of building. the building. 5. Conclusions 5. Conclusions This paper proposes a high-precision SLAM framework for multi-scene applications. This paper proposes a high-precision SLAM framework for multi-scene applications. In this framework, dual lidars are fused to make up for the shortcomings of lidars’ narrow In this framework, dual lidars are fused to make up for the shortcomings of lidars’ narrow FOV FOV and h and hence ence improve t improve the he complet completeness eness o of f mapp mapping. ing. M Meanwhile, eanwhile, du dual al llidars idars a and nd IMU IMU are t are tightly ightly co coupled upled to to impr impr ove t ovehthe e loca localization lization accaccuracy uracy. Ext . e Extensive nsive experiment experiments s were ca wer r- e ried carried out out and and the results sh the results showed owed that that co compar mpared with the commonly ed with the commonlyused LeGO-LOAM used LeGO-LOAM and and LIO-SAM methods, o LIO-SAM methods, our ur propo proposed sed method method can can p pr roduce oduce more more precise precise loca localization lization and and more accurate mapping results with more details. more accurate mapping results with more details. Author Contributions: Conceptualization, K.X.; methodology, K.X. and W.Y.; software, K.X., W.Y., Author Contributions: Conceptualization, K.X.; methodology, K.X. and W.Y.; software, K.X., W.Y., W.L. and F.Q W.L. and F.Q.;.; validation, validation, W W.Y..Y. and Z and Z.M.; .M.; fo formal rmal analy analysis, sis, K.X. K.X. an and W d W.L.; .L.; investigation, investigation, W.L. W.L. and K.X.; and K.X. resour ; re ces, souK.X rces, . K.X and .W and .Y.; W.Y data.;curation, data curation, K.X. and K.X. W and W .Y.; writing—original .Y.; writing—orig draft, inal draft, K K.X.; writing—r .X.; writing— eview and editing, K.X. and W.Y.; visualization, K.X.; supervision, K.X. and Z.M.; project administration, review and editing, K.X. and W.Y.; visualization, K.X.; supervision, K.X. and Z.M.; project admin- istration, K.X. and K.X W.Y..; afunding nd W.Y.; fu acquisition, nding acqu K.X. isition, and W K.X. .Y. All and W.Y authors . All authors have read and agreed to the have read and agreed to the published pu version blished of vers the manuscript. ion of the manuscript. Fund Funding: ing: This resear This researchch wa was funded s funded by by the th National e National Natural Na Science tural Sc Foundation ience Foun (grant dation (g nos.rant nos. 61602529 61602529 and and 61672539).61672539). Thi This work wass work was al also supported so supported by Hunan Key by Hunan Key Laboratory of Laboratory of Intelligent Logistics Intelligent Tech- Logistics Technology (2019TP1015) and Scientific Research Project of Hunan Education Department nology (2019TP1015) and Scientific Research Project of Hunan Education Department (No. 17C1650). (No. 17C1650). Institutional Review Board Statement: Not applicable. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest. References 1. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. 2. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. 3. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on Simultaneous Localization and Mapping (SLAM). In Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), George Town, Malaysia, 25–27 No- vember 2016; pp. 85–90. 4. Belter, D.; Nowicki, M.; Skrzypczy’Nski, P. Lightweight RGB-D SLAM System for Search and Rescue Robots, Progress in Auto- mation, Robotics and Measuring Techniques; Springer: Cham, Switzerland,2015; Volume 2, pp. 11–21. 5. Sim, R.; Roy, N. Global A-Optimal Robot Exploration in Slam. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp.661–666. 6. Cheng, Z.; Wang, G. Real-Time RGB-D SLAM with Points and Lines. In Proceedings of the 2018 2nd IEEE Advanced Infor- mation Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 119–122. 7. Liu, T.; Zhang, X.; Wei, Z.; Yuan, Z. A robust fusion method for RGB-D SLAM. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; pp. 474–481. Appl. Sci. 2022, 12, 939 17 of 18 Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest. References 1. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [CrossRef] 2. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [CrossRef] 3. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on Simultaneous Localization and Mapping (SLAM). In Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), George Town, Malaysia, 25–27 November 2016; pp. 85–90. 4. Belter, D.; Nowicki, M.; Skrzypczy’Nski, P. Lightweight RGB-D SLAM System for Search and Rescue Robots. In Progress in Automation, Robotics and Measuring Techniques; Springer: Cham, Switzerland, 2015; Volume 2, pp. 11–21. 5. Sim, R.; Roy, N. Global A-Optimal Robot Exploration in Slam. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 661–666. 6. Cheng, Z.; Wang, G. Real-Time RGB-D SLAM with Points and Lines. In Proceedings of the 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 119–122. 7. Liu, T.; Zhang, X.; Wei, Z.; Yuan, Z. A robust fusion method for RGB-D SLAM. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; pp. 474–481. 8. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-Time Loop Closure in 2D LIDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–20 May 2016; pp. 1271–1278. 9. Yu, Y.; Gao, W.; Liu, C. A GPS-aided Omnidirectional Visual-Inertial State Estimator in Ubiquitous Environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7750–7755. 10. Wang, Z.; Zhang, J.; Chen, S. Robust High Accuracy Visual-Inertial-Lidar SLAM System. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 6636–6641. 11. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [CrossRef] 12. Zhou, Y.; Li, H.; Kneip, L. Canny-VO: Visual Odometry with RGB-D Cameras Based on Geometric 3-D–2-D Edge Alignment. IEEE Trans. Robot. 2019, 35, 184–199. [CrossRef] 13. Hu, G.; Huang, S.; Zhao, L.; Alempijevic, A.; Dissanayake, G. A robust RGB-D SLAM algorithm. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 1714–1719. 14. Zhang, J.; Singh, S. Loam: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems X, Berkeley, CA, USA, 12–16 July 2014. 15. Tsardoulias, E.; Petrou, L. Critical rays scan match SLAM. J. Intell. Robot. Syst. 2013, 72, 441–462. [CrossRef] 16. Jiang, B.; Zhu, Y.; Liu, M. A Triangle Feature Based Map-to-map Matching and Loop Closure for 2D Graph SLAM. In Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; pp. 2719–2725. 17. Shan, T.; Englot, B.J. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. 18. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D lidar scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169. 19. Ranganathan, A. The levenberg-marquardt algorithm. Tutoral LM Algorithm 2004, 11, 101–110. 20. Torres-Torriti, M.; Guesalaga, A. Scan-to-map matching using the Hausdorff distance for robust mobile robot localization. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 19–23 May 2008; pp. 455–460. 21. Hassani, A.; Morris, N.; Spenko, M. Experimental integrity evaluation of tightly-integrated IMU/LiDAR including return-light intensity data. In Proceedings of the 32nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION), Miami, FL, USA, 16–20 September 2019; pp. 2637–2658. 22. Velas, M.; Spanel, M.; Hradis, M. CNN for IMU assisted odometry estimation using velodyne LiDAR. In Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal, 25–27 April 2018; pp. 71–77. 23. Deilamsalehy, H.; Havens, T.C. Sensor fused three-dimensional localization using IMU, camera and LiDAR. In Proceedings of the 2016 IEEE Sensors, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3. 24. Xie, G.; Zong, Q.; Zhang, X. Loosely-coupled lidar-inertial odometry and mapping in real time. Int. J. Intell. Robot. Appl. 2021, 5, 119–129. [CrossRef] Appl. Sci. 2022, 12, 939 18 of 18 25. Moore, T.; Stouch, D. A generalized extended kalman filter implementation for the robot operating system. In Intelligent Autonomous Systems 13; Springer: Cham, Switzerland, 2016; pp. 335–348. 26. Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3d lidar inertial odometry and mapping. In Proceedings of the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. 27. Shan, T.; Englot, B.; Meyers, D. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. 28. Wellington, C.; Stentz, A. Learning predictions of the load-bearing surface for autonomous rough-terrain navigation in vegetation. In Springer Tracts in Advanced Robotics (STAR); Springer: Berlin/Heidelberg, Germany, 2003; Volume 24, pp. 83–92. 29. Bosse, M.; Zlot, R. Continuous 3D scan-matching with a spinning 2D laser. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 4312–4319. 30. Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a spring-mounted 3-d range sensor with application to mobile mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [CrossRef] 31. Palieri, M.; Morrell, B. Locus: A multi-sensor lidar-centric solution for high-precision odometry and 3d mapping in real-time. IEEE Robot. Autom. Lett. 2020, 6, 421–428. [CrossRef] 32. Jiao, J.; Ye, H.; Zhu, Y.; Liu, M. Robust odometry and mapping for multi-lidar systems with online extrinsic calibration. IEEE Trans. Robot. 2021, 1–10. [CrossRef] 33. Nguyen, T.M.; Yuan, S. MILIOM: Tightly Coupled Multi-Input Lidar-Inertia Odometry and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 5573–5580. [CrossRef] 34. Forster, C.; Carlone, L.; Dellaert, F. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans. Robot. Autom. 2017, 33, 1–21. [CrossRef] 35. Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [CrossRef] 36. Ceres Solver. Available online: http://ceres-solver.org (accessed on 5 October 2021). http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Applied Sciences Multidisciplinary Digital Publishing Institute

High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene Applications

Applied Sciences , Volume 12 (3) – Jan 18, 2022

Loading next page...
 
/lp/multidisciplinary-digital-publishing-institute/high-precision-slam-based-on-the-tight-coupling-of-dual-lidar-inertial-0XV31RfCog

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2022 MDPI (Basel, Switzerland) unless otherwise stated Disclaimer The statements, opinions and data contained in the journals are solely those of the individual authors and contributors and not of the publisher and the editor(s). MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Terms and Conditions Privacy Policy
ISSN
2076-3417
DOI
10.3390/app12030939
Publisher site
See Article on Publisher Site

Abstract

applied sciences Article High-Precision SLAM Based on the Tight Coupling of Dual Lidar Inertial Odometry for Multi-Scene Applications 1 1 , 2 1 1 Kui Xiao , Wentao Yu *, Weirong Liu , Feng Qu and Zhenyan Ma College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China; 20191100304@csuft.edu.cn (K.X.); qufeng0817@csuft.edu.cn (F.Q.); t20070552@csuft.edu.cn (Z.M.) School of Computer Science and Engineering, Central South University, Changsha 410083, China; frat@csu.edu.cn * Correspondence: wentaoyu@csuft.edu.cn; Tel.: +86-0731-134-6758-9376 Featured Application: This paper fuses different sensors to form a general high-precision SLAM framework for multi-scene applications. The algorithm framework in this paper can be extended to the fields of autonomous driving, robot navigation, and 3D reconstruction. Abstract: Simultaneous Localization and Mapping (SLAM) is an essential feature in many applica- tions of mobile vehicles. To solve the problem of poor positioning accuracy, single use of mapping scene, and unclear structural characteristics in indoor and outdoor SLAM, a new framework of tight coupling of dual lidar inertial odometry is proposed in this paper. Firstly, through external calibration and an adaptive timestamp synchronization algorithm, the horizontal and vertical lidar data are fused, which compensates for the narrow vertical field of view (FOV) of the lidar and makes the characteristics of vertical direction more complete in the mapping process. Secondly, the dual lidar data is tightly coupled with an Inertial Measurement Unit (IMU) to eliminate the motion distortion of Citation: Xiao, K.; Yu, W.; Liu, W.; the dual lidar odometry. Then, the value of the lidar odometry after correcting distortion and the Qu, F.; Ma, Z. High-Precision SLAM pre-integrated value of IMU are used as constraints to establish a non-linear least-squares objective Based on the Tight Coupling of Dual Lidar Inertial Odometry for function. Joint optimization is then performed to obtain the best value of the IMU state values, Multi-Scene Applications. Appl. Sci. which will be used to predict the state of IMU at the next time step. Finally, experimental results are 2022, 12, 939. https://doi.org/ presented to verify the effectiveness of the proposed method. 10.3390/app12030939 Keywords: simultaneous localization and mapping; dual lidar inertial odometry; IMU; time Academic Editors: António synchronization; tight coupling Paulo Moreira, Pedro Neto and Félix Vilariño Received: 30 November 2021 Accepted: 13 January 2022 1. Introduction Published: 18 January 2022 Simultaneous localization and mapping (SLAM) require building a map of an un- Publisher’s Note: MDPI stays neutral known environment by a mobile vehicle and simultaneously localizing the vehicle in such a with regard to jurisdictional claims in map [1–3]. SLAM is essential for vehicles to fulfill many tasks, including vehicle rescue [4] published maps and institutional affil- and exploration [5]. The perception of the unknown external environment by various iations. onboard sensors provides vital information for SLAM. Thus, integrating different sensors to develop a practical SLAM framework that can be applied in multiple scenes is essential. Generally, both vision-based and lidar-based SLAM [6–10] are used. Although the vision-based SLAM can obtain high-precision positioning [11–13], the vision sensors are Copyright: © 2022 by the authors. vulnerable to light change in the environment and cannot work in dark or untextured Licensee MDPI, Basel, Switzerland. scenes. In comparison, lidar is not affected by light and can usually measure the angle and This article is an open access article distance of obstacles with higher accuracy. Therefore, this paper focuses on the design of distributed under the terms and lidar-based SLAM to adapt the multi-scene applications. conditions of the Creative Commons In recent years, many different lidar-based SLAM schemes have been proposed. Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ Among them, the Lidar Odometry and Mapping in Real-time (LOAM) method, where a 4.0/). single-line lidar and a motor are used to form a multiline lidar to realize low-drift and Appl. Sci. 2022, 12, 939. https://doi.org/10.3390/app12030939 https://www.mdpi.com/journal/applsci Appl. Sci. 2022, 12, 939 2 of 18 low-calculation real-time positioning and mapping, has been studied extensively [14]. In the LOAM method, SLAM is divided into two parts: lidar odometry and lidar mapping. In the lidar odometry part, to reduce the computation, the plane smoothness of the lidar point cloud, which is utilized to distinguish the edge points, is calculated according to the curvature and then invalid point clouds are discarded to improve the feature of point cloud. After the point cloud information is obtained through feature extraction, a scan-to-scan method [15] is proposed to realize feature matching between frames, including edge points and planar points matching. To eliminate the motion distortion of lidar, linear interpolation is utilized. After the distortion is eliminated, the pose transformation of the lidar point cloud data of two adjacent frames is acquired to obtain the lidar odometry. Then, after accumulating a certain number of point cloud data, lidar mapping is performed through a map-to-map matching method [16]. Although LOAM can create a high-precision point cloud map, it cannot remove moving people or objects during feature extraction. Further- more, in an environment with fewer feature points, it is easy for the lidar odometry to fail, resulting in positioning and mapping with worse accuracy. To solve the above problems, a Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain (LeGO-LOAM) [17] is proposed. Its core comprises four modules: point cloud segmentation and denoising, feature extraction, lidar odometry, and lidar mapping. Firstly, the point cloud segmentation technology [18] solves the defects of moving people or objects in LOAM mapping and filters out noise. Then, feature extraction is applied to obtain planar and edge features from the segmented point cloud. The lidar odometry module uses the features to find out the optimal pose transform between two consecutive scans with the help of a two-step Levenberg–Marquardt (LM) optimization method [19]. The extracted features are further processed by a scan-to-map [20] matching method to obtain a global point cloud map in lidar mapping. LeGO-LOAM optimizes LOAM to a certain extent, but in large scenes (e.g., long corridors) or environments with few feature points, the low frequency of lidar and less characteristic information can lead to significant errors in positioning and mapping. Some recent research has used low-cost IMU [21–23] to assist lidar for SLAM. The simplest way to integrate the IMU with lidar is loose coupling [24], in which IMU is regarded as an independent module to assist the lidar. The authors of [25] loosely couple IMU and optional GPS measurement with lidar through the Extended Kalman Filter (EKF) to improve computational efficiency and accuracy. However, the IMU error will continue to accumulate in the case of long distances, so the localization error is still increasing in such cases. To solve this problem, IMU and lidar are generally coupled via a tight coupling method that usually offers improved accuracy. A tightly coupled lidar inertial odometry and mapping framework (LIO-Mapping) is introduced in the literature [26]. It comprises the state optimization for the odometry and the refinement with rotational constraints. Results showed that this method outperformed the state-of-the-art lidar-only and loosely coupled methods. Since LIO-Mapping is designed to process all sensor measurements, real-time performance is not achieved. The authors of [27] proposed a tightly coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM) based on LeGO-LOAM. LIO-SAM performs highly accurate, real-time mobile vehicle trajectory estimation and map building, suitable for multi-sensor fusion and global optimization. Although the tight coupling of lidar and IMU can improve the localization accuracy, lidar sensors have certain drawbacks. Conventional lidar can only be used to scan the environment in a narrow range of vertical angles and the point cloud feature information obtained is limited. For example, in an indoor corridor or staircase environment, lidar can receive the point cloud information of the sidewall, but only a tiny part of the point cloud information is obtained from the floor and ceiling. In such cases, the matched lidar features can easily cause ill-constrained pose estimation and incomplete mapping. Aiming to broaden the FOV of lidar, lidars can be actuated in a number of ways. The periodic nodding and continuous rotation can be adopted and the resulting configurations can potentially enlarge the FOV. While the implementation of this method depends on Appl. Sci. 2022, 12, 939 3 of 18 a complex mechanical structure, the localization accuracy is relatively low [28,29]. More recently, a new design for a 3D sensor system was introduced [30], involving construction from a 2D range scanner coupled with a passive linkage mechanism, such as a spring. However, the system requires reasonably continual excitation to induce motion of the sen- sor head. Therefore, the system is not considered appropriate for electric ground vehicles operating with infrequent accelerations on smooth terrain. In order to solve the above problems, some researchers have started investigating the use of multiple 3D lidars for better coverage of the environment. In [31], a high-precision lidar odometry system was proposed to achieve robust and real-time operation under challenging perceptual condi- tions. In this system, the two lidars are mounted at 180 degrees to each other to make up for the self-similar areas with low lidar observability. The authors of [32] proposed a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple lidars, while [33] proposed a scheme to combine multiple lidars with complementary FOV for feature-based lidar-inertia odometry and mapping. While the above methods can work well in single-scene applications, their adaptability to different environments was not considered, especially the environments that need to perceive height information. This paper proposes a high-precision SLAM framework based on tight coupling of dual lidar inertial to overcome the above problems. In this framework, the horizontal lidar and the vertical lidar are integrated to broaden the FOV. Then the dual lidar and IMU are tightly coupled to improve the localization accuracy. In the outdoor environment, this paper introduces GPS measurements to further improve positioning accuracy. The main contributions of this paper include: 1. A general high-precision SLAM framework is provided by fusing different sensors. It can adapt to multi-scene applications, such as a corridor with fewer features, stairs with height, and complex outdoor environments. 2. The horizontal and vertical lidars are fused by external calibration and adaptive time synchronization algorithms to solve the narrow vertical FOV of the lidar. 3. To improve the positioning accuracy of SLAM in the environment with height infor- mation (e.g., stairs), the dual lidar odometry and IMU are tightly coupled. The dual lidar odometry measurement and IMU pre-integration are jointly optimized to obtain more accurate IMU state values, which will be used to eliminate the motion distortion of the dual lidar to improve the localization accuracy. 4. In addition, several practical experiments are carried out to verify the feasibility and effectiveness of the proposed method. 2. Prerequisites 2.1. IMU State Prediction and Pre-Integration Usually, the frequency of IMU is much higher than lidar. IMU can obtain the accelera- tion and angular velocity at each time step, which can be used to predict the next state of IMU through integration. In general, the IMU states at time t can be modeled as: X = [p , v , R , b ] (1) t t t where X represents the state in the world frame W at time t; p , v , and R represent t t the position, velocity, and rotation matrix, respectively, at time t; and a ˆ and w ˆ are the t t acceleration and angular velocity of the raw IMU measurements (a ˆ and w ˆ are affected by t t a slowly varying bias, b ). The next state, X , is predicted through the integration of IMU, where Dt is the t+Dt interval between two consecutive IMU measurements. The state prediction value of the Appl. Sci. 2022, 12, 939 4 of 18 IMU is then used to infer the motion of the vehicle. The velocity, position, and rotation of the vehicle at time t + Dt can be computed by Equation (2) [27]: a a v = v + gDt + R (a ˆ b n )Dt t+Dt t t t t t 1 2 1 a a 2 p = p + v Dt + gDt + R (a ˆ b n )Dt t t t t t+Dt t 2 2 t (2) 1 w w Dt w ˆ b n Dt 2 Dt Dt q = q t+Dt t where g is gravitational acceleration; b is the varying bias of the acceleration a ˆ ; and n is t t the white noise of the acceleration a ˆ . The quaternion q under Hamilton notation (which corresponds to R ) is used; is used for the multiplication of two quaternions; b is the varying bias of the angular velocity w ˆ ; and n is the white noise of the angular velocity w ˆ . t t The motion state of IMU between the i-th and the j-th time steps can be represented by the IMU pre-integrated measurement value Dp , Dv , Dq , which can be computed by: ij ij ij Dv = R v v gDt ij j i ij T 1 2 Dp = R p p v Dt gDt i ij ij i j i 2 ij (3) j1 1 w w Dt w ˆ b n 1 ij k 2 k k Dq = q q = ij j k=i where Dp , Dv , Dq is the position, velocity, and rotation matrix of the IMU, respectively, ij ij ij which has the covariance C in the error-state model. Due to space limitations, we invite readers to refer to the descriptions in [26,34] to understand the detailed derivation of IMU pre-integration. 2.2. Segmentation and Feature Extraction Before extracting features, to filter out the noise, the point cloud is segmented to reduce noise interference. D = fd , d , . . . , d g is the point cloud information acquired at time t; t 1 2 n d is a point in D ; D is projected onto a range image; the point d represents a pixel of the t t t t image; and the pixel value r represents the Euclidean distance from the point d to the t t lidar. An image-based segmentation method [17] is applied to the range image to group points into many clusters. Points of the same category have the same label (the ground is a particular category). Features are then extracted from ground points and segmented points, with the corresponding edge points and plane points obtained by calculating the curvature E of each point. To evenly extract features from all directions, the range image is divided horizontally into several equal sub-images. Then, the points are sorted in each row of the sub-image based on their curvature values, E. The curvature E is computed by Equation (4). E = k (r r )k (4) å l t jSjkr k l2S,l6=t where S is the set of continuous points d from the same row of the range image; l is a point in the set S; E is a threshold which is set to distinguish different types of features; th the points are called with E > E edge features; and the points are called with E < E th th planar features. 3. Multi-Sensor Fusion 3.1. Hardware System Description The configuration of our system is shown in Figure 1, with the left model obtained through Unigraphics NX and the designed equipment in our paper shown on the right. The designed equipment is a scanning arm composed of various sensors that can be installed on a backpack or a vehicle. The sensor suite used in the scanning arm includes two 10 HZ Velodyne VLP-16 lidars, a 200 HZ YIS510A IMU, and a 5 HZ HY-269 GPS. The panoramic Appl. Sci. 2022, 12, x FOR PEER REVIEW 5 of 19 where S is the set of continuous points d from the same row of the range image; λ is a point in the set S ; E is a threshold which is set to distinguish different types of th features; the points are called with E > E edge features; and the points are called with th E < E planar features. th 3. Multi-Sensor Fusion 3.1. Hardware System Description The configuration of our system is shown in Figure 1, with the left model obtained through Unigraphics NX and the designed equipment in our paper shown on the right. The designed equipment is a scanning arm composed of various sensors that can be in- Appl. Sci. 2022, 12, 939 5 of 18 stalled on a backpack or a vehicle. The sensor suite used in the scanning arm includes two 10 HZ Velodyne VLP-16 lidars, a 200 HZ YIS510A IMU, and a 5 HZ HY-269 GPS. The panoramic camera prepares for the follow-up research. The lidar has a hor  izont al FOV of camera prepares for the follow-up research. The lidar has a horizontal FOV of 360 with 0.4 360° with 0.4° resolution, a vertical FOV 15° with 2° resolution, and an accuracy of ±3 cm. resolution, a vertical FOV 15 with 2 resolution, and an accuracy of3 cm. The Roll/Pitch of Th IMU e Ro accuracy ll/Pitch o is 0.25 f IMU , HY ac -269 cura GPS cy is is 0. a small-volume 25°, HY-269 G positioning PS is a sma andldir l-vo ectional lume pos receiver ition , ing and and the positioning accuracy of GPS is 1.2 m. directional receiver, and the positioning accuracy of GPS is 1.2 m. Figure 1. A CAD (Computer Aided Design) model of the scanning arm is shown on the left, with a Figure 1. A CAD (Computer Aided Design) model of the scanning arm is shown on the left, with a photograph of the scanning arm shown on the right. photograph of the scanning arm shown on the right. 3.2. System Overview 3.2. System Overview In this paper, horizontal and vertical lidars are fused to make up for the shortcomings In this paper, horizontal and vertical lidars are fused to make up for the shortcomings of lidar ’s narrow FOV. When indoors, accurate positioning information is obtained through of lidar’s narrow FOV. When indoors, accurate positioning information is obtained the tight coupling of dual lidar inertial odometry. When outdoors, GPS measurements through the tight coupling of dual lidar inertial odometry. When outdoors, GPS measure- are added, providing a relatively accurate initial position for positioning, thereby further ments are added, providing a relatively accurate initial position for positioning, thereby improving outdoor positioning accuracy. Figure 2 provides a brief overview of our proposed framework. Firstly, after obtaining further improving outdoor positioning accuracy. the IMU data, the IMU state prediction and pre-integration are updated, as detailed in Figure 2 provides a brief overview of our proposed framework. Firstly, after obtain- Section 2.1. IMU pre-integration is used to construct a factor graph constraint, which will ing the IMU data, the IMU state prediction and pre-integration are updated, as detailed be used for joint optimization. Secondly, the dual lidar point clouds are obtained through in Section 2.1. IMU pre-integration is used to construct a factor graph constraint, which external calibration and an adaptive timestamp synchronization algorithm, as detailed in will be used for joint optimization. Secondly, the dual lidar point clouds are obtained Section 3.3. Since lidar has continuous measurement, the lidar measurement is obtained in through external calibration and an adaptive timestamp synchronization algorithm, as continuous motion and it is almost certain that motion distortion will occur. When dual lidar detailed in Section 3.3. Since lidar has continuous measurement, the lidar measurement is point cloud is received, deskewing is applied on the dual lidar raw point cloud to obtain obtained in continuous motion and it is almost certain that motion distortion will occur. deskewed dual lidar point cloud [14]. To be ground-optimized and reduce the amount of calculation, point cloud segmentation is applied to filter out noise (e.g., moving people or When dual lidar point cloud is received, deskewing is applied on the dual lidar raw point objects) and the planar points and edge points are distinguished by calculating the curvature cloud to obtain deskewed dual lidar point cloud [14]. To be ground-optimized and reduce of the lidar point, as detailed in Section 2.2. After feature extraction, the dual lidar point the amount of calculation, point cloud segmentation is applied to filter out noise (e.g., cloud is registered to construct the lidar odometer factor. Then, the sliding window factor moving people or objects) and the planar points and edge points are distinguished by graph optimization method of local finite frame is used, with the IMU pre-integration factor and lidar odometer factor jointly optimized to realize the mutual parameter optimization of the dual lidar and IMU. Joint optimization is taken to obtain a maximum a posteriori (MAP) estimation of the IMU states, as detailed in Section 3.4, avoiding the drift from IMU propagation. To further improve outdoor positioning accuracy, GPS is introduced in this framework. When GPS is loosely coupled with IMU through an Unscented Kalman Filter (UKF), GPS measurements are used to further improve the positioning accuracy of IMU, with the optimized IMU state then used for the state estimation of IMU at the next time step. GPS measurements are added, providing a relatively accurate initial position for localization, thereby further improving outdoor localization accuracy [25]. However, the Appl. Sci. 2022, 12, x FOR PEER REVIEW 6 of 19 calculating the curvature of the lidar point, as detailed in Section 2.2. After feature extrac- tion, the dual lidar point cloud is registered to construct the lidar odometer factor. Then, the sliding window factor graph optimization method of local finite frame is used, with the IMU pre-integration factor and lidar odometer factor jointly optimized to realize the mutual parameter optimization of the dual lidar and IMU. Joint optimization is taken to obtain a maximum a posteriori (MAP) estimation of the IMU states, as detailed in Section 3.4, avoiding the drift from IMU propagation. To further improve outdoor positioning accuracy, GPS is introduced in this framework. When GPS is loosely coupled with IMU through an Unscented Kalman Filter (UKF), GPS measurements are used to further im- prove the positioning accuracy of IMU, with the optimized IMU state then used for the state estimation of IMU at the next time step. GPS measurements are added, providing a Appl. Sci. 2022, 12, 939 6 of 18 relatively accurate initial position for localization, thereby further improving outdoor lo- calization accuracy [25]. However, the drift of lidar inertial odometry grows very slowly; in practice, GPS measurements are used when the estimated position covariance is larger drift of lidar inertial odometry grows very slowly; in practice, GPS measurements are used than the received GPS position covariance. Finally, the output of the figure is the global when the estimated position covariance is larger than the received GPS position covariance. map and localization. Finally, the output of the figure is the global map and localization. Figure Figure 2. 2. An An o overview verview o of our f our prproposed fram oposed framework. ework. 3.3. Fusion of Horizontal Lidar and Vertical Lidar 3.3. Fusion of Horizontal Lidar and Vertical Lidar To accurately fuse the horizontal and vertical lidar data, their individual coordinate To accurately fuse the horizontal and vertical lidar data, their individual coordinate systems must be transformed into a unified coordinate system, which is termed “frame systems must be transformed into a unified coordinate system, which is termed “frame coordinate system”. The horizontal lidar coordinate system is used as the frame coordinate coordinate system”. The horizontal lidar coordinate system is used as the frame coordi- system; the vertical lidar coordinate system is registered in this frame coordinate system nate system; the vertical lidar coordinate system is registered in this frame coordinate sys- through external parameter calibration, with the external parameters obtained by the CAD tem through external parameter calibration, with the external parameters obtained by the model of the scanning arm. This paper adopts the adaptive time synchronization algorithm CAD model of the scanning arm. This paper adopts the adaptive time synchronization to ensure that the timestamps of the horizontal and vertical lidars can be simultaneously algorithm to ensure that the timestamps of the horizontal and vertical lidars can be sim- output. The fusion process of dual lidar is divided into two parts: (1) external parameter ultaneously output. The fusion process of dual lidar is divided into two parts: (1) external calibration of the horizontal and vertical lidars and (2) the adaptive time synchronization algorithm. Both parts are described as follows. parameter calibration of the horizontal and vertical lidars and (2) the adaptive time syn- chronization algorithm. Both parts are described as follows. 3.3.1. External Parameter Calibration of Horizontal Lidar and Vertical Lidar Assuming that L is the horizontal lidar coordinate system and L is the vertical lidar 3.3.1. External Param 1 eter Calibration of Horizontal Lidar and Vert 2ical Lidar coordinate system, the horizontal lidar coordinate system is used as the frame coordinate Assuming that L is the horizontal lidar coordinate system and L is the vertical li- 1 2 system and the vertical lidar coordinate system L is registered in this frame coordinate L L dar coordinate system, the horizontal lidar coordinate system is used as the frame coordi- 1 1 system L through the rotation matrix R and translation matrix H , which can be L L 2 2 computed by Equation (5): L L 1 1 L = R  L + H (5) 1 2 L L 2 2 L L 1 1 where R is the rotation matrix and H is the translation matrix which can be obtained by L L 2 2 external parameters. From the hardware structure of the scanning arm, the vertical lidar coordinate system first rotates around the y-axis and then translates to the horizontal lidar coordinate system, where 2 3 cos q 0 sin q 4 5 R = 0 1 0 (6) sin q 0 cos q Appl. Sci. 2022, 12, x FOR PEER REVIEW 7 of 19 nate system and the vertical lidar coordinate system L is registered in this frame coor- L L 1 1 dinate system L through the rotation matrix  and translation matrix H , which L L 2 2 can be computed by Equation (5): LL LL =×  +H (5) Appl. Sci. 2022, 12, x FOR PEER REVIEW 12 7 of 19 LL L L 1 1 where  is the rotation matrix and H is the translation matrix which can be ob- L L 2 2 nate system and the vertical lidar coordinate system L is registered in this frame coor- tained by external parameters. From the hardware structure of the scanning arm, the ver- L L 1 1 dinate system L through the rotation matrix  and translation matrix H , which L L 2 2 tical lidar coordinate system first rotates around the y -axis and then translates to the hor- can be computed by Equation (5): izontal lidar coordinate system, LL LL =×  +H (5) LL where 22 L L 1 1 where  is the rotation matrix and H is the translation matrix which can be ob- L L cosθθ 0 sin  2 2 L tained by external parameters. From the hardware structure of the scanning arm, the ver-  = 01 0 (6)  tical lidar coordinate system first rotates around the y -axis and then translates to the hor-  −sinθθ 0 cos izontal lidar coordinate system,  where Appl. Sci. 2022, 12, 939 7 of 18 where the angle of the rotation matrix θ =−77.94 and the translation matrix cosθθ 0 sin  1  Hx =(,0,z) , with  = and01 0 . 1 xm =−0.177 zm =−0 .213 (6) wher Le the angle of the rotation matrixLq = 77.94 and the translation matrix H = (x, 0, z), 2 2 L  −sinθθ 0 cos with x = 0.177 m and z = 0.213 m.  The point cloud of the lidar after external parameter calibration in the frame coordi- The point cloud of the lidar after external parameter calibration in the frame coordinate nate system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars where the angle of the rotation matrix θ =−77.94 and the translation matrix system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars not af notHx fecting af =(, fect each0,z ing) other , with each ot . her. and . xm =−0.177 zm =−0.213 The point cloud of the lidar after external parameter calibration in the frame coordi- nate system is shown in Figure 3, with the point clouds of the vertical and horizontal lidars not affecting each other. Figure 3. Dual lidar point clouds’ information after external parameter calibration. Figure 3. Dual lidar point clouds’ information after external parameter calibration. Figure 3. Dual lidar point clouds’ information after external parameter calibration. 3.3.2. The Adaptive Time Synchronization Algorithm 3.3.2. The Adaptive Time Synchronization Algorithm 3.3.2. The Adaptive Time Synchronization Algorithm In this paper, the lidar time synchronization is achieved through hardware time syn- In this paper, the lidar time synchronization is achieved through hardware time syn- chronization and GNSS is used as the master clock. However, there is a time offset between chronization and In this paGNSS is used as per, the lidar ti the master me synchroniz clock. However, th ation is aere is chiev a time ed thoffset be rough ha - rdware time syn- the two lidars. As the time offset is constant, an adaptive timestamp synchronization tween the two lidars. As the time offset is constant, an adaptive timestamp synchroniza- chronization and GNSS is used as the master clock. However, there is a time offset be- algorithm is used to solve it. The adaptive time synchronization algorithm is used to tion algorithm is used to solve it. The adaptive time synchronization algorithm is used to tween the two lidars. As the time offset is constant, an adaptive timestamp synchroniza- match its timestamp and realize the simultaneous localization and mapping of the lidars. match its timestamp and realize the simultaneous localization and mapping of the lidars. The tion a timestamps lgorithm of i the s used dual lidar to sol isv shown e it. The ada in Figurep 4ti , wher ve time e lidar_201/scan synchronization algorithm represents is used to The timestamps of the dual lidar is shown in Figure 4, where lidar_201/scan represents the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the match its timestamp and realize the simultaneous localization and mapping of the lidars. vertical lidar. vertical lidar. The timestamps of the dual lidar is shown in Figure 4, where lidar_201/scan represents the timestamp of the horizontal lidar and lidar_202/scan represents the timestamp of the vertical lidar. Figure 4. Timestamps of dual lidar. Figure 4. Timestamps of dual lidar. The output of the adaptive time synchronization algorithm only depends on the The output of the adaptive time synchronization algorithm only depends on the timestamp, not on the arrival time of dual lidar point cloud messages. It means that the timestamp, not on the arrival time of dual lidar point cloud messages. It means that the adaptive time synchronization algorithm can be safely used on dual lidar point cloud Figure 4. Timestamps of dual lidar. messages that have suffered arbitrary processing delays. As shown in Figure 5, time goes from left to right, with the first row representing the horizontal lidar timestamp and the The output of the adaptive time synchronization algorithm only depends on the second row representing the vertical lidar timestamp. Each dot represents the lidar point cloud with the timestamp. The blue dot represents the pivot of the lidar point clouds, timestamp, not on the arrival time of dual lidar point cloud messages. It means that the with the broken line linking the dual lidar point clouds in a set. Suppose the horizontal lidar point clouds’ queue is PL and the vertical lidar point clouds’ queue is PV , with P P lidar point clouds inserted in a topic-specific queue (PL or PV ) as they arrive. Once P P each topic-specific queue contains at least one message, the latest message is found at the head of the queues, known as the pivot. Assume that the queue with pivots is PL , which matches PV , being the smallest difference between the timestamps, T. The two queues are combined into a set; finally, this set is synchronous output. Appl. Sci. 2022, 12, x FOR PEER REVIEW 8 of 19 adaptive time synchronization algorithm can be safely used on dual lidar point cloud messages that have suffered arbitrary processing delays. As shown in Figure 5, time goes from left to right, with the first row representing the horizontal lidar timestamp and the second row representing the vertical lidar timestamp. Each dot represents the lidar point cloud with the timestamp. The blue dot represents the pivot of the lidar point clouds, with the broken line linking the dual lidar point clouds in a set. Suppose the horizontal lidar point clouds’ queue is PL and the vertical lidar point clouds’ queue is PV , with lidar P P point clouds inserted in a topic-specific queue ( PL or PV ) as they arrive. Once each P P topic-specific queue contains at least one message, the latest message is found at the head of the queues, known as the pivot. Assume that the queue with pivots is PL , which matches PV , being the smallest difference between the timestamps, . The two queues Appl. Sci. 2022, 12, 939 8 of 18 are combined into a set; finally, this set is synchronous output. Figure 5. Timestamp matching of dual lidar. Figure 5. Timestamp matching of dual lidar. 3.4. Tight Coupling of Dual Lidar and IMU 3.4. Tight Coupling of Dual Lidar and IMU Although the fusion of dual lidar makes up for the shortcomings of lidar ’s narrow Although the fusion of dual lidar makes up for the shortcomings of lidar’s narrow FOV, the localization accuracy is improved. However, lidars mounted on moving vehicles FOV, the localization accuracy is improved. However, lidars mounted on moving vehicles suffer from motion distortion, which directly affects the localization accuracy. IMU can suffer from motion distortion, which directly affects the localization accuracy. IMU can accurately measure the three-axis acceleration and three-axis angular velocity of moving accurately measure the three-axis acceleration and three-axis angular velocity of moving vehicles, and provide a priori information for lidar odometry. However, in the case of vehicles, and provide a priori information for lidar odometry. However, in the case of long or short distances, the IMU error will continue to accumulate, which directly affects long or short distances, the IMU error will continue to accumulate, which directly affects positioning accuracy. In this paper, the tight coupling of dual lidar and IMU is proposed to posi solve tioni the ng above accura pr cy. In thi oblems.sThe paper, the ti process is gh divided t couplin into g of d two ual parts: lidar (1) and external IMU is p parameter roposed to solve the above problems. The process is divided into two parts: (1) external parameter calibration of horizontal lidar and IMU and time synchronization, and (2) joint optimization, cal which ibratar ion e o described f horizont as alfollows: lidar and IMU and time synchronization, and (2) joint optimiza- tion, which are described as follows: 3.4.1. External Parameter Calibration of Horizontal Lidar and IMU and Time Synchronization 3.4.1. Similar Externato l PSection aramete3.3 r C , a the librexternal ation of parameters Horizontal L of idar theand lidar IMU andand T IMU ar im eealso Sync obtained hroni- zat based ion on the CAD model. To accurately fuse data from the horizontal lidar, the vertical lidar, and the IMU, the IMU coordinate system I is also registered in this frame coordinate Similar to Section 3.3, the external parameters of the lidar and IMU are also obtained L L 1 1 system L through the rotation matrix R and the translation matrix H . It can be seen based on the 1 CAD model. To accurately fuse data from the horizontal lidar, the vertical I I from the structure of the scanning arm in Figure 1 that the IMU is located directly under the lidar, and the IMU, the IMU coordinate system I is also registered in this frame coordi- horizontal lidar. Therefore, the IMU coordinate system can be registered in the horizontal L L 1 1 L  Η 1 I I nate system through the rotation matrix and the translation matrix . It can lidar coordinate system only by translation, where be seen from the structure of the scanning arm in Figure 1 that the IMU is located directly 2 3 1 0 0 under the horizontal lidar. Therefore, the IMU coordinate system can be registered in the 4 5 R = 0 1 0 (7) horizontal lidar coordinate system only by translation, 0 0 1 where 1 10 0  where the translation matrix H = [0.62.66, 0,0.125]. L   = 01 0 In this paper, YIS510A IMU is used, which supports trigger correction of its internal (7)  clock by PPS signal. Lidar-IMU time synchronization is achieved through hardware time  00 1  synchronization, with GNSS used as the master clock. GNSS can output PPS signals to unify the clock source of lidar-IMU and achieve time synchronization. where the translation matrix Η=[−0.62.66,0,−0.125] . 3.4.2. Joint Optimization In this paper, a fixed-lag smoother and marginalization are introduced to obtain the optimal state. The fixed-lag smoother maintains a certain amount of IMU state in the sliding window, and the sliding window can effectively control the amount of calculation. When the new state enters the sliding window, the past state is marginalized. The state W W W variable to be estimated for the whole window is X = [X , . . . , X ,Z ], where X is the h k I h state of IMU at the starting point h of the sliding window; X is the state of IMU at the end L L L 1 1 1 of the sliding window; and Z = [R , H ] is the external parameter between lidar and I I I IMU. Then, the following cost function with a Mahalanobis norm is minimized to obtain the MAP estimation of the states X, < = 2 2 V X = min ku (X)k + ku (c, X)k c + ku (Z , X)k (8) o å f å g V C V+1 X 2> L ; : V+1 c2Q V2fh,...,k1g b Appl. Sci. 2022, 12, 939 9 of 18 where u (X) is the prior information from marginalization [26]. u (c, X) is the residual o f of the relative lidar constraints that can be represented as point-to-plane distance [26], where c 2 Q is the residual for each relative lidar measurement with the previous correspondences. b 2 fh + 1, . . . , kg, h + 1 and k are the timestamps of the lidar sweep next to the starting one and the current lidar sweep in the window, with the covariance c V matrix C determined by the lidar accuracy [35]. u (Z , X) is the residual of the IMU V+1 constraints, where 2 3 T 1 2 R p p v Dt gDt Dp V+1 V 2 V,V+1 V,V+1 6 7 6 7 6 R v v gDt Dv 7 V V+1 V,V+1 V,V+1 6 7 h i V 6 7 1 1 u (Z , X) = 6 7 (9) g 2 Dq V+1 V,V+1 V V+1 6 7 xyz 6 7 a a 6 7 b b 4 V+1 V 5 w w b b V+1 V u (Z , X) can be obtained by IMU state prediction (Equation (2)) and IMU pre-integration V+1 h i 1 1 (Equation (3)), and Dq q stands for the vector part of a quaternion. V,V+1 V V+1 xyz With the continuous-time linearized propagation of the error states and the IMU noise parameters, the covariances C of the pre-integration measurements and biases can be V+1 estimated. The cost function, in the form of a non-linear least-square, can be solved by the Gauss–Newton algorithm. Ceres Solver [36] is used for solving this nonlinear problem. It can be seen from Section 3.2 that the new states X are obtained by the joint optimiza- Appl. Sci. 2022, 12, x FOR PEER REVIEW 10 of 19 tion, which is used as the next state of the IMU, avoiding the drift from IMU propagation. The state of the IMU is applied to deskewing, thereby eliminating the motion distortion of the dual lidar. 4. Experiment 4. Experiment 4.1. Data Acquisition Equipment 4.1. Data Acquisition Equipment To verify the performance of the proposed high-precision SLAM framework based To verify the performance of the proposed high-precision SLAM framework based on on the tight coupling of Dual Lidar Inertial Odometry (HSDLIO) for multi-scene applica- the tight coupling of Dual Lidar Inertial Odometry (HSDLIO) for multi-scene applications, tions, this paper describes a series of experiments to qualitatively and quantitatively ana- this paper describes a series of experiments to qualitatively and quantitatively analyze our lyze our proposed framework. The device runs on an industrial computer with a proces- proposed framework. The device runs on an industrial computer with a processor of i7-9700 sor of i7-9700 through the port and network port, and the system configured of the indus- through the port and network port, and the system configured of the industrial computer is trial computer is ubuntu16.04. The scanning arm is installed on the backpack, with the ubuntu16.04. The scanning arm is installed on the backpack, with the industrial computer industrial computer and battery installed in the backpack. The small display screen of the and battery installed in the backpack. The small display screen of the backpack shows the backpack shows the localization and mapping in real-time, with the backpack equipment localization and mapping in real-time, with the backpack equipment shown in Figure 6. shown in Figure 6. Data of three representative environments are collected, including the Data of three representative environments are collected, including the featureless corridor, featureless corridor, stairs with height, and complex outdoor environments. To illustrate stairs with height, and complex outdoor environments. To illustrate the effectiveness of the the effectiveness of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. compared with it. Figure 6. Backpack collection equipment. Figure 6. Backpack collection equipment. 4.2. Indoor Experiment 1 Indoor experiment 1 was carried out in a corridor with few feature points. Since there was no GPS signal indoors, SLAM was mainly carried out through the tight coupling of dual lidar odometry and IMU. To highlight the performance of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. Figure 7a shows the environment map of the corridor, while Figure 7b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the corridor scene, with the white points representing backpack motion tra- jectories. Compared with the HSDLIO algorithm in Figure 7d, the overall mapping of LeGO-LOAM (Figure 7b) has a significant deviation and the structure is unclear. Figure 8a–b shows that the mapping of LeGO-LOAM is incomplete because the lidar has a con- siderable drift in a scene with fewer feature points. The mapping results of LIO-SAM (Fig- ure 7c) is close to the real environment, with its structure complete. The main reason is that LIO-SAM tightly couples the horizontal lidar and IMU to make up for the lidar drift. However, Figure 9a–b indicates the mapping result of HSDLIO is more abundant than LIO-SAM in the top surface and ground information. The reason is that HSDLIO fuses horizontal and vertical lidars so that its angle of FOV in both horizontal and vertical di- rections is 360°. The dual lidars and IMU are tightly coupled to make up for the lidar drift. Appl. Sci. 2022, 12, 939 10 of 18 4.2. Indoor Experiment 1 Indoor experiment 1 was carried out in a corridor with few feature points. Since there was no GPS signal indoors, SLAM was mainly carried out through the tight coupling of dual lidar odometry and IMU. To highlight the performance of the HSDLIO algorithm, LeGO-LOAM and LIO-SAM algorithms are compared with it. Figure 7a shows the environment map of the corridor, while Figure 7b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the corridor scene, with the white points representing backpack motion trajectories. Compared with the HSDLIO algorithm in Figure 7d, the overall mapping of LeGO-LOAM (Figure 7b) has a significant deviation and the structure is unclear. Figure 8a,b shows that the mapping of LeGO-LOAM is incomplete because the lidar has a considerable drift in a scene with fewer feature points. The mapping results of LIO-SAM (Figure 7c) is close to the real environment, with its structure complete. The main reason is that LIO-SAM tightly couples the horizontal lidar and IMU to make up for the lidar drift. However, Figure 9a,b indicates the mapping result of HSDLIO is more abundant than LIO-SAM in the top surface and ground information. The reason is that HSDLIO fuses horizontal and Appl. Sci. 2022, 12, x FOR PEER REVIEW 11 of 19 vertical lidars so that its angle of FOV in both horizontal and vertical directions is 360 . The dual lidars and IMU are tightly coupled to make up for the lidar drift. (a) (b) (c) (d) Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incom- scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incomplete. plete. (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete map- (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete mapping ping structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in the the corridor scene, with HSDLIO having a complete mapping structure. corridor scene, with HSDLIO having a complete mapping structure. (a) (b) Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer and more complete than LeGO-LOAM. Appl. Sci. 2022, 12, x FOR PEER REVIEW 11 of 19 (a) (b) (c) (d) Figure 7. (a) Corridor scene for experiment 1. (b) Mapping results of LeGO-LOAM in the corridor scene, with the mapping of LeGO-LOAM having a significant deviation and the structure incom- plete. (c) Mapping results of LIO-SAM in the corridor scene, with LIO-SAM having a complete map- ping structure but lacking information on the top and ground. (d) Mapping results of HSDLIO in Appl. Sci. 2022, 12, 939 11 of 18 the corridor scene, with HSDLIO having a complete mapping structure. (a) (b) Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of Appl. Sci. 2022, 12, x FOR PEER REVIEW Figure 8. Comparison of mapping details at the corner of the corridor, where (a) the mapping of 12 of 19 LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer and LeGO-LOAM has a significant deviation and (b) the mapping of the HSDLIO structure is clearer more complete than LeGO-LOAM. and more complete than LeGO-LOAM. (a) (b) Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of LIO- LIO-SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of the the environment. environment. In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure of HSDLIO mapping is more complete and the point cloud information on the ground and In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories of HSDLIO mapping is more complete and the point cloud information on the ground of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed and top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories in Figure 10 that the LeGO-LOAM trajectory has drifted. The main reason is that in the of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor ienvir n Figure 10 onment with tha fewer t the LeGO- feature points. LOAM tra Both LIO-SAM jectory ha ands drifted. HSDLIO use The main reason is tha the tight coupling t in the of IMU and lidar for positioning, which improves positioning accuracy. HSDLIO further LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor improves the accuracy of positioning through the tight coupling of dual lidars and IMU. environment with fewer feature points. Both LIO-SAM and HSDLIO use the tight cou- To further improve the positioning accuracy of HSDLIO, the relative translational error pling of IMU and lidar for positioning, which improves positioning accuracy. HSDLIO (RTE)—the distance from the start point to the end point—is introduced. In all experiments, further improves the accuracy of positioning through the tight coupling of dual lidars and the data collection started and ended at the same point; when the positioning accuracy of IMU. HSDLIO LeGO− LOAM LIO SAM -5 -10 -15 start point -20 -25 -30 15 20 -5 0 Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of LeGO-LOAM, with the latter having drifted. To further improve the positioning accuracy of HSDLIO, the relative translational error (RTE)—the distance from the start point to the end point—is introduced. In all ex- periments, the data collection started and ended at the same point; when the positioning accuracy of the device is very high, the start and end point will coincide. The RTE, when the backpack returns to the start point, is shown in Table 1. The RTE can be computed by Equation (10): 22 2 (10) RTE=+ x y + z oo o Appl. Sci. 2022, 12, x FOR PEER REVIEW 12 of 19 (a) (b) Figure 9. Comparison of mapping details at the top of the corridor, where (a) the mapping of LIO- SAM lacks top information and (b) the mapping of HSDLIO shows finer structural details of the environment. Appl. Sci. 2022, 12, 939 12 of 18 In the long corridor scene, compared with LeGO-LOAM and LIO-SAM, the structure of HSDLIO mapping is more complete and the point cloud information on the ground the device is very high, the start and end point will coincide. The RTE, when the backpack and top surface is richer. To verify the positioning accuracy of the HSDLIO, the trajectories returns to the start point, is shown in Table 1. The RTE can be computed by Equation (10): of LeGO-LOAM, LIO-SAM, and HSDLIO are compared in Figure 10. It can be observed in Figure 10 that the LeGO-LOAM trajectory has drifted. The main reason is that in the 2 2 2 RTE = x + y + z (10) o o LeGO-LOAM algorithm, the lidar odometry error continues to increase in a long corridor where x = x x , y = y y , z = z z , x , y , and z are the coordinates of the environment with fewer feature points. Both LIO-SAM and HSDLIO use the tight cou- o s e o s e s s o s e s starting point and x , y , and z are the coordinates of the end point. The result of LeGO- e e pling of IMU and lid e ar for positioning, which improves positioning accuracy. HSDLIO LOAM is not shown because its trajectory has severely shifted. Table 1 shows that the further improves the accuracy of positioning through the tight coupling of dual lidars and error of HSDLIO in the x and y direction is similar to LIO-SAM, but the accuracy in the z IMU. direction is higher than LIO-SAM and the RTE of HSDLIO is smaller than LIO-SAM. HSDLIO LeGO− LOAM LIO− SAM -5 -10 -15 start point -20 -25 -30 5 10 -5 Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the Figure 10. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory LeGO-LOAM, with the latter having drifted. of LeGO-LOAM, with the latter having drifted. Table 1. Relative translational error when the backpack returns to the starting point (meters). To further improve the positioning accuracy of HSDLIO, the relative translational Algorithm LeGO-LOAM LIO-SAM HSDLIO error (RTE)—the distance from the start point to the end point—is introduced. In all ex- x Fail 0.079 0.077 periments, the data collection started and ended at the same point; when the positioning y Fail 0.092 0.090 accuracy of the device is very high, the start and end point will coincide. The RTE, when z Fail 0.121 0.001 the backpack returns to the start point, is shown in Table 1. The RTE can be computed by RTE Fail 0.171 0.118 Equation (10): 4.3. Indoor Experiment 2 22 2 (10) RTE=+ x y + z oo o Indoor experiment 2 aimed to show the effectiveness of HSDLIO in the indoor envi- ronment with a certain height information. In this experiment, the backpack was carried up and down four flights of stairs. Figure 11a shows the stairs scene, while Figure 11b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms. The white points are backpack localization trajectories. LeGO-LOAM (Figure 11b) cannot obtain the structure of the stairs. Because the vertical FOV of the horizontal lidar is relatively narrow, the mapping result of LIO-SAM (Figure 11c) does not show the height information. With the help of the vertical lidar and the tight coupling of the IMU, the mapping of HSDLIO (Figure 11d) is the most complete and accurate, which is closer to the real stairs’ scene. Figure 12 shows the trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO. The red, black, and green lines represent the trajectories of HSDLIO, LeGO-LOAM, and LIO-SAM, respectively, with the trajectory of HSDLIO approximating the real trajectory. LeGO-LOAM has no IMU correction, which causes the drift of the trajectory. Although the trajectory of LIO-SAM does not drift, due to the narrow vertical FOV of the horizontal lidar, a significant error occurs in the z direction information. Comparing the height Appl. Sci. 2022, 12, x FOR PEER REVIEW 13 of 19 where xx=−x , yy=−y , zz=−z , x , y , and z are the coordinates of the os e os e os e s s s x y z starting point and , , and are the coordinates of the end point. The result of e e e LeGO-LOAM is not shown because its trajectory has severely shifted. Table 1 shows that the error of HSDLIO in the and direction is similar to LIO-SAM, but the accuracy in the x y z direction is higher than LIO-SAM and the RTE of HSDLIO is smaller than LIO-SAM. Table 1. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Fail 0.079 0.077 Fail 0.092 0.090 Fail 0.121 0.001 RTE Fail 0.171 0.118 4.3. Indoor Experiment 2 Appl. Sci. 2022, 12, 939 Indoor experiment 2 aimed to show the effectiveness of HSDLIO in the indoor e 13n of vi- 18 ronment with a certain height information. In this experiment, the backpack was carried up and down four flights of stairs. Figure 11a shows the stairs scene, while Figure 11b–d information of LIO-SAM and HSDLIO in the z direction in Figure 13, it can be seen shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms. The that when going up and down four floors of stairs, the localization results of LIO-SAM white points are backpack localization trajectories. LeGO-LOAM (Figure 11b) cannot ob- provide no height information and also indicate a great drift, while HSDLIO provides tain the structure of the stairs. Because the vertical FOV of the horizontal lidar is relatively accurate height information. Therefore, the tight coupling of dual lidars and IMU makes narrow, the mapping result of LIO-SAM (Figure 11c) does not show the height infor- the advantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in mation. With the help of the vertical lidar and the tight coupling of the IMU, the mapping the x and y direction is smaller than LIO-SAM, but the accuracy in the z direction is much of HSDLIO (Figure 11d) is the most complete and accurate, which is closer to the real higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. stairs’ scene. Appl. Sci. 2022, 12, x FOR PEER REVIEW 14 of 19 (a) (b) (c) (d) height information. Therefore, the tight coupling of dual lidars and IMU makes the ad- Figure 11. (a) Stairs scene for experiment 2. (b) Mapping results of LeGO-LOAM in the stairs scene, Figure 11. (a) Stairs scene for experiment 2. (b) Mapping results of LeGO-LOAM in the stairs scene, vantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in the x with LeGO-LOAM’s mapping result having failed. (c) Mapping results of LIO-SAM in the stairs with LeGO-LOAM’s mapping result having failed. (c) Mapping results of LIO-SAM in the stairs and y direction is smaller than LIO-SAM, but the accuracy in the z direction is much scene, with LIO-SAM’s mapping results lacking height information. (d) Mapping results of HSDLIO scene, with LIO-SAM’s mapping results lacking height information. (d) Mapping results of HSDLIO in the stairs scene, with the mapping of HSDLIO the most complete and accurate. higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. in the stairs scene, with the mapping of HSDLIO the most complete and accurate. Figure 12 shows the trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO. The red, black, and green lines represent the trajectories of HSDLIO, LeGO-LOAM, and HSDLIO LIO-SAM, respectively, with the trajectory of HSDLIO approximating the real trajectory. LeGO− LOAM LIO− SAM LeGO-LOAM has no IMU correction, which causes the drift of the trajectory. Although the trajectory of LIO-SAM does not drift, due to the narrow vertical FOV of the horizontal lidar, a significant error occurs in the z direction information. Comparing the height in- formation of LIO-SAM and HSDLIO in the z direction in Figure 13, it can be seen that when going up and down four floors of stairs, the localization results of LIO-SAM provide no height information and also indicate a great drift, while HSDLIO provides accurate -2 -2 -1 -2 Figure Figure 12. 12. Comparison Comparison of Le of LeGO-LOAM, GO-LOA LIO-SAM, M, LIO-S and AM, HSDLIO and HSDLIO trajectories. trajectories. The red The red line is the line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajec- trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory tory of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced cu- cumulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. mulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. 4.4. Outdoor Experiment The outdoor experiment was carried out on a city street. This scene has rich feature HSDLIO down four floors of stair points, so the drift of lidar odometry grows very slowly. The localization accuracy of LeGO-LOAM, LIO-SAM, and HSDLIO have very little difference, but the mapping of HSDLIO can provide finer structural details. go up four floors of stair 0 1020 304050 607080 90 100 LIO− SAM down four floors of stair go up four floors of stair -1 0 1020 304050 607080 90 100 Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The trajectory of HSDLIO provides height information from going up and down four floors of stairs, while LIO-SAM failed to position in the direction. Table 2. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Fail 0.033 0.011 Fail 0.036 0.024 Fail 0.062 0.004 RTE Fail 0.078 0.026 Appl. Sci. 2022, 12, x FOR PEER REVIEW 14 of 19 height information. Therefore, the tight coupling of dual lidars and IMU makes the ad- vantages of HSDLIO clearly apparent. Table 2 shows that the error of HSDLIO in the x and direction is smaller than LIO-SAM, but the accuracy in the z direction is much higher than LIO-SAM and the RTE of HSDLIO is much smaller than LIO-SAM. HSDLIO LeGO LOAM LIO− SAM -2 -2 -1 -2 Figure 12. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red line is the trajectory of HSDLIO, the blue line is the trajectory of LIO-SAM, and the black line is the trajectory of LeGO-LOAM. While LeGO-LOAM’s trajectory drifted and LIO-SAM’s trajectory produced cu- Appl. Sci. 2022, 12, 939 14 of 18 mulative errors in the z direction, the trajectory of HSDLIO approximated the real trajectory. HSDLIO down four floors of stair go up four floors of stair 0 1020 304050 607080 90 100 LIO− SAM down four floors of stair go up four floors of stair -1 0 1020 304050 607080 90 100 Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The Figure 13. Comparison of the height information of LIO-SAM and HSDLIO in the z direction. The trajectory of HSDLIO provides height information from going up and down four floors of stairs, trajectory of HSDLIO provides height information from going up and down four floors of stairs, while LIO-SAM failed to position in the z direction. while LIO-SAM failed to position in the direction. Appl. Sci. 2022, 12, x FOR PEER REVIEW 15 of 19 Table 2. Relative translational error when the backpack returns to the starting point (meters). Table 2. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO Algorithm LeGO-LOAM LIO-SAM HSDLIO x Fail 0.033 0.011 Fail 0.033 0.011 4.4. Outdoor Experi o ment y Fail 0.036 0.024 o Fail 0.036 0.024 The outdooro experiment was carried out on a city street. This scene has rich feature z Fail 0.062 0.004 points, so the drift of o lidar odometry grows very slowly. The localization accuracy of Fail 0.062 0.004 LeGO-LOAM, LIO-SAM, and HSDLIO have very little difference, but the mapping of RTE Fail 0.078 0.026 RTE Fail 0.078 0.026 HSDLIO can provide finer structural details. The trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO are shown in Fig- The trajectories obtained by LeGO-LOAM, LIO-SAM, and HSDLIO are shown in ure 14. Because there is more feature information outdoors, the drift of lidar odometry Figure 14. Because there is more feature information outdoors, the drift of lidar odometry grows very slowly. LeGO-LOAM can obtain relatively accurate localization only through grows very slowly. LeGO-LOAM can obtain relatively accurate localization only through horizontal lidar. In LIO-SAM and HSDLIO, GPS measurement is introduced by loosely horizontal lidar. In LIO-SAM and HSDLIO, GPS measurement is introduced by loosely coupling with IMU through UKF, which can provide relatively accurate initial position- coupling with IMU through UKF, which can provide relatively accurate initial positioning. ing. The localization accuracy of HSDLIO and LIO-SAM is higher than LeGO-LOAM. The The localization accuracy of HSDLIO and LIO-SAM is higher than LeGO-LOAM. The available data in Table 3 further supports the performance of the HSDLIO algorithm. available data in Table 3 further supports the performance of the HSDLIO algorithm. HSDLIO LeGO− LOAM LIO− SAM -5 -10 -15 -20 0 204060 80 100 Figure 14. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red dashed line is Figure 14. Comparison of LeGO-LOAM, LIO-SAM, and HSDLIO trajectories. The red dashed line the trajectory of HSDLIO, the blue dashed line is the trajectory of LIO-SAM, and the black dashed is the trajectory of HSDLIO, the blue dashed line is the trajectory of LIO-SAM, and the black dashed line is the line is the traje trajectory ctory of LeGO-LOAM. of LeGO-LOAM. Table 3. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO RTE 0.082 0.036 0.031 The 3D city street scene is shown in Figure 15a. In Figure 15a–d, the marks in the white circle represent trees and the marks in the yellow circle represent the building struc- ture. Figure 15b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the city street scene, with the white points being their localization trajecto- ries. Figure 15b–c shows that the overall effect of the mapping lacks building height infor- mation, while the structure of buildings and trees is blurred. The mapping details are an- alyzed at the marker, compared with the mapping results of LeGO-LOAM and LIO-SAM (Figure 15b–c), while the HSDLIO mapping result is shown in Figure 15d. It can be seen in Figure 15d that the structure of the trees and buildings is complete and has finer details. Appl. Sci. 2022, 12, 939 15 of 18 Table 3. Relative translational error when the backpack returns to the starting point (meters). Algorithm LeGO-LOAM LIO-SAM HSDLIO RTE 0.082 0.036 0.031 The 3D city street scene is shown in Figure 15a. In Figure 15a–d, the marks in the white circle represent trees and the marks in the yellow circle represent the building structure. Figure 15b–d shows the overall mapping of LeGO-LOAM, LIO-SAM, and HSDLIO algorithms in the city street scene, with the white points being their localization trajectories. Figure 15b,c shows that the overall effect of the mapping lacks building height information, while the structure of buildings and trees is blurred. The mapping details are analyzed at the marker, compared with the mapping results of LeGO-LOAM and LIO-SAM Appl. Sci. 2022, 12, x FOR PEER REVIEW 16 of 19 (Figure 15b,c), while the HSDLIO mapping result is shown in Figure 15d. It can be seen in Figure 15d that the structure of the trees and buildings is complete and has finer details. (a) (b) (c) (d) Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM in in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM the city street scene provide a lack of building height information and the structure of buildings and in the city street scene provide a lack of building height information and the structure of buildings trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of build- and trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of ing height information, though the mapping of trees is relatively clear. (d) The mapping results of building height information, though the mapping of trees is relatively clear. (d) The mapping results HSDLIO in the city street scene show that the point cloud features of the trees and the building of HSDLIO in the city street scene show that the point cloud features of the trees and the building structure are finer and more complete, including finer structural details. structure are finer and more complete, including finer structural details. To further demonstrate the advantages of HSDLIO mapping, the mapping details of To further demonstrate the advantages of HSDLIO mapping, the mapping details the building are shown in Figure 16, with the mapping results of LeGO-LOAM and LIO- of the building are shown in Figure 16, with the mapping results of LeGO-LOAM and SAM lacking the structural details and height information of the building. However, the LIO-SAM lacking the structural details and height information of the building. However, mapping result of HSDLIO show finer structural details and more height information of the mapping result of HSDLIO show finer structural details and more height information the building. of the building. (a) (b) Appl. Sci. 2022, 12, x FOR PEER REVIEW 16 of 19 (a) (b) (c) (d) Figure 15. (a) City street scene, with the marks in the white circle representing trees and the marks in the yellow circle representing the building structure. (b) The mapping results of LeGO-LOAM in the city street scene provide a lack of building height information and the structure of buildings and trees is blurred. (c) The mapping results of LIO-SAM in the city street scene provide a lack of build- ing height information, though the mapping of trees is relatively clear. (d) The mapping results of HSDLIO in the city street scene show that the point cloud features of the trees and the building structure are finer and more complete, including finer structural details. To further demonstrate the advantages of HSDLIO mapping, the mapping details of the building are shown in Figure 16, with the mapping results of LeGO-LOAM and LIO- SAM lacking the structural details and height information of the building. However, the Appl. Sci. 2022, 12, 939 16 of 18 mapping result of HSDLIO show finer structural details and more height information of the building. (a) Appl. Sci. 2022, 12, x FOR PEER REVIEW 17 of 19 (b) (c) Figure 16. Comparison of mapping details in the city street scene showing that (a) the mapping of Figure 16. Comparison of mapping details in the city street scene showing that (a) the mapping LeGO-LOAM lacks structural details of the building, (b) the mapping of LIO-SAM lacks the height of LeGO-LOAM lacks structural details of the building, (b) the mapping of LIO-SAM lacks the of the building structure, and (c) the mapping of HSDLIO provides finer structural details of the height of the building structure, and (c) the mapping of HSDLIO provides finer structural details of building. the building. 5. Conclusions 5. Conclusions This paper proposes a high-precision SLAM framework for multi-scene applications. This paper proposes a high-precision SLAM framework for multi-scene applications. In this framework, dual lidars are fused to make up for the shortcomings of lidars’ narrow In this framework, dual lidars are fused to make up for the shortcomings of lidars’ narrow FOV FOV and h and hence ence improve t improve the he complet completeness eness o of f mapp mapping. ing. M Meanwhile, eanwhile, du dual al llidars idars a and nd IMU IMU are t are tightly ightly co coupled upled to to impr impr ove t ovehthe e loca localization lization accaccuracy uracy. Ext . e Extensive nsive experiment experiments s were ca wer r- e ried carried out out and and the results sh the results showed owed that that co compar mpared with the commonly ed with the commonlyused LeGO-LOAM used LeGO-LOAM and and LIO-SAM methods, o LIO-SAM methods, our ur propo proposed sed method method can can p pr roduce oduce more more precise precise loca localization lization and and more accurate mapping results with more details. more accurate mapping results with more details. Author Contributions: Conceptualization, K.X.; methodology, K.X. and W.Y.; software, K.X., W.Y., Author Contributions: Conceptualization, K.X.; methodology, K.X. and W.Y.; software, K.X., W.Y., W.L. and F.Q W.L. and F.Q.;.; validation, validation, W W.Y..Y. and Z and Z.M.; .M.; fo formal rmal analy analysis, sis, K.X. K.X. an and W d W.L.; .L.; investigation, investigation, W.L. W.L. and K.X.; and K.X. resour ; re ces, souK.X rces, . K.X and .W and .Y.; W.Y data.;curation, data curation, K.X. and K.X. W and W .Y.; writing—original .Y.; writing—orig draft, inal draft, K K.X.; writing—r .X.; writing— eview and editing, K.X. and W.Y.; visualization, K.X.; supervision, K.X. and Z.M.; project administration, review and editing, K.X. and W.Y.; visualization, K.X.; supervision, K.X. and Z.M.; project admin- istration, K.X. and K.X W.Y..; afunding nd W.Y.; fu acquisition, nding acqu K.X. isition, and W K.X. .Y. All and W.Y authors . All authors have read and agreed to the have read and agreed to the published pu version blished of vers the manuscript. ion of the manuscript. Fund Funding: ing: This resear This researchch wa was funded s funded by by the th National e National Natural Na Science tural Sc Foundation ience Foun (grant dation (g nos.rant nos. 61602529 61602529 and and 61672539).61672539). Thi This work wass work was al also supported so supported by Hunan Key by Hunan Key Laboratory of Laboratory of Intelligent Logistics Intelligent Tech- Logistics Technology (2019TP1015) and Scientific Research Project of Hunan Education Department nology (2019TP1015) and Scientific Research Project of Hunan Education Department (No. 17C1650). (No. 17C1650). Institutional Review Board Statement: Not applicable. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest. References 1. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. 2. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. 3. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on Simultaneous Localization and Mapping (SLAM). In Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), George Town, Malaysia, 25–27 No- vember 2016; pp. 85–90. 4. Belter, D.; Nowicki, M.; Skrzypczy’Nski, P. Lightweight RGB-D SLAM System for Search and Rescue Robots, Progress in Auto- mation, Robotics and Measuring Techniques; Springer: Cham, Switzerland,2015; Volume 2, pp. 11–21. 5. Sim, R.; Roy, N. Global A-Optimal Robot Exploration in Slam. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp.661–666. 6. Cheng, Z.; Wang, G. Real-Time RGB-D SLAM with Points and Lines. In Proceedings of the 2018 2nd IEEE Advanced Infor- mation Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 119–122. 7. Liu, T.; Zhang, X.; Wei, Z.; Yuan, Z. A robust fusion method for RGB-D SLAM. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; pp. 474–481. Appl. Sci. 2022, 12, 939 17 of 18 Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest. References 1. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [CrossRef] 2. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [CrossRef] 3. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on Simultaneous Localization and Mapping (SLAM). In Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), George Town, Malaysia, 25–27 November 2016; pp. 85–90. 4. Belter, D.; Nowicki, M.; Skrzypczy’Nski, P. Lightweight RGB-D SLAM System for Search and Rescue Robots. In Progress in Automation, Robotics and Measuring Techniques; Springer: Cham, Switzerland, 2015; Volume 2, pp. 11–21. 5. Sim, R.; Roy, N. Global A-Optimal Robot Exploration in Slam. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 661–666. 6. Cheng, Z.; Wang, G. Real-Time RGB-D SLAM with Points and Lines. In Proceedings of the 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 119–122. 7. Liu, T.; Zhang, X.; Wei, Z.; Yuan, Z. A robust fusion method for RGB-D SLAM. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; pp. 474–481. 8. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-Time Loop Closure in 2D LIDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–20 May 2016; pp. 1271–1278. 9. Yu, Y.; Gao, W.; Liu, C. A GPS-aided Omnidirectional Visual-Inertial State Estimator in Ubiquitous Environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7750–7755. 10. Wang, Z.; Zhang, J.; Chen, S. Robust High Accuracy Visual-Inertial-Lidar SLAM System. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 6636–6641. 11. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [CrossRef] 12. Zhou, Y.; Li, H.; Kneip, L. Canny-VO: Visual Odometry with RGB-D Cameras Based on Geometric 3-D–2-D Edge Alignment. IEEE Trans. Robot. 2019, 35, 184–199. [CrossRef] 13. Hu, G.; Huang, S.; Zhao, L.; Alempijevic, A.; Dissanayake, G. A robust RGB-D SLAM algorithm. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 1714–1719. 14. Zhang, J.; Singh, S. Loam: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems X, Berkeley, CA, USA, 12–16 July 2014. 15. Tsardoulias, E.; Petrou, L. Critical rays scan match SLAM. J. Intell. Robot. Syst. 2013, 72, 441–462. [CrossRef] 16. Jiang, B.; Zhu, Y.; Liu, M. A Triangle Feature Based Map-to-map Matching and Loop Closure for 2D Graph SLAM. In Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; pp. 2719–2725. 17. Shan, T.; Englot, B.J. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. 18. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D lidar scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169. 19. Ranganathan, A. The levenberg-marquardt algorithm. Tutoral LM Algorithm 2004, 11, 101–110. 20. Torres-Torriti, M.; Guesalaga, A. Scan-to-map matching using the Hausdorff distance for robust mobile robot localization. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 19–23 May 2008; pp. 455–460. 21. Hassani, A.; Morris, N.; Spenko, M. Experimental integrity evaluation of tightly-integrated IMU/LiDAR including return-light intensity data. In Proceedings of the 32nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION), Miami, FL, USA, 16–20 September 2019; pp. 2637–2658. 22. Velas, M.; Spanel, M.; Hradis, M. CNN for IMU assisted odometry estimation using velodyne LiDAR. In Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal, 25–27 April 2018; pp. 71–77. 23. Deilamsalehy, H.; Havens, T.C. Sensor fused three-dimensional localization using IMU, camera and LiDAR. In Proceedings of the 2016 IEEE Sensors, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3. 24. Xie, G.; Zong, Q.; Zhang, X. Loosely-coupled lidar-inertial odometry and mapping in real time. Int. J. Intell. Robot. Appl. 2021, 5, 119–129. [CrossRef] Appl. Sci. 2022, 12, 939 18 of 18 25. Moore, T.; Stouch, D. A generalized extended kalman filter implementation for the robot operating system. In Intelligent Autonomous Systems 13; Springer: Cham, Switzerland, 2016; pp. 335–348. 26. Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3d lidar inertial odometry and mapping. In Proceedings of the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. 27. Shan, T.; Englot, B.; Meyers, D. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. 28. Wellington, C.; Stentz, A. Learning predictions of the load-bearing surface for autonomous rough-terrain navigation in vegetation. In Springer Tracts in Advanced Robotics (STAR); Springer: Berlin/Heidelberg, Germany, 2003; Volume 24, pp. 83–92. 29. Bosse, M.; Zlot, R. Continuous 3D scan-matching with a spinning 2D laser. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 4312–4319. 30. Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a spring-mounted 3-d range sensor with application to mobile mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [CrossRef] 31. Palieri, M.; Morrell, B. Locus: A multi-sensor lidar-centric solution for high-precision odometry and 3d mapping in real-time. IEEE Robot. Autom. Lett. 2020, 6, 421–428. [CrossRef] 32. Jiao, J.; Ye, H.; Zhu, Y.; Liu, M. Robust odometry and mapping for multi-lidar systems with online extrinsic calibration. IEEE Trans. Robot. 2021, 1–10. [CrossRef] 33. Nguyen, T.M.; Yuan, S. MILIOM: Tightly Coupled Multi-Input Lidar-Inertia Odometry and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 5573–5580. [CrossRef] 34. Forster, C.; Carlone, L.; Dellaert, F. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans. Robot. Autom. 2017, 33, 1–21. [CrossRef] 35. Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [CrossRef] 36. Ceres Solver. Available online: http://ceres-solver.org (accessed on 5 October 2021).

Journal

Applied SciencesMultidisciplinary Digital Publishing Institute

Published: Jan 18, 2022

Keywords: simultaneous localization and mapping; dual lidar inertial odometry; IMU; time synchronization; tight coupling

There are no references for this article.