A vertical constraint enhanced method for indoor LiDAR SLAM algorithms
28 December 2022
Zhennan Tian, Ningbo Bu, Jiangjian Xiao, Gen Xu, Haolei Ru, Jianfei Ge
Proceedings Volume 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022); 125060G (2022) https://doi.org/10.1117/12.2662203
Event: International Conference on Computer Science and Communication Technology (ICCSCT 2022), 2022, Beijing, China
Abstract
LiDAR-based SLAM systems are widely applied in robotics for their accuracy and robustness. However, accurate localization in small-scale indoor scenes is challenging, since the point cloud features of a sparse line-scan LiDAR cannot always provide sufficient spatial constraints. In our experiments, even state-of-the-art methods suffer from severe odometry drift. In this paper, to address this problem, we propose a method that improves the performance of existing algorithms in small-scale indoor scenes. By installing a second LiDAR perpendicular to the existing one, our method enhances the constraint in the vertical direction. We test our method on LOAM and FAST-LIO2, and the results show significant improvements on our own collected datasets. In addition, to accurately merge the two LiDARs' point clouds, we propose a high-accuracy dual-LiDAR calibration method with rotation and translation errors of less than 0.005 rad and 0.01 m respectively.

1. INTRODUCTION

Simultaneous Localization and Mapping (SLAM) is the process by which a robot estimates its position and builds a map of the environment without a priori information, using only its onboard sensors. It is a prerequisite for many robotic applications, such as autonomous navigation, path planning, and 3D scene reconstruction. In comparison with visual SLAM, LiDAR-based SLAM obtains accurate distances directly and is not susceptible to illumination changes; it is therefore usually more stable and accurate. Over the past decades, the LiDAR SLAM problem has received a lot of attention, and some advanced algorithms have achieved excellent performance. LiDAR Odometry and Mapping (LOAM) is a well-known LiDAR SLAM system, and many methods with better performance have subsequently been built on it.

However, in small indoor scenes, a horizontally installed LiDAR with a small vertical field of view cannot scan the ceiling or the floor and must rely on the point cloud of the walls alone. The extracted feature points are then matched without good constraints in the vertical direction, and most existing algorithms suffer from odometry drift along the z-axis of the LiDAR coordinate system. To tackle this issue, building on previous methods, we propose a vertical constraint enhanced indoor mapping and localization method based on a dual-LiDAR system. By installing a vertically posed LiDAR in the system, which can scan the ceiling and the floor, the vertical constraint is enhanced and the odometry drift in the vertical direction is reduced. Figure 1 shows the setup, the data collection, and the performance of our method. Furthermore, to accurately merge the point clouds of the two LiDARs, we also develop a dual-LiDAR extrinsic calibration method. Our contributions are summarized as follows:

  • We propose a dual-LiDAR-based vertical constraint enhanced method that can improve the performance of existing SLAM algorithms in small-scale indoor scenes.

  • We propose a 3D scanner based dual-LiDAR extrinsic calibration method with high accuracy.

Figure 1. Data collection scene and map building performance.

2. RELATED WORK

2.1 LiDAR SLAM

Zhang et al. [1,2] proposed the LOAM algorithm in 2014, which divides the SLAM problem into a high-frequency, low-accuracy odometry problem and a high-accuracy, low-frequency map-building problem. LOAM offers good localization accuracy and real-time performance, and although it was proposed in 2014, it still ranks well on the KITTI Odometry Benchmark [3].

In recent years, several improved algorithms based on LOAM have been proposed. Shan et al. [4] proposed LeGO-LOAM (Lightweight and Ground-Optimized LOAM), which applies point cloud segmentation to filter out a large number of noise points, optimizes the ground points, and splits pose estimation into two stages. Ji et al. [5] proposed LLOAM (Lidar Odometry and Mapping with Loop-closure detection), which adds a back-end based on factor graph optimization and a front-end with loop-closure detection based on point cloud segment matching.

Incorporating an IMU can significantly improve the performance of LiDAR odometry, and the high frequency of IMU measurements compensates well for motion distortion. LIO-SAM [6] is a tightly-coupled LiDAR-inertial odometry framework built on a factor graph, which requires a 9-axis IMU to provide pose estimates for scan registration within a small local map. In contrast to optimization-based methods like LIO-SAM, FAST-LIO [7] and FAST-LIO2 [8] are filtering-based methods using an iterated Kalman filter. In addition, FAST-LIO2 proposes a new data structure, the ikd-Tree, which allows raw points to be registered directly to the map without feature extraction, making the algorithm more efficient.

2.2 Calibration for Dual-LiDAR systems

Multi-LiDAR extrinsic calibration methods are usually classified into two types: target-based and motion-based.

Target-based methods use feature information such as points, lines, and planes in the environment. Gong et al. [9] propose building a point cloud map with a benchmark LiDAR under an assumption of geometric consistency (i.e., assuming a consistent local 3D model across different LiDAR scans) and then aligning the other LiDARs to the built map with the ICP algorithm. This method works for multiple LiDARs but depends heavily on the initial poses. Xie et al. [10] propose jointly calibrating multiple cameras and LiDARs in a pre-built environment with AprilTags, but this method requires a large number of high-accuracy measurements and is costly to implement.

Motion-based methods use motion information to solve the hand-eye calibration equation. Jiao et al. [11] proposed estimating the motion of each LiDAR, using hand-eye calibration to obtain the initial extrinsics, and then refining the result with a point-to-plane distance metric. The Baidu Apollo team [12] proposed calibration with the aid of GNSS (Global Navigation Satellite System) auxiliary measurements, which has been applied to autonomous vehicles.
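For concreteness, the constraint solved by these motion-based methods is the standard hand-eye calibration equation (our summary of the common formulation, not an equation reproduced from the cited papers):

$$ \mathbf{A}_k \mathbf{X} = \mathbf{X} \mathbf{B}_k, $$

where $\mathbf{A}_k$ and $\mathbf{B}_k$ are the relative motions of the two LiDARs over the same time interval, estimated by running odometry on each LiDAR independently, and $\mathbf{X} \in SE(3)$ is the unknown extrinsic between them.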

3. METHODOLOGY

3.1 Dual-LiDAR extrinsic calibration

The calibration scene is a structured hall containing three planes with linearly independent normals that can be scanned simultaneously by the two LiDARs. Figure 2 shows a typical scene consisting of two walls and a floor.

Figure 2. Method overview.

The method consists of two parts, dual-LiDAR calibration and point cloud merging, where the former supports the latter. The calibration first acquires the point clouds of the LiDARs and the scanner, extracts the three planes with linearly independent normals and matches them to obtain initial values, then refines them with ICP to get accurate values. Merging the point clouds requires frame (timestamp) alignment of the data, after which the point clouds are transformed into a common coordinate system. The merged point cloud is input to the LiDAR SLAM framework.

We set $p_L$ as the point cloud acquired by a LiDAR and $p_S$ as the point cloud acquired by the 3D scanner. In this work, LiDAR extrinsic calibration is converted into a point cloud registration problem, which can be described as

$$ (\mathbf{R}^{*}, \mathbf{t}^{*}) = \arg\min_{\mathbf{R},\,\mathbf{t}} \sum_{n} \left\lVert \mathbf{R}\, p_{L,n} + \mathbf{t} - p_{S,n} \right\rVert^{2}, $$

where $p_{L,n}$ and $p_{S,n}$ are corresponding points in the two clouds and $(\mathbf{R}, \mathbf{t})$ is the sought extrinsic.

Usually, the ICP (Iterative Closest Point) algorithm is used to estimate the LiDAR pose. However, directly applying ICP without a good initial guess results in low accuracy. It is therefore necessary to first obtain an initial extrinsic between the LiDAR and the 3D scanner, and then refine it iteratively with ICP.
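As an illustration of this coarse-to-fine registration, here is a minimal sketch using Open3D (our choice of library; the paper does not specify an implementation). The file names and the 5 cm correspondence gate are hypothetical:

```python
# Refine a plane-based initial extrinsic with point-to-plane ICP (sketch).
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("lidar_cloud.pcd")    # LiDAR scan p_L (hypothetical file)
target = o3d.io.read_point_cloud("scanner_cloud.pcd")  # 3D scanner cloud p_S (hypothetical file)
target.estimate_normals()                              # normals needed for point-to-plane ICP

T_init = np.eye(4)  # replace with the plane-based initial extrinsic from Section 3.1

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # 5 cm gate; scene-dependent assumption
    init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)  # refined 4x4 pose of the LiDAR in the scanner frame
```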

In this work, we obtain the initial extrinsic from the normal vectors of the three planes with linearly independent normals in the scene. We denote the i-th plane as $\Pi_i$ and the coefficients of its plane equation as $\beta_i = [\beta_{i,0}, \beta_{i,1}, \beta_{i,2}, \beta_{i,3}]^T$, so that $\Pi_i: \beta_{i,0}x + \beta_{i,1}y + \beta_{i,2}z + \beta_{i,3} = 0$.

The distance from a point $p_n = [x_n, y_n, z_n]^T$ in the point cloud to the plane $\Pi_i$ is

$$ d(p_n, \Pi_i) = \frac{\lvert \beta_{i,0}x_n + \beta_{i,1}y_n + \beta_{i,2}z_n + \beta_{i,3} \rvert}{\sqrt{\beta_{i,0}^2 + \beta_{i,1}^2 + \beta_{i,2}^2}}. $$

The coefficients of the three fitted plane equations are denoted $\beta_1$, $\beta_2$ and $\beta_3$. The intersection point $p^{\cap}$ of the three planes can be solved from

$$ \begin{bmatrix} \beta_{1,0} & \beta_{1,1} & \beta_{1,2} \\ \beta_{2,0} & \beta_{2,1} & \beta_{2,2} \\ \beta_{3,0} & \beta_{3,1} & \beta_{3,2} \end{bmatrix} p^{\cap} = - \begin{bmatrix} \beta_{1,3} \\ \beta_{2,3} \\ \beta_{3,3} \end{bmatrix}. $$

Further, denoting the unit normal vector $[\beta_{i,0}, \beta_{i,1}, \beta_{i,2}]^T$ of plane $i$ in the 3D scanner cloud as $n_i^S$ and the corresponding normal in the LiDAR cloud as $n_i^L$, the rotation matrix can be solved from

$$ \mathbf{R} \left[\, n_1^L \;\; n_2^L \;\; n_3^L \,\right] = \left[\, n_1^S \;\; n_2^S \;\; n_3^S \,\right]. $$

The translation vector then follows from the plane intersection points of the two clouds:

$$ \mathbf{t} = p^{\cap}_S - \mathbf{R}\, p^{\cap}_L. $$
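A minimal numpy sketch of the initial extrinsic computation described above, assuming the planes of the two clouds are already associated and their normals consistently oriented (function and variable names are ours, not the paper's):

```python
# Plane-based initial extrinsic: rotation from matched normals,
# translation from the three-plane intersection points (Section 3.1 sketch).
import numpy as np

def fit_plane(points):
    """Least-squares plane fit; returns unit normal n and offset d with n.x + d = 0."""
    centroid = points.mean(axis=0)
    # the right singular vector with smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    n = vt[-1]
    return n, -n @ centroid

def intersection(planes):
    """Common point of three planes: solve N p = -d."""
    N = np.stack([n for n, _ in planes])          # normals as rows
    d = np.array([d for _, d in planes])
    return np.linalg.solve(N, -d)

def initial_extrinsic(lidar_planes, scanner_planes):
    """R, t mapping LiDAR coordinates into the scanner frame."""
    NL = np.stack([n for n, _ in lidar_planes], axis=1)    # columns n_i^L
    NS = np.stack([n for n, _ in scanner_planes], axis=1)  # columns n_i^S
    R = NS @ np.linalg.inv(NL)                             # R NL = NS
    # project onto SO(3), since noisy normals give a non-orthogonal matrix
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    t = intersection(scanner_planes) - R @ intersection(lidar_planes)
    return R, t
```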

3.2 LiDAR SLAM

Our proposed method can be plugged into different LiDAR SLAM frameworks. For the two LiDAR data streams, to accurately merge the point clouds we first need to align the frames by timestamp, which can be done either by hardware triggering or by ROS timestamp alignment in post-processing. After alignment, the point clouds collected at the same time are merged, undistorted, and input into the existing SLAM framework.
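As an illustration of the second option, here is a minimal sketch of ROS timestamp alignment with message_filters (topic names and the 50 ms slop are hypothetical; the paper does not give them):

```python
# Approximate-time pairing of the two LiDAR point cloud streams.
import message_filters
import rospy
from sensor_msgs.msg import PointCloud2

def synced_callback(cloud_horizontal, cloud_vertical):
    # both messages arrive here with approximately equal header stamps;
    # hand the pair to the merging step (see the sketch below)
    pass

rospy.init_node("dual_lidar_sync")
sub_h = message_filters.Subscriber("/velodyne_h/points", PointCloud2)
sub_v = message_filters.Subscriber("/velodyne_v/points", PointCloud2)
sync = message_filters.ApproximateTimeSynchronizer([sub_h, sub_v],
                                                   queue_size=10, slop=0.05)
sync.registerCallback(synced_callback)
rospy.spin()
```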

Merging point clouds means unifying the two point clouds into the same coordinate system, which requires the extrinsic calibrated by the method above. The unified coordinate system can be the coordinate system of one LiDAR, so that only the other point cloud needs to be transformed, saving computation and giving better real-time performance; alternatively, both point clouds can be transformed into the frame of another sensor or an agreed reference coordinate system.
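A minimal numpy sketch of the first, cheaper option: transform the vertical LiDAR's points into the horizontal LiDAR's frame with the calibrated extrinsic and concatenate (array names and shapes are our assumptions):

```python
# Merge two synchronized scans into the horizontal LiDAR's coordinate system.
import numpy as np

def merge_clouds(points_h, points_v, R, t):
    """points_h, points_v: (N,3) arrays; (R, t): vertical-to-horizontal extrinsic."""
    points_v_in_h = points_v @ R.T + t   # apply p' = R p + t row-wise
    return np.vstack([points_h, points_v_in_h])
```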

4. EXPERIMENT

The method is validated on the self-developed backpack platform shown in Figure 3, which consists of two Velodyne VLP-16 LiDARs mounted perpendicular to each other. The 3D scanner used for the calibration is a Z+F IMAGER 5010C, which provides millimeter-level accuracy.

Figure 3. Calibration scene and hardware platform.

4.1 Dual-LiDAR extrinsic calibration

To verify the accuracy of the proposed method, we collected LiDAR point clouds at 5 locations and scanner point clouds at 2 locations in the scene of Figure 3, extracted three planes with linearly independent normals (a white wall, a red wall and the ground) to compute the initial extrinsic, and repeated the computation 10 times for each set of point clouds, totalling 100 repeated experiments. Figure 4 shows the statistics of the extrinsic over the repeated experiments; the values remain stable within a narrow range. Precision is evaluated by the standard deviation of the dual-LiDAR extrinsic parameters $R_x$, $R_y$, $R_z$, $t_x$, $t_y$ and $t_z$, denoted $\sigma_{R_x}$, $\sigma_{R_y}$, $\sigma_{R_z}$, $\sigma_{t_x}$, $\sigma_{t_y}$ and $\sigma_{t_z}$. Table 1 reports these standard deviations over the 100 repeated experiments.
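As a minimal sketch of this precision metric (the data layout and file name are our assumptions, not from the paper):

```python
# Per-parameter standard deviation over repeated calibration runs (Table 1's metric).
import numpy as np

# hypothetical layout: one row per run, columns [Rx, Ry, Rz, tx, ty, tz]
runs = np.loadtxt("repeated_extrinsics.txt")
sigma = runs.std(axis=0)
for name, value in zip(["Rx", "Ry", "Rz", "tx", "ty", "tz"], sigma):
    print(f"sigma_{name} = {value:.4f}")
```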

Figure 4. Repeated experiments data statistics.

Table 1. Standard deviation of the extrinsic.

                  x-axis    y-axis    z-axis
Rotation (deg)    0.0019    0.0014    0.0016
Translation (m)   0.0036    0.0044    0.0038

The repeated experiments verify the accuracy and robustness of the proposed method, supporting accurate fusion of the two LiDAR point clouds.

4.2 Vertical constraint enhanced LiDAR SLAM

We tested the method on our self-developed backpack system with LOAM and FAST-LIO2 as the base frameworks. Experiment 1 measures the odometry drift of the system held stationary in a small room, comparing against LOAM. Experiment 2 evaluates the mapping performance in a corridor with a floor height of 3 m and in a staircase with a width of 3 m, comparing against FAST-LIO2.

Figure 5 shows the Experiment 1 scene and the point clouds scanned by the LiDARs before and after the improvement. The white points make up the complete point cloud of the scene, and the colored lines indicate the points scanned by the LiDARs. Figure 6 shows the 15-minute trajectory in the stationary state, where the x-axis is time, the y-axis is the trajectory position along a given direction, the red line marks the trajectory of the original LOAM, and the blue line marks the trajectory of the vertical constraint enhanced LiDAR SLAM.

Figure 5. Point cloud of LiDAR scanning before and after the improvement.

Figure 6. 15-minute trajectory at stationary state.

Figure 6 illustrates that the improved method stabilizes within a certain range more quickly in each direction and exhibits a smaller oscillation range after stabilization. The improvement is most noticeable in the z-direction.

Figure 7 shows the point cloud registration results in the corridor in Experiment 2. The left side shows the result with the raw FAST-LIO2, where registration fails and the odometry drifts heavily in the narrow areas of the corridor; the right side shows the enhanced FAST-LIO2, which completes the whole registration and produces a better-resolved map. Figure 8 shows the registration results for the 3 m wide staircase in Experiment 2. As before, the raw method fails to register in the staircase, while the enhanced method completes the registration.

Figure 7. Point cloud registration results in the corridor.

Figure 8. Point cloud registration results in the staircase.

Both experiments show that, compared with the original methods, our method significantly reduces odometry drift and improves registration performance in small-scale indoor scenes.

5. CONCLUSION

In this paper, we present a vertical constraint enhanced method for indoor LiDAR SLAM. We exploit a vertically installed LiDAR to improve the performance of existing SLAM algorithms in small-scale indoor scenes, and propose a method to calibrate the multi-LiDAR extrinsic with the help of a 3D scanner. The experimental results show that our method is effective and mitigates the odometry drift problem. The results also show that our calibration method has high accuracy, with rotation and translation errors below 0.005 rad and 0.01 m respectively, allowing the point clouds of the two LiDARs to be merged accurately.

ACKNOWLEDGMENTS

This research was funded by Ningbo Science and Technology Innovation 2025 Major Project (2020Z019, 2021Z037, 2020Z013, 2021XFCX35).

REFERENCES

[1] Zhang, J. and Singh, S., "LOAM: Lidar odometry and mapping in real-time," Robotics: Science and Systems, 2 (9), 1–9 (2014).

[2] Zhang, J. and Singh, S., "Low-drift and real-time lidar odometry and mapping," Autonomous Robots, 41 (2), 401–416 (2017). https://doi.org/10.1007/s10514-016-9548-2

[3] Geiger, A., Lenz, P., Stiller, C. and Urtasun, R., "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, 32 (11), 1231–1237 (2013).

[4] Shan, T. and Englot, B., "LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain," in 2018 IEEE/RSJ Inter. Conf. on Intelligent Robots and Systems (IROS), 4758–4765 (2018).

[5] Ji, X., Zuo, L., Zhang, C. and Liu, Y., "LLOAM: Lidar odometry and mapping with loop-closure detection based correction," in 2019 IEEE Inter. Conf. on Mechatronics and Automation (ICMA), 2475–2480 (2019).

[6] Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C. and Rus, D., "LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping," in 2020 IEEE/RSJ Inter. Conf. on Intelligent Robots and Systems (IROS), 5135–5142 (2020).

[7] Xu, W. and Zhang, F., "FAST-LIO: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter," IEEE Robotics and Automation Letters, 6 (2), 3317–3324 (2021). https://doi.org/10.1109/LRA.2021.3064227

[8] Xu, W., Cai, Y., He, D., Lin, J. and Zhang, F., "FAST-LIO2: Fast direct lidar-inertial odometry," IEEE Transactions on Robotics, (2022).

[9] Gong, Z., Wen, C., Wang, C. and Li, J., "A target-free automatic self-calibration approach for multibeam laser scanners," IEEE Transactions on Instrumentation and Measurement, 67 (1), 238–240 (2017). https://doi.org/10.1109/TIM.2017.2757148

[10] Xie, Y., Shao, R., Guli, P., Li, B. and Wang, L., "Infrastructure based calibration of a multi-camera and multi-lidar system using AprilTags," in 2018 IEEE Intelligent Vehicles Symposium (IV), 605–610 (2018).

[11] Jiao, J., Yu, Y., Liao, Q., Ye, H., Fan, R. and Liu, M., "Automatic calibration of multiple 3d lidars in urban environments," in 2019 IEEE/RSJ Inter. Conf. on Intelligent Robots and Systems (IROS), 15–20 (2019).

[12] Apollo Auto, "Multiple-LiDAR GNSS Calibration Guide," https://github.com/ApolloAuto/apollo
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.