Coarse-to-fine Hybrid 3D Mapping System with Co-calibrated Omnidirectional Camera and Non-repetitive Mid-360 LiDAR
This work was developed by Ziliang Miao, Buwei He, and Wenya Xie, supervised by Prof. Xiaoping Hong (ISEE-Lab, SDIM, SUSTech), and has been accepted by IEEE Robotics and Automation Letters (RA-L).
Paper: https://arxiv.org/pdf/2301.12934.pdf
Demo Video: https://www.youtube.com/watch?v=Uh0C9VL9YEQ
Example Dataset: https://drive.google.com/file/d/13jmpb2Ft_-LXh990wIxUrSLnYEimYKsa/view?usp=share_link
Update: Versatile Cocalibration Method for Both Omnidirectional and Pinhole Camera-LiDAR https://github.com/ZiliangMiao/Cocalibration
This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive-scanning LiDAR (Livox Mid-360) and an omnidirectional camera (please consult Huanjun Technology, panoholic). Thanks to the non-repetitive scanning pattern of the LiDAR, an automatic targetless co-calibration method is proposed that simultaneously estimates the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and the LiDAR, a step that is essential for attaching color and texture information to the point clouds in surveying and mapping tasks. Comparisons and analyses are made against target-based intrinsic calibration and mutual-information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot supports both an odometry-based mapping mode and a stationary mapping mode. We also propose a new coarse-to-fine mapping workflow: efficient, coarse mapping of the global environment in the odometry-based mode; viewpoint planning in the region of interest (ROI) based on the coarse map (building on our previous work); and navigation to each viewpoint for finer, more precise stationary scanning and mapping of the ROI. The fine map is then stitched into the global coarse map, yielding a result that is more efficient than conventional stationary approaches and more precise than emerging odometry-based approaches.
Version: Ubuntu 18.04.
Version: ROS Melodic.
Please follow ROS Installation to install.
Version: ceres-solver 2.1.0
Please follow Ceres-Solver Installation to install.
Version: PCL 1.7.4
Version: Eigen 3.3.4
Please follow PCL Installation to install.
Version: OpenCV 3.2.0
Please follow OpenCV Installation to install.
Version: mlpack 3.4.2
Please follow mlpack Installation to install.
The SDK and driver are used to communicate with the Livox LiDAR. Remember to install the Livox SDK before the Livox ROS Driver.
The SDK for the fisheye camera is the MindVision SDK.
Currently, this cocalibration method only supports omnidirectional cameras and non-repetitive-scanning LiDARs (Livox).
If you want to calibrate a monocular camera with a Livox LiDAR, please replace the omnidirectional camera model with a monocular camera model and modify the corresponding optimization parameters.
We will consider supporting other types of LiDAR in the future.
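Swapping camera models mainly means changing the forward projection used inside the optimization. Below is a minimal NumPy sketch contrasting a pinhole projection with an equidistant fisheye projection, the latter serving only as a simplified stand-in for the full omnidirectional (OCamCalib-style polynomial) model; function names and intrinsic values are illustrative, not from this repository.

```python
import numpy as np

def project_pinhole(pts, fx, fy, cx, cy):
    """Project (N, 3) camera-frame points (Z > 0) with a pinhole model."""
    x = pts[:, 0] / pts[:, 2]
    y = pts[:, 1] / pts[:, 2]
    return np.stack([fx * x + cx, fy * y + cy], axis=1)

def project_equidistant_fisheye(pts, f, cx, cy):
    """Project with an equidistant fisheye model: image radius r = f * theta."""
    r_xy = np.hypot(pts[:, 0], pts[:, 1])          # distance from optical axis in XY
    theta = np.arctan2(r_xy, pts[:, 2])            # angle from the optical axis
    r = f * theta                                  # image radius
    scale = np.zeros_like(r_xy)
    np.divide(r, r_xy, out=scale, where=r_xy > 0)  # avoid 0/0 for on-axis points
    return np.stack([pts[:, 0] * scale + cx, pts[:, 1] * scale + cy], axis=1)
```

Near the optical axis the two models agree closely, which gives a quick sanity check when switching projection models in the optimizer.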
Create the data, (dataset_name), and cocalibration directories according to the file structure below.
Rename the accumulated non-repetitive scanned point cloud "full_fov_cloud.pcd", and rename the HDR image "hdr_image.bmp".
Put the two raw files into the ~/cocalibration/data/(dataset_name)/cocalibration directory.
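The three preparation steps above can be sketched as a shell session; the dataset name and raw file names below are placeholders, so substitute your own.

```shell
# Illustrative only: dataset name and raw file names are hypothetical.
DATASET=demo_scene
ROOT="$HOME/cocalibration/data/$DATASET/cocalibration"
mkdir -p "$ROOT"

# Suppose the raw captures were saved as scan_accumulated.pcd and cam_hdr.bmp;
# empty stand-ins are created here so the snippet runs end to end.
touch scan_accumulated.pcd cam_hdr.bmp

# Rename them to the exact names the pipeline expects.
mv scan_accumulated.pcd "$ROOT/full_fov_cloud.pcd"
mv cam_hdr.bmp "$ROOT/hdr_image.bmp"
```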
Modify the parameters in the config file, cocalibration.yaml.
Recommended Kernel Density Estimation (KDE) bandwidth schedule (coarse to fine): 32, 16, 8, 4, 2
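The decreasing bandwidth schedule implements a coarse-to-fine search: a wide kernel smooths the edge-alignment cost so the optimizer can make progress from a poor initial guess, and narrower kernels then sharpen the optimum for precision. A NumPy-only 1-D sketch of a Gaussian KDE illustrates the effect (the real cost is built over 2-D edge images; all values here are illustrative):

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth):
    """Evaluate a 1-D Gaussian kernel density estimate at query points x.

    samples : (N,) observed values (e.g. camera edge-pixel coordinates)
    x       : (M,) query points (e.g. projected LiDAR edge coordinates)
    """
    d = (np.asarray(x)[:, None] - np.asarray(samples)[None, :]) / bandwidth
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2.0 * np.pi))

# Toy stand-in: camera edges at 0, and a projected LiDAR edge either aligned
# (offset 0) or misaligned by 50 pixels.
edges = np.zeros(100)
for bw in (32, 16, 8, 4, 2):                      # the coarse-to-fine schedule
    aligned = gaussian_kde(edges, [0.0], bw)[0]   # density at the optimum
    offset = gaussian_kde(edges, [50.0], bw)[0]   # density far from the optimum
    print(f"bandwidth {bw:2d}: aligned {aligned:.4f}, offset {offset:.6f}")
```

With bandwidth 32 the density is still non-negligible 50 pixels away (a usable gradient from a bad initialization), while with bandwidth 2 the peak at the optimum is much sharper.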
├── cocalibration
│   ├── build
│   ├── config
│   │   └── cocalibration.yaml
│   ├── data
│   │   └── (dataset_name)
│   │       └── cocalibration
│   │           ├── edges
│   │           │   ├── lidar_1_filtered.bmp
│   │           │   ├── lidar_2_canny.bmp
│   │           │   ├── lidar_edge_image.bmp
│   │           │   ├── lidar_edge_cloud.pcd
│   │           │   ├── omni_1_filtered.bmp
│   │           │   ├── omni_2_canny.bmp
│   │           │   └── omni_edge_image.bmp
│   │           ├── results
│   │           │   ├── fusion_image_init.bmp
│   │           │   ├── fusion_image_(bandwidth).bmp
│   │           │   ├── cocalib_init.txt
│   │           │   └── cocalib_(bandwidth).txt
│   │           ├── full_fov_cloud.pcd
│   │           ├── flat_lidar_image.bmp
│   │           └── hdr_image.bmp
│   ├── launch
│   │   └── cocalibration.launch
│   ├── include
│   │   ├── common_lib.h
│   │   ├── define.h
│   │   ├── lidar_process.h
│   │   ├── omni_process.h
│   │   └── optimization.h
│   ├── python_scripts
│   │   └── image_process
│   │       ├── omni_image_mask.png
│   │       ├── lidar_flat_image_mask.png
│   │       └── edge_extraction.py
│   ├── src
│   │   ├── lidar_process.cpp
│   │   ├── omni_process.cpp
│   │   ├── optimization.cpp
│   │   └── cocalibration.cpp
│   ├── package.xml
│   └── CMakeLists.txt
├── ReadMe.md
├── .git
└── .gitignore
cd ~/(catkin_workspace)
catkin_make
source ./devel/setup.bash
roslaunch cocalibration cocalibration.launch
Thanks to CamVox, Livox-SDK, the OCamCalib MATLAB Toolbox, and FAST-LIO, and thanks to Wenquan Zhao, Xiao Huang, and Jian Bai for their help.