diff --git a/README.md b/README.md
index 072390e..3e12ead 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ We strongly recommend that the newly proposed SLAM algorithm be tested on our [M
 1. **Rich sensory information** including vision, lidar, IMU, GNSS,event, thermal-infrared images and so on
 2. **Various scenarios** in real-world environments including lifts, streets, rooms, halls and so on.
 3. Our dataset brings **great challenge** to existing cutting-edge SLAM algorithms including [LIO-SAM](https://github.com/TixiaoShan/LIO-SAM) and [ORB-SLAM3](https://github.com/UZ-SLAMLab/ORB_SLAM3). If your proposed algorihm outperforms these SOTA systems on our benchmark, your paper will be much more convincing and valuable.
-4. 🔥 Extensive excellent **open-source** projects have been built or evaluated on M2DGR/M2DGE-plus so far, for examples, [**Ground-Fusion**](https://github.com/SJTU-ViSYS/Ground-Fusion), [LVI-SAM-Easyused](https://github.com/Cc19245/LVI-SAM-Easyused), [MM-LINS](https://github.com/lian-yue0515/MM-LINS/), [Log-LIO](https://github.com/tiev-tongji/LOG-LIO), [LIGO](https://github.com/Joanna-HE/LIGO.), [Swarm-SLAM](https://github.com/MISTLab/Swarm-SLAM), [VoxelMap++](https://github.com/uestc-icsp/VoxelMapPlus_Public), [GRIL-Cali](https://github.com/Taeyoung96/GRIL-Calib), [LINK3d](https://github.com/YungeCui/LinK3D), [i-Octree](https://github.com/zhujun3753/i-octree), [LIO-EKF](https://github.com/YibinWu/LIO-EKF), [Fast-LIO ROS2](https://github.com/Lee-JaeWon/FAST_LIO_ROS2), [HC-LIO](https://github.com/piluohong/hc_lio), [LIO-RF](https://github.com/YJZLuckyBoy/liorf), [PIN-SLAM](https://github.com/PRBonn/PIN_SLAM), [LOG-LIO2](https://github.com/tiev-tongji/LOG-LIO2), [Section-LIO](https://github.com/mengkai98/Section-LIO), [I2EKF-LO](https://github.com/YWL0720/I2EKF-LO), [Liloc](https://github.com/Yixin-F/LiLoc) and so on!
+4. 🔥 Many excellent **open-source** projects have been built on or evaluated with M2DGR/M2DGR-plus so far, for example, [**Ground-Fusion**](https://github.com/SJTU-ViSYS/Ground-Fusion), [LVI-SAM-Easyused](https://github.com/Cc19245/LVI-SAM-Easyused), [MM-LINS](https://github.com/lian-yue0515/MM-LINS/), [LOG-LIO](https://github.com/tiev-tongji/LOG-LIO), [LIGO](https://github.com/Joanna-HE/LIGO.), [Swarm-SLAM](https://github.com/MISTLab/Swarm-SLAM), [VoxelMap++](https://github.com/uestc-icsp/VoxelMapPlus_Public), [GRIL-Calib](https://github.com/Taeyoung96/GRIL-Calib), [LinK3D](https://github.com/YungeCui/LinK3D), [i-Octree](https://github.com/zhujun3753/i-octree), [LIO-EKF](https://github.com/YibinWu/LIO-EKF), [FAST-LIO ROS2](https://github.com/Lee-JaeWon/FAST_LIO_ROS2), [HC-LIO](https://github.com/piluohong/hc_lio), [LIO-RF](https://github.com/YJZLuckyBoy/liorf), [PIN-SLAM](https://github.com/PRBonn/PIN_SLAM), [LOG-LIO2](https://github.com/tiev-tongji/LOG-LIO2), [Section-LIO](https://github.com/mengkai98/Section-LIO), [I2EKF-LO](https://github.com/YWL0720/I2EKF-LO), [LiLoc](https://github.com/Yixin-F/LiLoc), [BMBL](https://github.com/YixFeng/Block-Map-Based-Localization), [Light-LOAM](https://github.com/BrenYi/Light-LOAM) and so on!
 ## Table of Contents
@@ -80,9 +80,11 @@ We strongly recommend that the newly proposed SLAM algorithm be tested on our [M
 - **SLAM modules:**
   - [LinK3D: Linear Keypoints Representation for 3D LiDAR Point Cloud](https://arxiv.org/pdf/2206.05927v3) from RA-L2024
   - [VoxelMap++: Mergeable Voxel Mapping Method for Online LiDAR(-inertial) Odometry](https://arxiv.org/pdf/2308.02799) from RA-L2023
-
+
+- **SLAM systems:**
+  - [Block-Map-Based Localization in Large-Scale Environment](https://arxiv.org/pdf/2404.18192) from ICRA2024
   - [A High-Precision LiDAR-Inertial Odometry via Invariant Extended Kalman Filtering and Efficient Surfel Mapping](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10484977) from TIM2024
   - [I2EKF-LO: A Dual-Iteration Extended Kalman Filter Based LiDAR Odometry](https://arxiv.org/pdf/2407.02190) from IROS2024