From 6b63c8344452227f8032a1c26e09e950cfa45462 Mon Sep 17 00:00:00 2001
From: Yin Jie <42110520+sjtuyinjie@users.noreply.github.com>
Date: Thu, 16 May 2024 16:17:27 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 643e524..8ada8c9 100644
--- a/README.md
+++ b/README.md
@@ -77,7 +77,7 @@ Thanks Jialin Liu (Fudan University) for his work to test LVI-SAM on M2DGR. Foll
 1. A rich pool of sensory information including vision, lidar, IMU, GNSS,event, thermal-infrared images and so on
 2. Various scenarios in real-world environments including lifts, streets, rooms, halls and so on.
 3. Our dataset brings great challenge to existing SLAM algorithms including LIO-SAM and ORB-SLAM3. If your proposed algorihm outperforms SOTA systems on M2DGR, your paper will be much more convincing and valuable.
-4. A lot of excellent projects have been tested on M2DGR/M2DGE-plus, for examples, Ground-Fusion, Swarm-SLAM, DAMS-LIO, VoxelMap++, GRIL-Cali, and so on!
+4. A lot of excellent projects have been tested on M2DGR/M2DGR-plus, for example, [Ground-Fusion](https://github.com/SJTU-ViSYS/Ground-Fusion), [Swarm-SLAM](https://github.com/MISTLab/Swarm-SLAM), DAMS-LIO, [VoxelMap++](https://github.com/uestc-icsp/VoxelMapPlus_Public), [GRIL-Cali](https://github.com/SJTU-ViSYS/Ground-Fusion), and so on!

 ## ABSTRACT: