
# SLABIM: A SLAM-BIM Coupled Dataset in HKUST Main Building

Haoming Huang, Zhijian Qiao, Zehuan Yu, Chuhao Liu, Shaojie Shen and Huan Yin

Submitted to the 2025 IEEE International Conference on Robotics and Automation (ICRA)

## News

- **15 Sep 2024**: We submitted our paper to IEEE ICRA.

## Abstract

SLABIM is an open-sourced SLAM dataset coupled with BIM (Building Information Modeling).

Features:

- **Large-scale Building Information Modeling**: The BIM model of this dataset is part of the digital twin project at HKUST, featuring various types of offices, classrooms, lounges, and corridors.
- **Multi-session & Multi-sensor Data**: We collected 12 sessions across different floors and regions, encompassing various indoor scenarios.
- **Dataset Validation**: To demonstrate the practicality of SLABIM, we test three different tasks: (1) LiDAR-to-BIM registration, (2) robot pose tracking on BIM, and (3) semantic mapping.

## Download

The download link will be available soon.

## Dataset Structure

1. `BIM/` contains CAD files (`.dxf`) and mesh files (`.ply`) exported from the original BIM models, organized by storey and semantic tag. Users can sample the meshes at a chosen density to obtain point clouds, offering flexibility for various robotic tasks (a sampling sketch follows the directory tree below).

2. `calibration_files/` provides the camera intrinsic parameters and the camera-to-LiDAR extrinsic parameters (a projection sketch appears in the Data Acquisition Platform section).

3. In the `sensor_data/` directory, each session is named `<X>F_Region<Y>`, with X = 1, 3, 4, 5 and Y = 1, 2, 3 indicating the storey and region of collection, e.g., `3F_Region1`. Each session contains the images and point clouds produced by the camera and LiDAR.

4. `data_<x>.bag`, x = 0, 1, 2..., are the rosbags encoding the raw sensor streams, which can be parsed using standard ROS tools.

5. `sensor_data/` also contains the maps generated by SLAM, including the submaps used for LiDAR-to-BIM registration and the optimized map produced by the offline mapping system.

6. `pose_frame_to_bim.txt`, `pose_map_to_bim.txt`, and `pose_submap_to_bim.txt` contain the ground-truth poses from LiDAR scans, maps, and submaps to the BIM coordinate frame. These poses are refined from a manually provided initial guess via local point cloud alignment (a pose-loading sketch follows this list).
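As a usage sketch, the snippet below loads one of these pose files and transforms a LiDAR frame into BIM coordinates. The line layout (`frame_id tx ty tz qx qy qz qw`) and the frame id `000000` are assumptions for illustration, not the documented format; check the released files.

```python
# Hedged sketch: apply a ground-truth pose to a LiDAR frame.
# Assumed line layout: "<frame_id> tx ty tz qx qy qz qw" (not documented here).
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

def load_poses(path):
    poses = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 8:
                continue  # skip headers or blank lines
            v = np.array(fields[1:8], dtype=float)
            T = np.eye(4)
            T[:3, :3] = Rotation.from_quat(v[3:7]).as_matrix()  # qx qy qz qw
            T[:3, 3] = v[:3]
            poses[fields[0]] = T
    return poses

poses = load_poses("sensor_data/3F_Region1/points/pose_frame_to_bim.txt")
scan = o3d.io.read_point_cloud("sensor_data/3F_Region1/points/data/000000.pcd")  # hypothetical frame id
scan.transform(poses["000000"])  # scan is now expressed in BIM coordinates
```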

```
SLABIM
├── BIM
│   └── <X>F
│       ├── CAD
│       │   └── <X>F.dxf
│       └── mesh
│           ├── columns.ply
│           ├── doors.ply
│           ├── floors.ply
│           └── walls.ply
├── calibration_files
│   ├── cam_intrinsics.txt
│   └── cam_to_lidar.txt
└── sensor_data
    └── <X>F_Region<Y>
        ├── images
        │   ├── data
        │   │   └── <frame_id>.png
        │   └── timestamps.txt
        ├── map
        │   ├── data
        │   │   ├── colorized.las
        │   │   └── uncolorized.ply
        │   └── pose_map_to_bim.txt
        ├── points
        │   ├── data
        │   │   └── <frame_id>.pcd
        │   ├── pose_frame_to_bim.txt
        │   └── timestamps.txt
        ├── rosbag
        │   └── data_<x>.bag
        └── submap
            ├── data
            │   └── <submap_id>.pcd
            └── pose_submap_to_bim.txt
```

## Data Acquisition Platform

The handheld sensor suite is illustrated in Figure 1. A more detailed summary of its characteristics can be found in Table 1.
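To relate the two sensors, the calibration files can be used to project LiDAR points into the camera image. The sketch below assumes `cam_to_lidar.txt` holds a 4x4 homogeneous camera-from-LiDAR transform and `cam_intrinsics.txt` begins with `fx fy cx cy`; verify both assumptions against the released files.

```python
# Hedged sketch: project LiDAR points into the image plane.
import numpy as np
import open3d as o3d

T_cam_lidar = np.loadtxt("calibration_files/cam_to_lidar.txt").reshape(4, 4)
fx, fy, cx, cy = np.loadtxt("calibration_files/cam_intrinsics.txt").ravel()[:4]

scan = o3d.io.read_point_cloud("sensor_data/3F_Region1/points/data/000000.pcd")  # hypothetical frame id
pts = np.asarray(scan.points)
pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
pc = (T_cam_lidar @ pts_h.T).T[:, :3]              # points in the camera frame
pc = pc[pc[:, 2] > 0]                              # keep points in front of the camera
u = fx * pc[:, 0] / pc[:, 2] + cx                  # pixel column
v = fy * pc[:, 1] / pc[:, 2] + cy                  # pixel row
```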

## Qualitative Results on SLABIM

### Global LiDAR-to-BIM Registration

Global LiDAR-to-BIM registration aims to estimate the transformation between a LiDAR submap and the BIM coordinate system without an initial guess. A robot can localize itself globally by aligning its online-built submap to the BIM.
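As a generic starting point (not the method benchmarked in the paper), a feature-based global registration between a submap and a point cloud sampled from the BIM can be sketched with Open3D; the input file names and voxel size are placeholders:

```python
# Hedged sketch: FPFH + RANSAC global registration of a submap to the BIM.
import open3d as o3d

def preprocess(cloud, voxel):
    down = cloud.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    feat = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, feat

voxel = 0.2  # metres, example value
submap, f_sub = preprocess(o3d.io.read_point_cloud("sensor_data/3F_Region1/submap/data/0.pcd"), voxel)  # hypothetical id
bim, f_bim = preprocess(o3d.io.read_point_cloud("bim_3f_sampled.ply"), voxel)  # sampled from the BIM mesh

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    submap, bim, f_sub, f_bim, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print(result.transformation)  # estimated submap -> BIM transform
```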

### Robot Pose Tracking on BIM

Unlike global LiDAR-to-BIM registration, pose tracking estimates poses given an initial state and sequential measurements.
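A minimal tracking loop under these assumptions: each incoming scan is aligned to a point cloud sampled from the BIM with point-to-plane ICP, seeded by the previous estimate. This is a generic baseline, not the tracker evaluated in the paper; file names are placeholders.

```python
# Hedged sketch: sequential scan-to-BIM pose tracking with ICP.
import glob
import numpy as np
import open3d as o3d

bim = o3d.io.read_point_cloud("bim_3f_sampled.ply")  # sampled from the BIM mesh
bim.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

T = np.eye(4)  # placeholder initial state; use a global registration result here
for scan_path in sorted(glob.glob("sensor_data/3F_Region1/points/data/*.pcd")):
    scan = o3d.io.read_point_cloud(scan_path).voxel_down_sample(0.1)
    result = o3d.pipelines.registration.registration_icp(
        scan, bim, 0.5, T,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    T = result.transformation  # pose of this frame in BIM coordinates
```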

### Semantic Mapping

We deploy FM-Fusion [1] on SLABIM. For the ground truth, we convert the HKUST BIM into semantic point cloud maps using the semantic tags in the BIM. Both maps contain four semantic categories: floor, wall, door, and column, the common elements in indoor environments.
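A minimal sketch of assembling such a semantic ground-truth map: each semantic mesh is sampled and tagged with its category label (the per-category point count is an arbitrary example).

```python
# Hedged sketch: build a labelled point cloud from the per-category BIM meshes.
import numpy as np
import open3d as o3d

categories = ["floors", "walls", "doors", "columns"]  # per the mesh/ directory
points, labels = [], []
for label, name in enumerate(categories):
    mesh = o3d.io.read_triangle_mesh(f"BIM/3F/mesh/{name}.ply")
    cloud = mesh.sample_points_uniformly(number_of_points=200_000)  # example count
    points.append(np.asarray(cloud.points))
    labels.append(np.full(len(cloud.points), label))

semantic_map = np.hstack([np.vstack(points), np.concatenate(labels)[:, None]])  # (N, 4): x y z label
```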

[1] C. Liu, K. Wang, J. Shi, Z. Qiao, and S. Shen, "FM-Fusion: Instance-aware semantic mapping boosted by vision-language foundation models," IEEE Robotics and Automation Letters, 2024.

## Acknowledgements

We sincerely thank Prof. Jack C. P. Cheng for generously providing the original HKUST BIM files and Skyland Innovation for the wonderful sensor suite.