🛠️ Download | 🎥 Video | 💻 Code | 📖 Paper (RA-L)
The dataset accompanying the paper 'Learning Self-supervised Traversability with Navigation Experiences of Mobile Robots: A Risk-aware Self-training Approach', accepted for publication in RA-L in February 2024.
Our tasks of interest are terrain mapping, ground obstacle detection, and the estimation of 'robot-specific' traversability.
While well-known datasets like KITTI provide extensive data for robotic perception, they often fall short in addressing the specific needs of learning robot-specific traversability. That is, KITTI and similar datasets such as Cityscapes and nuScenes are mainly designed for general applications and may not capture the unique environmental and operational challenges faced by specific robots.
On the other hand, the robot's own navigation experiences provide rich contextual information which is crucial for navigating complex urban terrains.
Therefore, we built an automated data collection pipeline that labels the traversed terrain maps from a simple manual drive of the robot. All you have to do is drive the robot in the target environment, which is typically done anyway when building a prior map with existing SLAM systems.
Here are some good reasons to use onboard sensors and the robot's own navigation experience for learning robot-specific traversability:
- Data Scalability: Leveraging the robot's own sensors and navigation experiences removes the need for manual labeling. Our approach enables continuous and automated data gathering, facilitating the creation of extensive datasets that capture diverse environmental conditions and scenarios.
- Robot-Environment Adaptability: Data collected directly from the robot ensures that the training data is highly relevant to the specific robot and its operating environment. This allows the learned model to rapidly adapt to the target environment, with joint consideration of the robot's locomotion capabilities, leading to more accurate learning and predictions of traversability.
- Data Format: Our datasets are provided as rosbag files. For more information about rosbag, see rosbag/Tutorials and the rosbag/API documentation.
- What's in our dataset?: In each file of our datasets, the following ROS messages are provided:
  - LiDAR measurements (`velodyne_msgs/VelodyneScan`)
  - IMU measurements (`sensor_msgs/Imu`)
  - Odometry pose (`/tf`)
  - Extrinsic parameters of the sensors (`/tf_static`)
  [Note]: To reduce the size of the datasets, only the raw packet messages of the LiDAR sensor were recorded. This means the LiDAR packets have to be unpacked to play back the recorded point cloud measurements. For this purpose, a `vlp16packet_to_pointcloud.launch` file handles the conversion of LiDAR packets to point clouds (see the playback sketch after this list).
- Robotic Platform: A two-wheeled differential-drive robot, ISR-M3, was used to collect the datasets. The robot was equipped with a single 3D LiDAR and an IMU. During the experiments, the 3D pose of the robot was estimated by a LiDAR-inertial odometry (LIO) system.
- Environments: We mainly provide two datasets with distinct ground surface characteristics. The training (blue) / testing (red) trajectories of the robot are shown in the aerial images below.
  - Urban campus: This main target environment spans approximately 510 m x 460 m, with a maximum elevation change of 17 m and a maximum terrain inclination of 14 degrees. The environment mostly consists of asphalt terrain; damaged roads, cobblestone pavements, and roads with small debris make it challenging.
  - Rural farm road: We additionally validated our approach in unstructured environments. The farm road areas were typically unpaved dirt or gravel and included various low-height ground obstacles.
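Once a bag has been downloaded (links in the next section), it can be inspected and unpacked by hand even without the player package described later. The commands below are a minimal sketch: the bag name is just one example from the download list, and `vlp16packet_to_pointcloud.launch` is assumed to ship with the `terrain_dataset_player` package.

```bash
# Inspect the topics, message types, and duration of a recorded bag.
rosbag info parking_lot.bag

# Terminal 1: start the packet-to-pointcloud conversion
# (the launch file is assumed to live in the terrain_dataset_player package).
roslaunch terrain_dataset_player vlp16packet_to_pointcloud.launch

# Terminal 2: play the bag as-is.
rosbag play parking_lot.bag
```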
Use the following links to download the datasets.
1. Urban Campus Dataset: [Google Drive]
- parking_lot.bag (-> 15 min)
- wheelchair_ramp.bag (-> 5 min)
- campus_south.bag (-> 40 min)
- campus_east.bag (-> 7 min)
- campus_north.bag
- campus_west.bag
- campus_full.bag (-> about 50 min)
- campus_full_long.bag (-> 2h 18 min)
- campus_road_camera.bag (-> 10 min)
- ku_innovation_hall_4F.bag (-> 7 min)
2. Farm Road Dataset: [Google Drive]
- country_road_training.bag (-> 4 min)
- country_road_testing.bag (-> 16 min)
- greenhouse.bag (-> 13 min)
We provide the `terrain_dataset_player` ROS package for playing the datasets with basic rviz visualization settings. Please follow the instructions below to play the recorded rosbag files.
To run the `terrain_dataset_player` package, please install the dependencies below:
- Ubuntu (tested on 20.04)
- ROS (tested on Noetic)
- `velodyne_pointcloud` (Velodyne ROS driver for unpacking LiDAR packet messages)
Installing the `velodyne_pointcloud` binaries with apt should work:
sudo apt install ros-noetic-velodyne-pointcloud
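As a quick sanity check (assuming a sourced ROS Noetic environment), you can verify that the package is visible to ROS:

```bash
# Should print the install path of the velodyne_pointcloud package.
rospack find velodyne_pointcloud
```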
We recommend using `catkin_tools` to build the ROS packages (not mandatory).
Installing the `catkin_tools` package with apt should work:
sudo apt install python3-catkin-tools
Use the following commands to download and build the `terrain_dataset_player` package:
cd ~/your-ros-workspace/src
git clone https://github.com/Ikhyeon-Cho/urban-terrain-dataset.git
cd ..
catkin build terrain_dataset_player ## If not using catkin_tools, use catkin_make
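After a successful build, source the workspace so that ROS can find the `terrain_dataset_player` package (adjust the path to your own workspace):

```bash
# Make the newly built package available in the current shell.
source ~/your-ros-workspace/devel/setup.bash
```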
- Place the downloaded rosbag files (see the download links above) in the `data` folder.
- Run the command below to play the rosbag files in the `data` folder:
## Example: parking_lot.bag
roslaunch terrain_dataset_player parking_lot.launch playback_speed:=4.0
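To sanity-check the playback from another terminal, you can list the active topics and check the point cloud rate. The topic name `/velodyne_points` is the usual default of the Velodyne driver and is an assumption here; use `rostopic list` to see the actual names.

```bash
# List topics published during playback and check the unpacked point cloud rate.
rostopic list
rostopic hz /velodyne_points   # assumed default output topic of velodyne_pointcloud
```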
Thank you for citing our paper if this dataset helps your research:
Ikhyeon Cho, and Woojin Chung. 'Learning Self-Supervised Traversability with Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach', IEEE Robotics and Automation Letters, 2024.
@article{cho2024learning,
title={Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach},
author={Cho, Ikhyeon and Chung, Woojin},
journal={IEEE Robotics and Automation Letters},
year={2024},
volume={9},
number={5},
pages={4122-4129},
doi={10.1109/LRA.2024.3376148}
}