# Cobra

Cobra is a C++ library for metric-semantic-driven navigation of mobile robots in both structured and unstructured environments. Cobra is modular, ROS-enabled, and runs on CPU+GPU.

Cobra comprises three modules.

## 1. Installation

Clone the code:

```bash
mkdir -p catkin_ws/src
cd catkin_ws/src
git clone git@github.com:gogojjh/cobra.git --recursive
wstool merge cobra/cobra_https.rosinstall
wstool update
cd cobra
```
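If `wstool merge` complains that no workspace exists yet, the workspace usually needs to be initialized first. A minimal sketch, assuming the commands are run from `catkin_ws/src` (the exact workflow for this repo is an assumption):

```bash
# Initialize the wstool workspace before merging the rosinstall file
wstool init .
wstool merge cobra/cobra_https.rosinstall
wstool update
```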

Build the Docker image (x86 PC). Change the CUDA version in the first line of `docker/Dockerfile_x86` to match your GPU:

```bash
docker build -t cobra_x86:ros_noetic-py3-torch-cuda -f docker/Dockerfile_x86 .
```
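To pick a matching CUDA version, check what your driver supports and then edit the `FROM` line of the Dockerfile. A minimal sketch; the base-image tag below is illustrative, not the one shipped in the repo:

```bash
# Show the driver version and the maximum CUDA version it supports
nvidia-smi
# Point the first line of the Dockerfile at a matching CUDA base image
# (the tag below is hypothetical; pick one that matches your driver)
sed -i 's|^FROM .*|FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04|' docker/Dockerfile_x86
```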

Build the Docker image (Jetson, ARM):

```bash
docker build -t cobra_jetson:ros_noetic-py3-torch-jetpackr35 -f docker/Dockerfile_jetson .
```

Create the docker container

nvidia-docker run -e DISPLAY -v ~/.Xauthority:/root/.Xauthority:rw --network host \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v volume_path_to_host:volume_path_to_docker \
  --privileged --cap-add sys_ptrace \
  -it --name cobra cobra_x86:ros_noetic-py3-torch-cuda \
  /bin/bash
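After exiting, the same container can be restarted and re-entered instead of being recreated (standard Docker commands):

```bash
# Restart and attach to the existing container named "cobra"
docker start cobra
docker exec -it cobra /bin/bash
```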

Compile nvblox:

```bash
cd src/glimpse_nvblox_ros1/nvblox/nvblox
mkdir build && cd build && cmake .. && make -j3
```
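If this fork builds the nvblox unit tests, they can be run from the same `build` directory with CTest; whether tests are enabled here is an assumption:

```bash
# Run any tests registered with CMake (assumes tests were built)
ctest --output-on-failure
```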

Compile the other packages:

```bash
catkin build pointcloud_image_converter nvblox_ros nvblox_rviz_plugin -DCMAKE_BUILD_TYPE=Release
```
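Before launching anything, source the workspace so the newly built packages are on the ROS package path (a standard catkin step, assuming the `catkin_ws` layout above):

```bash
# Run from the catkin_ws root after the build completes
source devel/setup.bash
```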

## 2. Open-Source Datasets

We release an open-source dataset on Google Drive for real-world tests. The dataset provides:

- 3D LiDAR scans
- IMU data
- Estimated odometry
- (Optional) Images
- (Optional) Estimated 2D semantic segmentation
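To check which topics a downloaded bag contains before launching anything, `rosbag info` is handy (shown on the SemanticKITTI bag used below):

```bash
# List the topics, message types, and duration of the bag
rosbag info semantickitti_sequence07.bag
```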

## 3. Results

NOTE: in the launch files, set `max_mesh_update_time` to the mesh publish frequency and set the mesh output path to `/tmp/mesh_nvblox.ply`.

Mapping: SemanticKITTI Sequence07 (LiDAR-based semantics)

```bash
roslaunch nvblox_ros nvblox_lidar_ros_semantickitti.launch bag_file:=semantickitti_sequence07.bag
```

Mapping: FusionPortable (with image-based semantics)

```bash
roslaunch nvblox_ros nvblox_lidar_ros_semanticfusionportable.launch bag_file:=20230403_hkustgz_vegetation_sequence00_r3live_semantics_framecam00.bag
roslaunch nvblox_ros nvblox_lidar_ros_semanticfusionportable.launch bag_file:=20220226_campus_road_day_r3live_semantics_framecam00.bag
```
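After a run finishes, a quick sanity check is to confirm the mesh was written to the path configured above:

```bash
# The mesh file should exist and have a non-trivial size after a successful run
ls -lh /tmp/mesh_nvblox.ply
```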

Navigation:

## Citation

If you find any of the above modules useful, we would really appreciate it if you could cite our work:

```bibtex
@article{jiao2024real,
  title={Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments},
  author={Jiao, Jianhao and Geng, Ruoyu and Li, Yuanhang and Xin, Ren and Yang, Bowen and Wu, Jin and Wang, Lujia and Liu, Ming and Fan, Rui and Kanoulas, Dimitrios},
  journal={IEEE Transactions on Automation Science and Engineering},
  year={2024},
  publisher={IEEE}
}
```

Dataset:

```bibtex
@inproceedings{jiao2022fusionportable,
  title={FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms},
  author={Jiao, Jianhao and Wei, Hexiang and Hu, Tianshuai and Hu, Xiangcheng and Zhu, Yilong and He, Zhijian and Wu, Jin and Yu, Jingwen and Xie, Xupeng and Huang, Huaiyang and others},
  booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={3851--3856},
  year={2022},
  organization={IEEE}
}
```

## Acknowledgments

## License

BSD License