This ROS2_iRobotCreate2 repository integrates Object Detection and Monocular Depth Estimation into the ROS2 Foxy ecosystem, with support for visualization in RViz2. It has been optimized for the Jetson Xavier NX developer kit, the iRobot Create 2, and the Logitech C920 monocular camera.
- Complete Integration with the ROS2 Ecosystem: This repository provides seamless integration with the ROS2 ecosystem. It supports video feed streaming, jetson-inference library nodes, robot control, and teleoperation.
- Custom Docker Container: The repository includes a custom Docker container for running and visualizing Object Detection and Monocular Depth Estimation in ROS2 Foxy using jetson-inference, built on a modified ros_deep_learning container that has been specifically tailored to work with NVIDIA Jetson devices and TensorRT.
- RViz2 Configuration File: A configuration file for RViz2 is provided, allowing users to visualize the entire process.
Please feel free to contribute, make suggestions, or raise issues if you encounter any problems. We hope you find this project useful, and we look forward to seeing the innovative ways you use it in your own applications.
- Install the required Ubuntu packages (python3-rosdep, python3-colcon-common-extensions, g++):
sudo apt install python3-rosdep python3-colcon-common-extensions g++
- Clone this repository to your local machine:
cd ~
git clone --recursive https://github.com/secretxs/ROS2_iRobotCreate2
- Install dependencies:
cd ROS2_iRobotCreate2/create_ws/src
rosdep update
cd ..
rosdep install --from-paths src -i
- Build the workspace:
colcon build
- Source the workspace:
source ~/ROS2_iRobotCreate2/create_ws/install/setup.bash
- Optional, if you don't want to source the workspace every time:
echo "source ~/ROS2_iRobotCreate2/create_ws/install/setup.bash" >> ~/.bashrc
- Connect your computer to the Create's 7-pin serial port
- If using Create 1, ensure that nothing is connected to Create's DB-25 port
- To connect to the Create over USB, ensure your user is in the dialout group:
sudo usermod -a -G dialout $USER
- Log out and log back in for the permission change to take effect
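After logging back in, you can sanity-check the group change with a small pure-Python sketch (no ROS dependencies; the group name `dialout` matches the command above):

```python
import grp
import pwd

def user_in_group(username: str, group_name: str) -> bool:
    """Return True if `username` belongs to `group_name` (supplementary or primary)."""
    try:
        group = grp.getgrnam(group_name)
    except KeyError:
        return False  # group does not exist on this system
    if username in group.gr_mem:
        return True  # listed as a supplementary member of the group
    try:
        # the group may also be the user's primary group
        return pwd.getpwnam(username).pw_gid == group.gr_gid
    except KeyError:
        return False  # user does not exist

if __name__ == "__main__":
    import getpass
    user = getpass.getuser()
    print(f"{user} in dialout: {user_in_group(user, 'dialout')}")
```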
This project supports streaming video feeds and images via the variety of interfaces and protocols supported by jetson-inference.
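For reference, jetson-inference addresses its inputs by URI, e.g. `csi://0` for a MIPI CSI camera, `v4l2:///dev/video0` for a USB camera such as the C920, or `rtsp://...` for network streams. A sketch of how such a URI splits into protocol and resource (the helper name is my own, not part of jetson-inference):

```python
def parse_video_uri(uri: str) -> tuple:
    """Split a jetson-inference style video URI into (protocol, resource).

    A bare path like '/dev/video0' is treated as a v4l2 device.
    """
    if "://" in uri:
        protocol, resource = uri.split("://", 1)
        return protocol, resource
    return "v4l2", uri

print(parse_video_uri("csi://0"))             # ('csi', '0')
print(parse_video_uri("v4l2:///dev/video0"))  # ('v4l2', '/dev/video0')
```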
- Launch the Create 2 driver:
ros2 launch create_bringup create_2.launch
Launch file arguments
- config - Absolute path to a configuration file (YAML). Default: create_bringup/config/default.yaml
- desc - Enable robot description (URDF/mesh). Default: true
- Launch the joystick teleop node to drive the robot with your joystick (optional):
ros2 launch create_bringup joy_teleop.launch joy_config:=dualshock4
Warning! You may need to modify the joy_teleop file according to https://github.com/pgold/teleop_tools/commit/13488fcad84955a31deb608dd1829e90ac831a04
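Under the hood, joy_teleop maps joystick axes to a geometry_msgs/Twist published on /cmd_vel. A pure-Python sketch of that mapping (no rclpy needed; the axis indices and scale factors are assumptions for illustration, not taken from the actual dualshock4 config):

```python
def axes_to_twist(axes, linear_axis=1, angular_axis=0,
                  linear_scale=0.3, angular_scale=1.0):
    """Map joystick axes (each in [-1, 1]) to (linear_x, angular_z) velocities.

    linear_scale is in m/s and angular_scale in rad/s at full stick deflection.
    """
    linear_x = linear_scale * axes[linear_axis]
    angular_z = angular_scale * axes[angular_axis]
    return linear_x, angular_z

# left stick pushed fully forward: full forward speed, no rotation
print(axes_to_twist([0.0, 1.0]))  # (0.3, 0.0)
```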
If you want to learn more about the iRobot Create 2, see my own fork of the create_robot repository or the original create_robot repository.
You can follow the next steps to use a modified version of the ros_deep_learning Docker container, which works in the ROS2 ecosystem and provides:
- Object Detection using detectnet with "ssd-mobilenet-v2"
- Monocular Depth Estimation and visualization with "fcn-mobilenet"

If you need more customization, go to ros_deep_learning for more customizable and purpose-specific Docker containers that natively support some of the image-processing ROS nodes, or write your own ROS node using jetson-inference.
- Run the ROS USB Camera Node:
ros2 run usb_cam usb_cam_node_exe
- Run the bash script to pull and run the Docker container:
docker/launch/run.sh
- Run the pre-configured Object Detection script:
python3 ROS2_iRobotCreate2/docker/scripts/ros_object.py
- Run the pre-configured Monocular Depth Estimation script:
python3 ROS2_iRobotCreate2/docker/scripts/ros_depth.py
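As a sketch of how the two scripts' outputs could be combined: given a bounding box from detectnet and a relative depth map from fcn-mobilenet, the median depth inside the box gives a robust per-object depth estimate. The function and data layout below are illustrative assumptions, not what ros_object.py/ros_depth.py actually do:

```python
import numpy as np

def object_depth(depth_map, bbox):
    """Median relative depth inside a (left, top, right, bottom) bounding box.

    depth_map is an HxW array of relative depths (as produced by a
    monocular depth network); bbox uses pixel indices.
    """
    left, top, right, bottom = bbox
    region = depth_map[top:bottom, left:right]
    return float(np.median(region))

# toy 4x4 depth map: a "near" object (depth 1.0) in a "far" scene (depth 5.0)
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0
print(object_depth(depth, (1, 1, 3, 3)))  # 1.0
```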
You can launch the pre-configured RViz2 setup with:
ros2 run rviz2 rviz2 -d ~/ROS2_iRobotCreate2/config/rviz2_icreate2.rviz