Apollo created the LiDAR Obstacle Visualization Tool, an offline visualization tool that shows LiDAR-based obstacle perception results (see How to Run Offline Perception Visualizer). However, that tool cannot visualize the radar-based obstacle perception results or the fusion results based on these two sensors.
Apollo has developed a second visualization tool, the Fusion Obstacle Visualization Tool, to complement the LiDAR Obstacle Visualization Tool. The Fusion Obstacle Visualization Tool shows obstacle perception results from these modules:
- LiDAR-based algorithm module
- Radar-based algorithm module
- Fusion algorithm module for debugging and testing the complete obstacle perception algorithms
All of the visualization is rendered on top of the LiDAR data, because the LiDAR source data (a set of 3D points outlining each object in the scene) depicts the visual features of the entire scene better than radar data does. Go to the Apollo web site to see the demo videos for the Fusion Obstacle Visualization Tool.
In general, you follow three steps to build and run the Fusion Obstacle Visualization Tool in Docker:
- Prepare the source data.
- Build the Fusion Obstacle Visualization Tool.
- Run the tool.
The next three sections provide the details for each of the three steps.
Before running the Fusion Obstacle Visualization Tool, you need to prepare the following source data:
- LiDAR data:
  - Point Cloud Data (PCD) file
  - Host vehicle pose
- Radar data:
  - Source obstacle data in the protocol buffers (protobuf) format
  - Host vehicle pose
  - Host vehicle velocity
To facilitate the data extraction, Apollo provides a tool named `export_sensor_data` to export the data from a ROS bag.
- Build the data exporter using these commands:

  ```bash
  cd /apollo
  bazel build //modules/perception/tool/export_sensor_data:export_sensor_data
  ```

- Run the data exporter using this command:

  ```bash
  /apollo/bazel-bin/modules/perception/tool/export_sensor_data/export_sensor_data
  ```
- Play the ROS bag.

  The default directory of the ROS bag is `/apollo/data/bag`. In the following example, the file name of the ROS bag is `example.bag`. Use these commands:

  ```bash
  cd /apollo/data/bag
  rosbag play --clock example.bag --rate=0.1
  ```

  To ensure that you do not miss any frame data when performing callbacks on the ROS messages, it is recommended that you reduce the playback rate, which is set to 0.1 in the example above.
When you play the bag, all data files are dumped to the export directories frame by frame, using the timestamp as the file name.
The default LiDAR data export directory is `/apollo/data/lidar`, and the default radar directory is `/apollo/data/radar`. The directories can be defined in `/apollo/modules/perception/tool/export_sensor_data/conf/export_sensor_data.flag` using the flags `lidar_path` and `radar_path`.

In the `lidar_path` directory, two types of files are generated, with the suffixes `.pcd` and `.pose`. In the `radar_path` directory, three types of files are generated, with the suffixes `.radar`, `.pose`, and `.velocity`.
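Because each frame is exported as a set of companion files sharing one timestamp stem, a quick sanity check before visualization is to verify that no frame is missing one of its files. The sketch below is illustrative only — `check_frames` and the suffix constants are not part of the Apollo tool; only the directory layout and suffixes come from this guide.

```python
import os

# Expected suffixes per export directory, as described above.
LIDAR_SUFFIXES = (".pcd", ".pose")
RADAR_SUFFIXES = (".radar", ".pose", ".velocity")

def check_frames(directory, suffixes):
    """Group exported files by their timestamp stem and report frames
    that are missing one of the expected companion files."""
    stems = {}
    for name in os.listdir(directory):
        stem, ext = os.path.splitext(name)
        if ext in suffixes:
            stems.setdefault(stem, set()).add(ext)
    incomplete = {stem: sorted(set(suffixes) - exts)
                  for stem, exts in stems.items()
                  if exts != set(suffixes)}
    return len(stems), incomplete
```

For example, `check_frames("/apollo/data/lidar", LIDAR_SUFFIXES)` returns the number of exported frames and a map from any incomplete timestamp to its missing suffixes.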
Apollo uses the Bazel tool to build the Fusion Obstacle Visualization Tool.
- Build the Fusion Obstacle Visualization Tool using these commands:

  ```bash
  cd /apollo
  bazel build -c opt //modules/perception/tool/offline_visualizer_tool:offline_sequential_obstacle_perception_test
  ```

  The `-c opt` option builds the program with optimized performance, which is important for offline simulation and real-time visualization of the perception module.
- (Optional) If you want to run the perception module with GPU, use this command:

  ```bash
  bazel build -c opt --cxxopt=-DUSE_GPU //modules/perception/tool/offline_visualizer_tool:offline_sequential_obstacle_perception_test
  ```
Before running the Fusion Obstacle Visualization Tool, you can set up the source data directories and the algorithm module settings in the configuration file: `/apollo/modules/perception/tool/offline_visualizer_tool/conf/offline_sequential_obstacle_perception_test.flag`.
The default source data directories are `/apollo/data/lidar` and `/apollo/data/radar` for `lidar_path` and `radar_path`, respectively. By default, the visualization-enabling Boolean flag is `true`, and the obstacle result type to be shown is `fused` (the fused obstacle results based on both the LiDAR and radar sensors). You can change `fused` to `lidar` or `radar` to visualize the pure obstacle results generated by single-sensor-based obstacle perception.
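As a rough illustration, a gflags-style `.flag` file holds one `--name=value` entry per line. The fragment below is a sketch only: `lidar_path` and `radar_path` are named in this guide, but the names of the visualization switch and the result-type flag are placeholders — check the actual `.flag` file for the real flag names.

```text
# Source data directories (flag names from this guide)
--lidar_path=/apollo/data/lidar
--radar_path=/apollo/data/radar

# Hypothetical flag names for the visualization switch and the
# obstacle result type; see the shipped .flag file for the exact names
--enable_visualization=true
--obstacle_result_type=fused
```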
Run the Fusion Obstacle Visualization Tool using this command:

```bash
/apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_sequential_obstacle_perception_test
```
You see results such as:
- A pop-up window showing the perception result with the point cloud, frame-by-frame
- The raw point cloud shown in grey
- Bounding boxes (with red arrows that indicate the headings) around the detected obstacles:
- Cars (green)
- Pedestrians (pink)
- Cyclists (blue)
- Unknown elements (purple)