Sensor_Fusion

This repository contains the ROS source code for sensor fusion.

Overview

By fusing LiDAR and camera data, this project creates teacher (training) data sets for a monocular camera.

Requirements

Hardware Spec

  • PC
    • OS : Ubuntu16.04
    • Memory : 8GB
    • CPU : Intel® Core™ i7-7700
    • GPU : GeForce GTX 1050-Ti
  • TX2
  • Robot
    • Sensors
      • SQ-LiDAR(Meiji Univ)
      • ZED(Stereolabs)
      • AMU
    • Vehicle
      • Differential drive

How to Build

$ cd ~/catkin_ws/src
$ git clone git@github.com:Sadaku1993/sensor_fusion.git
$ cd ..
$ catkin_make

Calibration of SQ-LiDAR and ZED

Watch calibration

Coloring LiDAR PointCloud Using ZED

Watch coloring

DepthImage Using LiDAR Points

Watch depthimage
