Author: Luigi Freda
- slamplay
slamplay is a collection of powerful tools to start playing and experimenting with SLAM in C++. It automatically installs and makes available in a single cmake framework some of the most important
- back-end frameworks (g2o, gtsam, ceres, se-sync, etc.),
- front-end tools (OpenCV, pcl, etc.),
- algebra and geometry libs (eigen, sophus, cholmod, etc.),
- viz tools (pangolin, imgui, rerun, etc.),
- loop-closure frameworks (DBoW2, DBoW3, iBoW, etc.),
- deep learning tools (based on TensorRT, tensorflow_cc, libtorch, onnxruntime, etc.),
along with some nice examples to help you easily and quickly get started with all these tools.
I created slamplay for a computer vision class I taught. I started developing it for fun, during my free time, taking inspiration from some repos available on the web.
This repository is structured in the following main folders (with self-explanatory names):
algebra_geometry
backend
core
data
dense_mapping
docs
frontend
(*)full_slam
io
loop_closure
scripts
semantics
(*)utils
viz
(*) With new C++ tools based on TensorRT, tensorflow_cc, onnxruntime, e.g., SuperPoint, SuperGlue, Depth-Anything, HFNet, Segment-Anything-Model (SAM).
The following procedure has been successfully tested under Ubuntu 20.04, 22.04 and 24.04.
- Install required basic dependencies:
$ ./install_dependencies.sh
- Install OpenCV in a local folder:
$ ./install_local_opencv.sh
- Build the framework:
$ ./build.sh
This will take a while. Once everything is built, you can enter the build folder and test the different examples. In particular, to test the full_slam apps:
- Configure full_slam/config/kitti.yaml (or full_slam/config/euroc.yaml).
- Then run the VO (Visual Odometry) app:
$ cd build/full_slam/apps
$ ./run_kitti_stereo # or ./run_euroc_stereo
The file config.sh, with its environment variables, defines your working configuration. It is automatically sourced by all the main install/build scripts. For instance, if you want to skip the OpenCV install step, set the variable OpenCV_DIR in config.sh to your local OpenCV path. This is not recommended, though, since different dependency libraries can get mixed (with undefined behavior) and you may also lose some of the available features.
If you want to smoothly run the slamplay examples using default input data, then download the provided testing images and videos (they will be deployed in the data folder):
$ ./install_data.sh
If you want to use the DL (Deep Learning) models, then run the following command to download the neural network weights/checkpoints along with their related data:
$ ./install_dl_models.sh
I recommend these tested configurations with the CUDA ecosystem.
If you want to install and test the tensorflow C++ API (for HFNet), then run:
$ ./install_tensorflow_cc.sh
See the repository tensorflow_cc for further details. This step will take a long while, and for this reason you are required to manually launch the script install_tensorflow_cc.sh.
If you prefer to use docker for working with slamplay, you may want to use rosdocker with one of its provided images (with or without CUDA support).
Eigen: see the nice ASCII quick reference.
- Quick reference
- Dense matrix manipulation
- Dense linear problems and decompositions
- Sparse linear algebra
- Geometry
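As a quick flavor of the topics listed above, here is a minimal, self-contained Eigen sketch (illustrative only, not one of the repository's examples) touching dense matrix manipulation, a decomposition, and basic geometry:

```cpp
// Minimal Eigen sketch (illustrative only, not code from this repository).
#include <cmath>
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Geometry>

int main() {
    // Dense matrix manipulation
    Eigen::Matrix3d A = Eigen::Matrix3d::Random();
    Eigen::Vector3d b(1.0, 2.0, 3.0);

    // Dense linear problem: solve A x = b with a column-pivoting QR decomposition
    Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);

    // Geometry: rotate a point by 90 degrees around the z axis
    Eigen::AngleAxisd rot(M_PI / 2.0, Eigen::Vector3d::UnitZ());
    Eigen::Vector3d p = rot * Eigen::Vector3d(1.0, 0.0, 0.0);

    std::cout << "x = " << x.transpose() << "\n";
    std::cout << "p = " << p.transpose() << "\n";
    return 0;
}
```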
A couple of notes about some of the available front-end features.
In frontend/feature_dl you can find:
- A C++ implementation of SuperPoint and SuperGlue under TensorRT
- A C++ implementation of HFNet under TensorRT and Tensorflow.
Warning: The first time you run a TensorRT model, it will take some time to convert the input onnx model to its engine format.
In frontend/depth_dl you can find:
- A C++ implementation of Depth-Anything-V2.0 based on TensorRT.
Warning: The first time you run a TensorRT model, it will take some time to convert the input onnx model to its engine format.
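The engine conversion mentioned in the warnings above usually follows the standard TensorRT pattern sketched below. This is a hedged, illustrative example using the public nvinfer1/nvonnxparser API (assuming TensorRT 8+, where objects can be managed with std::unique_ptr); it is not the actual conversion code used in slamplay, and the function name buildAndCacheEngine is made up for illustration. The serialized engine is typically cached to disk so that later runs can skip the slow conversion step.

```cpp
// Illustrative ONNX -> TensorRT engine conversion and caching (hedged sketch,
// not the exact code used in slamplay).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <string>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

// Build a TensorRT engine from an ONNX file and serialize it to disk, so that
// subsequent runs can deserialize the cached engine instead of rebuilding it.
void buildAndCacheEngine(const std::string& onnxPath, const std::string& enginePath) {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto flags =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));
    parser->parseFromFile(onnxPath.c_str(),
                          static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    if (builder->platformHasFastFp16()) config->setFlag(nvinfer1::BuilderFlag::kFP16);

    // This is the slow step that only happens the first time a model is run.
    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));

    std::ofstream out(enginePath, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
}
```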
As explained above, if you want to install and test the tensorflow C++ API (e.g. for HFNet), then run:
$ ./install_tensorflow_cc.sh
See tensorflow_cc for further details. Note that this step will take a long while, and for this reason you are required to manually launch install_tensorflow_cc.sh.
See this file for other notes about GPU support and the CUDA ecosystem used.
A couple of notes about some of the available back-end frameworks.
GTSAM: installed tag 4.2a9.
Documentation
- https://gtsam.org/docs/
- https://gtsam.org/tutorials/intro.html
- See the docs folder for further documentation.
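To give a flavor of the GTSAM API, here is a minimal hedged sketch (not one of the repository's examples): a tiny 2D pose chain with a prior and a single odometry factor, optimized with Levenberg-Marquardt.

```cpp
// Minimal GTSAM sketch (illustrative only): a prior plus one odometry factor on 2D poses.
#include <gtsam/geometry/Pose2.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

int main() {
    gtsam::NonlinearFactorGraph graph;
    auto priorNoise = gtsam::noiseModel::Diagonal::Sigmas(gtsam::Vector3(0.1, 0.1, 0.05));
    auto odomNoise  = gtsam::noiseModel::Diagonal::Sigmas(gtsam::Vector3(0.2, 0.2, 0.1));

    // Anchor the first pose and add one relative (odometry) constraint.
    graph.add(gtsam::PriorFactor<gtsam::Pose2>(1, gtsam::Pose2(0.0, 0.0, 0.0), priorNoise));
    graph.add(gtsam::BetweenFactor<gtsam::Pose2>(1, 2, gtsam::Pose2(2.0, 0.0, 0.0), odomNoise));

    // Deliberately noisy initial guess; the optimizer corrects it.
    gtsam::Values initial;
    initial.insert(1, gtsam::Pose2(0.5, 0.1, 0.1));
    initial.insert(2, gtsam::Pose2(2.3, 0.1, -0.1));

    gtsam::Values result = gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
    result.print("result:\n");
    return 0;
}
```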
Known issues
Apparently, in order to avoid double free or corruption errors with gtsam on exit, we need to disable the compile option -march=native when building apps that use gtsam. This can be done locally by modifying the compile flags at the folder level and removing the -march=native optimization for gtsam-related files. Further details are available in the following links:
- https://bitbucket.org/gtborg/gtsam/issues/414/compiling-with-march-native-results-in
- https://groups.google.com/g/gtsam-users/c/jdySXchYVQg
Ceres: installed tag 2.1.0.
Documentation
- http://ceres-solver.org/tutorial.html
- See the docs folder for further documentation.
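For comparison, here is a minimal hedged Ceres sketch (again, not taken from the repository): the classic one-parameter problem solved with an auto-differentiated cost functor.

```cpp
// Minimal Ceres sketch (illustrative only): find x that minimizes 0.5 * (10 - x)^2.
#include <ceres/ceres.h>
#include <iostream>

struct CostFunctor {
    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        residual[0] = T(10.0) - x[0];
        return true;
    }
};

int main() {
    double x = 0.5;  // initial guess

    ceres::Problem problem;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor), nullptr, &x);

    ceres::Solver::Options options;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout << "x = " << x << std::endl;  // expected to converge near 10
    return 0;
}
```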
g2o: installed tag 20230223_git. See the related examples.
Issues:
- The built g2o-dependent binaries link to the system g2o (instead of the locally compiled and installed g2o) and this leads to crashes (a different g2o version is linked). There are different solutions to this problem:
  - Standard one (uncomfortable): use LD_LIBRARY_PATH to make the built binaries correctly link to the locally compiled and installed g2o.
  - Otherwise, we can use and set RPATH (instead of RUNPATH) at build time. In particular, this can be done by using some compiler options. This is what I set in the main cmake file: set(MY_FLAGS "${MY_FLAGS} -Wl,--disable-new-dtags") (see https://stackoverflow.com/questions/47117443/dynamic-linking-with-rpath-not-working-under-ubuntu-17-10). This configuration is enabled/disabled by the cmake option flag SET_RPATH.
- If you get a double free or corruption error with g2o (on exit), then it is very likely that you used the -march=native option when compiling this project but did not use the same option when building g2o itself. This may cause some alignment inconsistencies between g2o and this project. In that case, build g2o with -march=native (i.e. use the cmake option -DBUILD_WITH_MARCH_NATIVE=ON).
A couple of notes about the IO library utils and examples.
https://www.modernescpp.com/index.php/the-three-clocks
What are the differences among the three clocks?
- std::chrono::system_clock: This is the system-wide real-time clock (wall clock). The clock has the auxiliary functions to_time_t and from_time_t to convert time points into dates.
- std::chrono::steady_clock: The only clock that guarantees it can never be adjusted. Therefore, std::chrono::steady_clock is the preferred clock to wait for a time duration or until a time point.
- std::chrono::high_resolution_clock: This is the clock with the highest accuracy, but it may just be an alias for std::chrono::system_clock or std::chrono::steady_clock.
The C++ standard provides no guarantee about the clocks' accuracy, starting point, or valid time range. Typically, the starting point of std::chrono::system_clock is January 1, 1970, the so-called UNIX epoch. For std::chrono::steady_clock, it is typically the boot time of your PC.
What is the difference between steady_clock vs system_clock in layman terms?
If you're holding a system_clock in your hand, you would call it a watch, and it would tell you what time it is.
If you're holding a steady_clock in your hand, you would call it a stopwatch, and it would tell you how fast someone ran a lap, but it would not tell you what time it is.
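A small illustrative sketch of the difference in practice, using system_clock for a wall-clock timestamp and steady_clock for measuring a duration:

```cpp
// Illustrative use of the two clocks: wall-clock time vs. duration measurement.
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main() {
    // system_clock: "what time is it?" (convertible to a calendar date)
    auto now = std::chrono::system_clock::now();
    std::time_t t = std::chrono::system_clock::to_time_t(now);
    std::cout << "wall clock: " << std::ctime(&t);

    // steady_clock: "how long did this take?" (monotonic, never adjusted)
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    auto stop = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::cout << "elapsed: " << ms << " ms" << std::endl;
    return 0;
}
```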
Tracy is a great profiler that will be automatically installed by the main build script build.sh.
Repository link: https://github.com/wolfpld/tracy. Documentation is here.
- Include profiler/profiler_tracy.h in every file you are interested in profiling.
- Check that TRACY_ENABLE is defined for the WHOLE project (slamplay automatically takes care of that if you set USE_TRACY=1 in config.sh).
- Add the macro FrameMark at the end of each frame loop. (You can skip this: the latest version of Tracy does not seem to support it well.)
- Add the macro ZoneScoped as the first line of your function definitions to include them in the profile.
- Run the Tracy profiler ./thirdparty/tracy/tracy-profiler and hit the connect button.
- Compile and run your application.
- You're profiling your program! Go to the Tracy interface and hit the statistics button.
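Put together, an instrumented function and frame loop look roughly like the hedged sketch below. It assumes that profiler/profiler_tracy.h (step 1 above) pulls in the Tracy macros and that TRACY_ENABLE is defined for the whole project.

```cpp
// Hedged sketch of Tracy instrumentation following the steps above.
#include "profiler/profiler_tracy.h"  // assumed to provide the ZoneScoped / FrameMark macros

void trackFrame() {
    ZoneScoped;  // records this function as a zone in the profile
    // ... feature extraction, matching, pose estimation ...
}

int main() {
    for (int i = 0; i < 1000; ++i) {
        trackFrame();
        FrameMark;  // optional end-of-frame marker (see the note above)
    }
    return 0;
}
```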
- This repo imported some of the C++ examples (updated, improved, and commented) from the repository https://github.com/gaoxiang12/slambook2. Thanks to the Author for his great work.
- Thanks to the Author of the repository https://github.com/nicolov/simple_slam_loop_closure/. I imported a couple of scripts (updated and improved) from his repository (for computing the confusion matrix).
- Thanks to yuefanhao for his repo https://github.com/yuefanhao/SuperPoint-SuperGlue-TensorRT.
- Thanks to the Authors of the repository https://github.com/LiuLimingCode/HFNet_SLAM. I integrated and adapted part of their C++ implementation of HFNet NN.
- Thanks to the Authors of the repositories https://github.com/spacewalk01/depth-anything-tensorrt and https://github.com/ojh6404/depth_anything_ros. I adapted and integrated some parts of their C++ classes and script for DepthAnything v2 NN.
slamplay is released under the GPLv3 license. slamplay contains some modified libraries, each one coming with its own license. Where nothing is specified, the GPLv3 license applies to the software.