I wanted to learn about ORB-SLAM and take the opportunity to demonstrate some things I have become familiar with, such as OpenCV (Python and C++ with CMake), websockets/asyncio, and Flutter.
As I make progress with my Python code to run ORB-SLAM on sample videos, I will continue to add features to the app to demonstrate the results. I am also building out the same backend in C++, with VS Code debugging, to get a deeper understanding of the data structures.
So far, you can view the results of feature matching across consecutive frames with controls to pause, play and restart the video. Later I want to build and visualize a map of the environment with the ego position and orientation. Then I might add more controls to tweak the algorithm parameters from the UI and try different videos.
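To give a concrete picture of the current feature-matching step, here is a minimal sketch of matching ORB features between consecutive frames with OpenCV; the video filename, feature count, and number of matches drawn are placeholder values rather than the project's actual settings:

```python
# Sketch: ORB feature matching between consecutive video frames with OpenCV.
# File name and parameters are illustrative, not the project's real settings.
import cv2

cap = cv2.VideoCapture("sample.mp4")              # hypothetical sample video
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev_kp, prev_des, prev_frame = None, None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if prev_des is not None and des is not None:
        # Match descriptors against the previous frame and keep the best ones
        matches = sorted(matcher.match(prev_des, des), key=lambda m: m.distance)
        vis = cv2.drawMatches(prev_frame, prev_kp, frame, kp, matches[:50], None)
        cv2.imshow("matches", vis)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    prev_kp, prev_des, prev_frame = kp, des, frame

cap.release()
cv2.destroyAllWindows()
```

A brute-force Hamming matcher with cross-checking is a common starting point for ORB's binary descriptors; ratio-test matching over an LSH index is a typical refinement.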
- Verify the websockets image stream is the way to go (see the sketch after this list)
- Connect the SLAM script to the Python server
- Give the Flutter app a basic UI
- Set up the app to receive the SLAM data and image feed
- Build an equivalent C++ version of the SLAM script so far
- Set up VS Code C++ debugging
- Hook up commands from the UI back to the server
- Make the app look fancier
- Make the server handle separate socket connections for different widgets
- Do a deep dive into the theory of mapping and localization
- Continue to build out the SLAM script
- Stabilize the UI
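As a rough idea of how the websockets image stream could work (the first item above), here is a minimal asyncio sketch that JPEG-encodes frames and pushes them to a connected client; the port, frame rate, video path, and handler name are illustrative assumptions, not the actual slam.py interface:

```python
# Sketch: streaming JPEG-encoded frames over a websocket with asyncio.
# Port, message format, and file name are assumptions for illustration.
import asyncio

import cv2
import websockets

async def stream_frames(websocket):  # older websockets versions also pass a path argument
    cap = cv2.VideoCapture("sample.mp4")  # hypothetical sample video
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Encode the frame as JPEG and send the raw bytes as one binary message
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                await websocket.send(buf.tobytes())
            await asyncio.sleep(1 / 30)  # roughly 30 fps
    finally:
        cap.release()

async def main():
    async with websockets.serve(stream_frames, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

On the Flutter side each binary message can be decoded as a JPEG and painted into an image widget; keeping the image feed and the SLAM data on separate connections lines up with the "separate socket connections for different widgets" item above.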
Activate virtual environment
source env/bin/activate
Install the Python dependencies (requirements.txt was generated with pip freeze > requirements.txt)
pip3 install -r requirements.txt
Follow the Flutter setup instructions here
Launch the image processing script and websocket server
python3 slam.py
Then launch the Flutter app
flutter run -d chrome
Install the C++ extension for VS Code, then run and debug in VS Code with the "Debug c++ slam" configuration.
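For reference, the launch configuration might look roughly like the sketch below; it assumes a gdb-based Linux setup and an executable built to build/slam, so adjust program, args, and MIMode to match your build.

```json
// .vscode/launch.json -- sketch only; paths and debugger depend on your build setup
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug c++ slam",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build/slam",
      "args": [],
      "cwd": "${workspaceFolder}",
      "MIMode": "gdb"
    }
  ]
}
```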