Published in IEEE Robotics and Automation Letters (RA-L), 2024.
EdgeFlowNet by the Perception and Autonomous Robotics (PeAR) Group at the Department of Robotics Engineering, Worcester Polytechnic Institute.
Optical flow estimation is a critical task for tiny mobile robots to enable safe and accurate navigation, obstacle avoidance, and other functionalities. However, optical flow estimation on tiny robots is challenging due to limited onboard sensing and computation capabilities. In this paper, we propose EdgeFlowNet, a high-speed, low-latency dense optical flow approach for tiny autonomous mobile robots by harnessing the power of edge computing. We demonstrate the efficacy of our approach by deploying EdgeFlowNet on a tiny quadrotor to perform static obstacle avoidance, flight through unknown gaps, and dynamic obstacle dodging. EdgeFlowNet is about 20× faster than the previous state-of-the-art approaches while improving accuracy by over 20% and using only 1.08 W of power, enabling advanced autonomy on palm-sized tiny mobile robots.
You can download the MPI-Sintel dataset from here and the FlyingChairs2 dataset from here.
We provide an nvidia-docker container to run the code for different datasets:
- Install Docker from here
- Also follow the post-installation steps for Linux if you don't want to run Docker with `sudo`
- Finally, follow the nvidia-docker install steps mentioned here
We use the following command structure to run the code, where `dataset_path` is assumed to contain the `Sintel` and `FlyingChairs2` dataset folders:
```
python wrappers/run_test.py --dataset <dataset> --dataset_path <dataset_path>
```
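For reference, `dataset_path` is expected to look roughly like this (assuming the dataset archives are extracted with their default folder names):

```
dataset_path/
├── Sintel/           # MPI-Sintel, as extracted from the official download
└── FlyingChairs2/    # FlyingChairs2, as extracted from the official download
```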
To run inference on Sintel, download the dataset and provide the corresponding path:
```
python wrappers/run_test.py --dataset sintel --dataset_path <sintel_dataset_path>
```
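For context, the EPE reported by the evaluation is the standard average endpoint error: the per-pixel L2 distance between predicted and ground-truth flow vectors, averaged over all pixels. Below is a minimal NumPy sketch of that metric; the function and array names are illustrative, not part of this repository:

```python
import numpy as np

def average_epe(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average endpoint error between two H x W x 2 flow fields:
    the L2 norm of the per-pixel flow difference, averaged over all pixels."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=-1).mean())
```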
The `run_test.py` script does the following (a rough sketch of the equivalent invocation follows the list):
- installs the Docker container if it doesn't exist (the first-time download takes a few minutes)
- mounts the current repository directory and the dataset directory inside the Docker container
- runs the evaluation on the provided dataset and prints the corresponding EPE at the end
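The sketch below mirrors those three steps; note that the image tag, mount points, and in-container entrypoint are illustrative assumptions, not the script's actual values:

```python
import subprocess
from pathlib import Path

IMAGE = "edgeflownet:latest"  # hypothetical image tag, for illustration only

def run_eval(dataset: str, dataset_path: str) -> None:
    # Pull the container image only if it is not already present locally.
    have_image = subprocess.run(
        ["docker", "image", "inspect", IMAGE], capture_output=True
    ).returncode == 0
    if not have_image:
        subprocess.run(["docker", "pull", IMAGE], check=True)
    # Mount the repository and dataset directories, then run the evaluation,
    # which prints the EPE on the provided dataset at the end.
    subprocess.run([
        "docker", "run", "--rm", "--gpus", "all",
        "-v", f"{Path.cwd()}:/workspace",
        "-v", f"{Path(dataset_path).resolve()}:/data",
        IMAGE,
        "python", "test.py", "--dataset", dataset, "--dataset_path", "/data",
    ], check=True)
```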
Similar to inference, you can use the following command for training on FlyingChairs2 (FC2):

```
python wrappers/run_train.py --dataset FC2 --dataset_path <FC2_dataset_path>
```
You can find the Blender scenes and corresponding assets used in the paper in the `blender` folder.
If you find our work useful, please cite us as follows:
```bibtex
@ARTICLE{Raju2024EdgeFlowNet,
  author={Raju, Sai Ramana Kiran Pinnama and Singh, Rishabh and Velmurugan, Manoj and Sanket, Nitin J.},
  journal={IEEE Robotics and Automation Letters},
  title={EdgeFlowNet: 100FPS@1W Dense Optical Flow For Tiny Mobile Robots},
  year={2024},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2024.3496336}
}
```
Copyright (c) 2024 Perception and Autonomous Robotics (PeAR)