This is a set of tools developed to train one or more agents to find the optimal path for localizing and tracking one or more targets.
Several deep Reinforcement Learning (RL) algorithms are implemented.
The environment used to train the agents is based on the OpenAI multi-agent particle environment.
The main objective is to find the optimal path that an autonomous vehicle (e.g. an autonomous underwater vehicle (AUV) or an autonomous surface vehicle (ASV)) should follow in order to localize and track an underwater target using range-only, single-beacon algorithms. The target estimation algorithms implemented are based on:
- Least Squares (LS)
- Particle Filter (PF)
An example of a trained agent can be seen below.
Legend: Blue dot = agent, Black dot = target, and Red dot = predicted target position using LS
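The LS estimate shown in red can be sketched with a standard linearized range-only least-squares solve. This is a minimal illustration, not the repository's implementation; the function name and the waypoint/target values are made up for the example:

```python
import numpy as np

def ls_target_estimate(positions, ranges):
    """Estimate a static 2-D target position from agent positions and
    range measurements using linearized least squares (range-only).

    Each range equation (x-xi)^2 + (y-yi)^2 = ri^2 is linearized by
    subtracting the equation of the first measurement, giving A @ [x, y] = b.
    """
    p = np.asarray(positions, dtype=float)   # shape (n, 2)
    r = np.asarray(ranges, dtype=float)      # shape (n,)
    x0, y0 = p[0]
    A = 2.0 * (p[1:] - p[0])                 # rows: [2(xi-x0), 2(yi-y0)]
    b = (r[0]**2 - r[1:]**2
         + p[1:, 0]**2 + p[1:, 1]**2 - x0**2 - y0**2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Example: agent waypoints around a target at (3, -2), noiseless ranges
target = np.array([3.0, -2.0])
waypoints = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
ranges = np.linalg.norm(waypoints - target, axis=1)
print(ls_target_estimate(waypoints, ranges))  # → approximately [3., -2.]
```

With noisy ranges the same solve returns the least-squares fit rather than the exact position, which is why the agent's path (the geometry of the rows of `A`) matters for estimation quality.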
The designed environment simulates the main characteristics of the marine world and the underwater acoustic communication channel, such as:
- Ocean currents. Their direction and velocity are chosen randomly at the beginning of each episode, based on the configuration file.
- Distance measurement error. Each distance measured between the agent and the target contains a random error of 1 m and a systematic error of 1% of the measured distance.
- Agent-target communication failures. Based on a dropping factor in the configuration file, a number of distance measurements are missed in each episode.
- Agent-target maximum communication distance.
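The measurement model above can be sketched as follows. This is a minimal illustration, not the repository's code; `drop_prob` and `max_dist` are placeholder values rather than the repository's defaults, while the 1 m random error and 1% systematic error follow the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_range(agent_pos, target_pos, drop_prob=0.1, max_dist=500.0):
    """Simulated acoustic range measurement between agent and target.

    Returns None when the measurement is lost: either the target is
    beyond the maximum communication distance, or the message is
    randomly dropped according to drop_prob.
    """
    true_dist = float(np.linalg.norm(np.asarray(agent_pos, dtype=float)
                                     - np.asarray(target_pos, dtype=float)))
    if true_dist > max_dist or rng.random() < drop_prob:
        return None                       # communication failure
    noise = rng.normal(0.0, 1.0)          # ~1 m random error
    bias = 0.01 * true_dist               # 1% systematic error
    return true_dist + noise + bias

print(measure_range([0.0, 0.0], [30.0, 40.0]))  # ~50 m, or None if dropped
```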
Follow the instructions below to set up a Windows computer to run the algorithms.
```
conda create -n <env-name> python=3.6
conda activate <env-name>
conda install git
conda install -c conda-forge ffmpeg
conda install -c conda-forge brotlipy
pip install gym==0.10.0
conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=9.2 -c pytorch
pip install tensorflow==2.1.0
pip install tensorboardX
pip install imageio --user
pip install progressbar
pip install pyglet==1.3.2
pip install cloudpickle
pip install tqdm
conda install matplotlib
```
Then clone this repository to your local computer:

```
git clone https://github.com/imasmitja/RLforUTracking
```
Training the deep RL network:

```
python main.py <configuration file>
```
While the DRL is training, you can visualize the plots on TensorBoard by running:

```
tensorboard --logdir=./log/<configuration file> --host=127.0.0.1
```

Then open in a web browser:

```
http://localhost:6006/
```
See a trained agent:

```
python see_trained_agent.py <configuration file>
```
Note: <configuration file> must be given without its extension. An example of the <configuration file> can be found here.
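For orientation only, a configuration file of this kind might group environment and training options into named sections. The section and option names below are hypothetical placeholders, not the repository's actual keys; refer to the example file in the repository for the real format:

```
; Hypothetical sketch only -- the actual option names are defined by the
; repository's parser; see the example configuration file linked above.
[ENVIRONMENT]
num_agents   = 1
num_targets  = 1
max_distance = 500     ; agent-target communication limit (m)
drop_factor  = 0.1     ; fraction of missed range measurements

[TRAINING]
episodes = 10000
seed     = 0
```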
This repository is part of the Artificial Intelligence methods for Underwater target Tracking (AIforUTracking) project (ID: 893089) from a Marie Sklodowska-Curie Individual Fellowship. More info can be found here.
Acknowledgements
Anyone using RLforUTracking data for a publication or project acknowledges and references this [forthcoming] publication.
“This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 893089.”
Collaborators