This repository contains JAX (Flax) implementations of Reinforcement Learning algorithms:
- Soft Actor Critic with learnable temperature
- Advantage Weighted Actor Critic
- Image Augmentation Is All You Need (only [K=1, M=1])
- Deep Deterministic Policy Gradient with Clipped Double Q-Learning
- Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
- Behavioral Cloning
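As a rough illustration of the first item, the learnable temperature in SAC is typically trained by minimizing a loss of the following form. This is a generic sketch of the technique (the function names and arguments are illustrative), not jaxrl's actual API:

```python
import jax
import jax.numpy as jnp

def temperature_loss(log_alpha, log_probs, target_entropy):
    # J(alpha) = E[-alpha * (log pi(a|s) + target_entropy)]:
    # gradient descent shrinks alpha once the policy's entropy exceeds
    # the target and grows it when exploration falls below the target.
    return -(jnp.exp(log_alpha) * (log_probs + target_entropy)).mean()

temperature_grad = jax.grad(temperature_loss)
```

Parameterizing the temperature as `exp(log_alpha)` keeps it positive without constrained optimization.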
The goal of this repository is to provide simple and clean implementations to build research on top of. Please do not use this repository for baseline results; use the original implementations (SAC, AWAC, DrQ) instead.
If you use JAXRL in your work, please cite this repository in publications:
```bibtex
@misc{jaxrl,
  author = {Kostrikov, Ilya},
  doi = {10.5281/zenodo.5535154},
  month = {10},
  title = {{JAXRL: Implementations of Reinforcement Learning algorithms in JAX}},
  url = {https://github.com/ikostrikov/jaxrl},
  year = {2021}
}
```
- Added an implementation of Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
- Added an implementation of Deep Deterministic Policy Gradient with Clipped Double Q-Learning
- Added an implementation of Soft Actor Critic v1
- Added an implementation of data augmentation from Image Augmentation Is All You Need
```bash
conda install patchelf
pip install dm_control
pip install --upgrade git+https://github.com/ikostrikov/jaxrl

# For GPU support run
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
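To confirm that the CUDA build was picked up, a quick check (assuming `jax` imports cleanly) is:

```python
import jax

# Reports "gpu" when a CUDA-enabled jaxlib is active, "cpu" otherwise.
print(jax.default_backend())
```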
If you want to run this code on GPU, please follow the instructions from the official JAX repository.
Please follow the instructions to build mujoco-py with fast headless GPU rendering.
If you want to modify the code, install the package in editable mode following the instructions below.
```bash
conda install patchelf
pip install --upgrade -e .
```
If you experience out-of-memory errors, especially with video saving enabled, please consider reading the JAX docs on GPU memory allocation. You can also try running with the following environment variable:
```bash
XLA_PYTHON_CLIENT_MEM_FRACTION=0.80 python ...
```
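Equivalently, the fraction can be set from Python, as long as it happens before the first `import jax` (the value `0.80` here is just an example):

```python
import os

# Must run before JAX is first imported; otherwise XLA has already
# reserved its default share of GPU memory.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.80"
```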
If you run your code on a remote machine and want to save videos for DeepMind Control Suite, please use EGL for rendering:
```bash
MUJOCO_GL=egl python train.py --env_name=cheetah-run --save_dir=./tmp/ --save_video
```
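The same backend selection can be made from Python; it must happen before the MuJoCo bindings are imported:

```python
import os

# Select the EGL backend so rendering works without an X server
# on headless machines; set before dm_control / mujoco-py is imported.
os.environ["MUJOCO_GL"] = "egl"
```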
Launch TensorBoard to see training and evaluation logs:

```bash
tensorboard --logdir=./tmp/
```
Copy your MuJoCo key to `./vendor`, then build the image:

```bash
cd remote
docker build -t ikostrikov/jaxrl . -f Dockerfile
```
```bash
sudo docker run -v <examples-dir>:/jaxrl/ ikostrikov/jaxrl:latest python /jaxrl/train.py --env_name=HalfCheetah-v2 --save_dir=/jaxrl/tmp/

# On GPU
sudo docker run --rm --gpus=all -v <examples-dir>:/jaxrl/ ikostrikov/jaxrl:latest python /jaxrl/train.py --env_name=HalfCheetah-v2 --save_dir=/jaxrl/tmp/
```
When contributing to this repository, please first discuss the change you wish to make via an issue. If you are not familiar with pull requests, please read GitHub's documentation on them.
Thanks to @evgenii-nikishin for help with JAX, and to @dibyaghosh for help with vmapped ensembles.