Status: Archive (code is provided as-is, no updates expected)
This repository contains a set of competitive multi-agent environments used in the paper Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments.
RoboSumo depends on numpy, gym, and mujoco_py>=1.5 (if you haven't used MuJoCo before, please refer to the installation guide). Running demos with pre-trained policies additionally requires tensorflow>=1.1.0 and click.
The requirements can be installed via pip as follows:
$ pip install -r requirements.txt
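For reference, based on the dependencies listed above, requirements.txt would contain entries along these lines (an illustrative sketch only; the file shipped with the repository is authoritative, and only the mujoco_py and tensorflow version bounds are stated here):

numpy
gym
mujoco_py>=1.5
tensorflow>=1.1.0
click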
To install RoboSumo, clone the repository and run pip install:
$ git clone https://github.com/openai/robosumo
$ cd robosumo
$ pip install -e .
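Once installed, the environments can also be used programmatically through the gym API. The snippet below is a minimal sketch rather than an official example: it assumes that importing robosumo registers the environments under IDs like RoboSumo-Ant-vs-Ant-v0 (the demo default shown further down), and that actions, observations, rewards, and done flags come as per-agent tuples since the environments are multi-agent.

# Minimal interaction sketch (not part of the official demos).
# Assumptions: importing `robosumo` registers the RoboSumo-* environments with
# gym, and step() consumes/returns one entry per competing agent.
import gym
import robosumo  # noqa: F401 -- assumed to register the environments on import

env = gym.make("RoboSumo-Ant-vs-Ant-v0")
observations = env.reset()

for _ in range(1000):
    # Sample an independent random action for each agent.
    actions = tuple(space.sample() for space in env.action_space.spaces)
    observations, rewards, dones, infos = env.step(actions)
    env.render()
    if all(dones):  # assumed: `dones` is a per-agent tuple of booleans
        observations = env.reset()

env.close()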
You can run demos of the environments using the demos/play.py script:
$ python demos/play.py
The script allows you to select different opponents as well as different policy architectures and versions for the agents. For details, please refer to the help:
$ python demos/play.py --help
Usage: play.py [OPTIONS]

Options:
  --env TEXT                    Name of the environment.  [default: RoboSumo-Ant-vs-Ant-v0]
  --policy-names [mlp|lstm]...  Policy names.  [default: mlp, mlp]
  --param-versions INTEGER...   Policy parameter versions.  [default: 1, 1]
  --max_episodes INTEGER        Number of episodes.  [default: 20]
  --help                        Show this message and exit.
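For example, to pit an MLP policy against an LSTM policy for a few episodes (assuming the multi-value options accept space-separated values, as the "..." markers in the help suggest):

$ python demos/play.py --policy-names mlp lstm --param-versions 1 1 --max_episodes 5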