Safe Multi-Agent MuJoCo

We introduce a safe multi-agent reinforcement learning benchmark, Safe Multi-Agent MuJoCo (Safe MAMuJoCo), a safety-aware modification of MAMuJoCo. Safe MAMuJoCo agents learn not only to skilfully manipulate a robot, but also to avoid dangerous obstacles and positions; Figure 1 shows example views of the environment. (This repository is under active development. We appreciate any constructive comments and suggestions.)

In particular, the background environment, agents, physics simulator, and reward function are preserved. However, as opposed to its predecessor, Safe MAMuJoCo environments come with obstacles, such as walls or bombs. Furthermore, as the risk of an agent stumbling upon an obstacle increases, the environment emits a cost.

Figure 1. Example views of robots in Safe MAMuJoCo. Body parts of different colours are controlled by different agents. Agents jointly learn to manipulate the robot, while avoiding crashing into unsafe areas.
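Concretely, the cost signal turns each task into a constrained problem. The display below is a rough sketch in standard constrained-MDP notation (ours, not the paper's exact multi-agent formulation): the joint policy is trained to maximise the usual MuJoCo return while keeping the expected discounted cost below a threshold.

    % Sketch of a constrained objective for the joint policy \pi;
    % the cost limit d is a placeholder, not a value fixed by the benchmark.
    \max_{\pi} \; \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \le d

Here r is the preserved MuJoCo reward, c is the cost emitted near obstacles, and d is a cost budget enforced by a constrained algorithm such as the one in the paper cited below.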

Installation

Set the following environment variables so that MuJoCo 2.0 and the GLEW library can be found (for example, add them to your ~/.bashrc):

    export LD_LIBRARY_PATH=${HOME}/.mujoco/mujoco200/bin
    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
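If the paths are set correctly, mujoco-py should import without errors. A minimal sanity check (assuming mujoco-py is installed; this snippet is not part of the original instructions):

    # Verify that mujoco-py can be imported with the MuJoCo 2.0 paths above.
    # Assumes mujoco-py is installed in the current Python environment.
    import mujoco_py
    print("mujoco-py imported successfully from:", mujoco_py.__file__)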

Tasks

Each task is specified by an env_args dictionary: "scenario" selects the robot, "agent_conf" describes how the robot is partitioned among agents (e.g. "2x3" means two agents controlling three joints each), "agent_obsk" sets how many links away in the robot's joint graph each agent can observe, and "episode_limit" caps the episode length. A usage sketch follows the list.

ManyAgent Ant

    env_args = {"scenario": "manyagent_ant",
                "agent_conf": "3x2",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "manyagent_ant",
                "agent_conf": "2x3",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "manyagent_ant",
                "agent_conf": "6x1",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "manyagent_ant",
                "agent_conf": "4x2",
                "agent_obsk": 1,
                "episode_limit": 1000}

HalfCheetah

    env_args = {"scenario": "HalfCheetah-v2",
                "agent_conf": "2x3",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "HalfCheetah-v2",
                "agent_conf": "3x2",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "HalfCheetah-v2",
                "agent_conf": "6x1",
                "agent_obsk": 1,
                "episode_limit": 1000}

Ant

    env_args = {"scenario": "Ant-v2",
                "agent_conf": "2x4",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "Ant-v2",
                "agent_conf": "8x1",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "Ant-v2",
                "agent_conf": "2x4d",
                "agent_obsk": 1,
                "episode_limit": 1000}

    env_args = {"scenario": "Ant-v2",
                "agent_conf": "4x2",
                "agent_obsk": 1,
                "episode_limit": 1000}

Coupled HalfCheetah

    env_args = {"scenario": "coupled_half_cheetah",
                "agent_conf": "1p1",
                "agent_obsk": 1,
                "episode_limit": 1000}

Hopper

    env_args = {"scenario": "Hopper-v2",
                "agent_conf": "3x1",
                "agent_obsk": 1,
                "episode_limit": 1000}

Publication

If you find the repository useful, please cite the paper:

@article{gu2021multi,
  title={Multi-Agent Constrained Policy Optimisation},
  author={Gu, Shangding and Kuba, Jakub Grudzien and Wen, Munning and Chen, Ruiqing and Wang, Ziyan and Tian, Zheng and Wang, Jun and Knoll, Alois and Yang, Yaodong},
  journal={arXiv preprint arXiv:2110.02793},
  year={2021}
}

Acknowledgments

We thank the contributors of the following open-source repositories: MAMuJoCo, safety-gym, and CMBPO.
