Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
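As a minimal sketch of the constrained-alignment idea named in the title (notation is illustrative, not drawn from the repository): the policy maximizes a learned helpfulness reward R subject to a learned harmlessness cost C staying within a budget d, typically solved via a Lagrangian relaxation:

\[
\max_{\theta}\; \mathbb{E}_{y \sim \pi_\theta}\big[R(x,y)\big] \quad \text{s.t.} \quad \mathbb{E}_{y \sim \pi_\theta}\big[C(x,y)\big] \le d
\qquad\Longleftrightarrow\qquad
\min_{\lambda \ge 0}\, \max_{\theta}\; \mathbb{E}\big[R(x,y) - \lambda\, C(x,y)\big] + \lambda d
\]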
JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.
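As an illustration of the framework's intended workflow (a sketch mirroring the project's quickstart; exact names and signatures should be checked against the OmniSafe docs):

```python
# Sketch of training a constrained agent with OmniSafe (per its quickstart;
# treat the exact API as an assumption and verify against the docs).
import omnisafe

# 'PPOLag' is PPO with a Lagrangian cost constraint; the environment id
# comes from Safety-Gymnasium.
agent = omnisafe.Agent('PPOLag', 'SafetyPointGoal1-v0')
agent.learn()
```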
A repository of safe reinforcement learning baselines.
NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
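A hedged illustration of the benchmark's interface (names follow the project's quickstart; treat exact signatures as assumptions): its distinguishing feature is a separate cost signal returned alongside the reward at every step.

```python
# Minimal interaction loop with Safety-Gymnasium (illustrative sketch).
import safety_gymnasium

env = safety_gymnasium.make('SafetyPointGoal1-v0')
obs, info = env.reset(seed=0)
episode_cost = 0.0
for _ in range(1000):
    action = env.action_space.sample()  # random policy, for illustration only
    # step() returns a cost term in addition to the usual Gymnasium tuple.
    obs, reward, cost, terminated, truncated, info = env.step(action)
    episode_cost += cost                # accumulated constraint-violation signal
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```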
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms
Multi-Agent Constrained Policy Optimisation (MACPO; MAPPO-L).
Open-source reinforcement learning environment for autonomous racing — featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the L2R core libraries.
Reading list on adversarial perspectives and robustness in deep reinforcement learning.
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning" (safe reinforcement learning, energy management).
Safe Pontryagin Differentiable Programming (Safe PDP) is a theoretical and algorithmic safe differentiable-programming framework for solving a broad class of safety-critical learning and control tasks.
[ICLR 2024] The official implementation of "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model"
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
The Verifiably Safe Reinforcement Learning Framework
ICLR 2024: SafeDreamer: Safe Reinforcement Learning with World Models
Code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions": an implementation of SAC combined with Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments.
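Independent of this repository's code, the safety-filter idea behind CBF methods can be sketched generically: minimally correct the policy's nominal action so the barrier condition Lf_h + Lg_h·u + α·h ≥ 0 holds. The closed form below is the textbook single-constraint case, not the paper's robust formulation:

```python
# Generic control-barrier-function safety filter (illustrative, not the
# repository's code): project a nominal action onto the half-space where
# the CBF condition  Lf_h + Lg_h @ u + alpha * h >= 0  holds.
import numpy as np

def cbf_filter(u_nom, Lf_h, Lg_h, h, alpha=1.0):
    """Closed-form min-norm correction for a single affine CBF constraint.

    u_nom : nominal (possibly unsafe) action from the RL policy
    Lf_h  : Lie derivative of h along the drift dynamics f(x)  (scalar)
    Lg_h  : Lie derivative of h along the input dynamics g(x)  (1-D array)
    h     : barrier value at the current state (h >= 0 means safe)
    """
    residual = Lf_h + Lg_h @ u_nom + alpha * h
    if residual >= 0.0:   # nominal action already satisfies the constraint
        return u_nom
    # Minimal L2 correction along Lg_h to restore feasibility.
    return u_nom - residual * Lg_h / (Lg_h @ Lg_h)

# Toy check: 1-D integrator x_dot = u with h(x) = 1 - x (stay below x = 1).
u_safe = cbf_filter(u_nom=np.array([2.0]), Lf_h=0.0,
                    Lg_h=np.array([-1.0]), h=0.1)
print(u_safe)  # action pulled back so the barrier condition holds exactly
```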
Safe multi-agent reinforcement learning for decision-making in autonomous driving.
Implementation of PPO-Lagrangian in PyTorch.
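As background for this entry, the multiplier mechanics shared by PPO-Lagrangian implementations can be sketched in a few lines of PyTorch (an illustrative sketch with assumed names such as cost_limit, not this repository's code):

```python
# Illustrative core of a PPO-Lagrangian update (not this repository's code):
# the policy loss is penalized by a learnable multiplier, which in turn does
# gradient ascent on the constraint violation (episode cost minus the limit).
import torch

cost_limit = 25.0
# Optimize an unconstrained parameter and map it through softplus
# so the effective multiplier stays non-negative.
lagrange_param = torch.nn.Parameter(torch.zeros(()))
lagrange_opt = torch.optim.Adam([lagrange_param], lr=5e-3)

def penalized_policy_loss(reward_surrogate, cost_surrogate):
    lam = torch.nn.functional.softplus(lagrange_param).detach()
    # PPO's clipped reward surrogate, penalized by lam times the cost
    # surrogate; dividing by (1 + lam) keeps the gradient scale stable.
    return (-reward_surrogate + lam * cost_surrogate) / (1.0 + lam)

def update_lagrange(mean_ep_cost):
    lam = torch.nn.functional.softplus(lagrange_param)
    # Gradient ascent on lam * (J_c - d): grow lam while the constraint
    # is violated, shrink it when there is slack.
    loss = -lam * (mean_ep_cost - cost_limit)
    lagrange_opt.zero_grad()
    loss.backward()
    lagrange_opt.step()
```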
LAMBDA is a model-based reinforcement learning agent that uses Bayesian world models for safe policy optimization.
Implementations of SAILR, PDO, and CSC.