
Provably Optimal Reinforcement Learning under Safety Filtering

Abstract

Recent advances in reinforcement learning (RL) enable its use on increasingly complex tasks, but the lack of formal safety guarantees still limits its application in safety-critical settings. A common practical approach is to augment the RL policy with a safety filter that overrides unsafe actions to prevent failures during both training and deployment. However, safety filtering is often perceived as sacrificing performance and hindering the learning process. We show that this perceived safety-performance tradeoff is not inherent and prove, for the first time, that enforcing safety with a sufficiently permissive safety filter does not degrade asymptotic performance. We formalize RL safety with a safety-critical Markov decision process (SC-MDP), which requires categorical, rather than high-probability, avoidance of catastrophic failure states. Additionally, we define an associated filtered MDP in which all actions result in safe effects, thanks to a safety filter that is considered to be a part of the environment. Our main theorem establishes that (i) learning in the filtered MDP is safe categorically, (ii) standard RL convergence carries over to the filtered MDP, and (iii) any policy that is optimal in the filtered MDP, when executed through the same filter, achieves the same asymptotic return as the best safe policy in the SC-MDP, yielding a complete separation between safety enforcement and performance optimization. We validate the theory on Safety Gymnasium with representative tasks and constraints, observing zero violations during training and final performance matching or exceeding unfiltered baselines. Together, these results shed light on a long-standing question in safety-filtered learning and provide a simple, principled recipe for safe RL: train and deploy RL policies with the most permissive safety filter that is available.
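The "filtered MDP" can be pictured as an environment wrapper: the learner proposes actions, and a filter built into the environment overrides any unsafe ones before they are executed. The sketch below illustrates that idea only; the filter interface (is_safe, fallback_action), the exposed state attribute, and the environment name are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of training through a safety filter treated as part of the
# environment (the "filtered MDP" of the abstract). Hypothetical interfaces.
import gymnasium as gym


class SafetyFilterWrapper(gym.Wrapper):
    """Overrides unsafe actions so every executed action has a safe effect."""

    def __init__(self, env, safety_filter):
        super().__init__(env)
        self.safety_filter = safety_filter

    def step(self, action):
        # Assumption: the wrapped environment exposes its current state.
        state = self.env.unwrapped.state
        if not self.safety_filter.is_safe(state, action):
            # Intervene only when necessary: a more permissive filter
            # overrides less often and constrains exploration less.
            action = self.safety_filter.fallback_action(state)
        return self.env.step(action)
```

Any off-the-shelf RL algorithm can then be trained on the wrapped environment; per the paper's main theorem, a policy that is optimal in this filtered MDP, deployed through the same filter, attains the asymptotic return of the best safe policy.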
Authors (4)
Donggeon David Oh
Duy P. Nguyen
Haimin Hu
Jaime F. Fisac
Submitted
October 20, 2025
arXiv Category
cs.LG

Key Contributions

Proves that enforcing safety with a sufficiently permissive safety filter does not degrade asymptotic RL performance. Formalizes RL safety via a safety-critical Markov decision process (SC-MDP) and an associated filtered MDP, showing that the perceived safety-performance tradeoff is not inherent.
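In the abstract's terms, "categorical" safety means the failure set is avoided with probability one, not merely with high probability. The LaTeX below is a hedged sketch of what the SC-MDP objective and the separation result might look like; the notation is assumed and may differ from the paper.

```latex
% Sketch (notation assumed, not taken from the paper).
% SC-MDP objective: maximize return over policies that never reach the
% failure set \mathcal{F} (categorical, i.e. probability-one, avoidance).
\max_{\pi \in \Pi_{\mathrm{safe}}}
  \mathbb{E}_\pi\!\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\Big],
\qquad
\Pi_{\mathrm{safe}} = \{\pi : \Pr\nolimits_\pi(\exists\, t,\ s_t \in \mathcal{F}) = 0\}.
% Separation result (informal): if \pi^\star is optimal in the filtered MDP
% induced by a sufficiently permissive filter \phi, then executing
% \phi \circ \pi^\star attains the optimal value of the SC-MDP.
```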

Business Value

Enables the safe deployment of RL agents in safety-critical applications like autonomous vehicles, industrial automation, and healthcare, increasing trust and adoption of AI systems.