
R2L (Reliable Reinforcement Learning): Guaranteed Return & Reliable Policies in Reinforcement Learning

reinforcement-learning
📄 Abstract

In this work, we address the problem of determining reliable policies in reinforcement learning (RL), with a focus on optimization under uncertainty and the need for performance guarantees. While classical RL algorithms aim to maximize the expected return, many real-world applications - such as routing, resource allocation, or sequential decision-making under risk - require strategies that ensure not only high average performance but also a guaranteed probability of success. To this end, we propose a novel formulation in which the objective is to maximize the probability that the cumulative return exceeds a prescribed threshold. We demonstrate that this reliable RL problem can be reformulated, via a state-augmented representation, into a standard RL problem, thereby allowing the use of existing RL and deep RL algorithms without the need for entirely new algorithmic frameworks. Theoretical results establish the equivalence of the two formulations and show that reliable strategies can be derived by appropriately adapting well-known methods such as Q-learning or Dueling Double DQN. To illustrate the practical relevance of the approach, we consider the problem of reliable routing, where the goal is not to minimize the expected travel time but rather to maximize the probability of reaching the destination within a given time budget. Numerical experiments confirm that the proposed formulation leads to policies that effectively balance efficiency and reliability, highlighting the potential of reliable RL for applications in stochastic and safety-critical environments.
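The reliable-routing idea from the abstract can be illustrated with a minimal sketch (this is not the paper's code; the toy problem, names, and parameters are illustrative). The state is augmented with the remaining time budget, and the reward is a success indicator: 1 if the goal is reached before the budget runs out, 0 otherwise, so that the expected return of the augmented problem equals the probability of on-time arrival.

```python
import random

# Toy "reliable routing" sketch. One decision at the start node:
# edge 0 is risky (travel time 1 or 5, each with prob. 0.5),
# edge 1 is safe (travel time 2 always); both lead to the goal.
# Objective: maximize P(arrival time <= budget). This is handled by
# augmenting the state with the remaining budget and rewarding 1
# only when the goal is reached with budget still non-negative.

def travel_time(action, rng):
    """Stochastic travel time of the chosen edge."""
    if action == 0:                       # risky edge
        return 1 if rng.random() < 0.5 else 5
    return 2                              # safe edge

def train(budget, episodes=20000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Augmented state is (node, remaining_budget); in this toy example
    # the only decision state is the start node with the full budget.
    q = {0: 0.0, 1: 0.0}                  # Q-values for the two edges
    for _ in range(episodes):
        a = rng.choice([0, 1]) if rng.random() < eps else max(q, key=q.get)
        remaining = budget - travel_time(a, rng)
        r = 1.0 if remaining >= 0 else 0.0    # success-indicator reward
        q[a] += alpha * (r - q[a])        # one-step terminal Q update
    return q

q = train(budget=3)
best = max(q, key=q.get)
# With budget 3, the safe edge (time 2) always succeeds, so its Q-value
# estimates a success probability of 1.0; the risky edge succeeds only
# when its time is 1, i.e. with probability 0.5.
```

With a tighter budget of 1, the safe edge can never succeed, and the learned policy flips to the risky edge, whose 0.5 success probability is now the best achievable: the same algorithm trades expected travel time against reliability purely through the indicator reward.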
Authors (1)
Nadir Farhi
Submitted
October 20, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

R2L introduces a novel formulation of reinforcement learning that maximizes the probability of the return exceeding a prescribed threshold, providing performance guarantees rather than only high average performance. This reliable RL problem is shown to be equivalent to a standard RL problem via a state-augmented representation, so existing RL and deep RL algorithms (e.g., Q-learning, Dueling Double DQN) can be adapted to it without a new algorithmic framework, enabling risk-averse decision-making in critical applications.
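In formula form, the reformulation described above can be summarized as follows (the symbols here are illustrative shorthand, not necessarily the paper's notation). The reliable objective is

```latex
\max_{\pi} \;\; \mathbb{P}_{\pi}\!\left( \sum_{t=0}^{T-1} r_{t+1} \;\ge\; \tau \right),
```

where $\tau$ is the prescribed return threshold. Augmenting the state as $\tilde{s}_t = (s_t, g_t)$, with $g_t = \sum_{k<t} r_{k+1}$ the return accumulated so far, and assigning the augmented reward $\tilde{r} = \mathbb{1}\{g_T \ge \tau\}$ at the terminal step (0 elsewhere) gives $\mathbb{E}_{\pi}[\tilde{r}] = \mathbb{P}_{\pi}(g_T \ge \tau)$, so maximizing the expected return of the augmented problem with any standard RL algorithm maximizes the success probability of the original one.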

Business Value

Enables the development of more robust and trustworthy AI systems for high-stakes applications where failure is costly, such as financial trading, critical infrastructure management, and autonomous systems, by providing quantifiable performance guarantees.