Nash Policy Gradient: A Policy Gradient Method with Iteratively Refined Regularization for Finding Nash Equilibria

reinforcement-learning › multi-agent
📄 Abstract

Finding Nash equilibria in imperfect-information games remains a central challenge in multi-agent reinforcement learning. While regularization-based methods have recently achieved last-iterate convergence to a regularized equilibrium, they require the regularization strength to shrink toward zero to approximate a Nash equilibrium, often leading to unstable learning in practice. Instead, we fix the regularization strength at a large value for robustness and achieve convergence by iteratively refining the reference policy. Our main theoretical result shows that this procedure guarantees strictly monotonic improvement and convergence to an exact Nash equilibrium in two-player zero-sum games, without requiring a uniqueness assumption. Building on this framework, we develop a practical algorithm, Nash Policy Gradient (NashPG), which preserves the generalizability of policy gradient methods while relying solely on the current and reference policies. Empirically, NashPG achieves exploitability comparable to or lower than that of prior model-free methods on classic benchmark games and scales to large domains such as Battleship and No-Limit Texas Hold'em, where NashPG consistently attains higher Elo ratings.
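
For orientation, regularization-based equilibrium methods of this kind typically penalize each player's expected return with a divergence to a reference policy. A plausible form of the per-player objective implied by the abstract (an assumption; the paper's exact formulation may differ) is:

```latex
% Assumed per-player objective with a fixed, large regularization weight \tau:
\max_{\pi_i}\;
  \mathbb{E}_{a \sim (\pi_i,\,\pi_{-i})}\!\bigl[r_i(a)\bigr]
  \;-\; \tau\,\mathrm{KL}\!\bigl(\pi_i \,\big\|\, \pi_i^{\mathrm{ref}}\bigr)
% After approximately solving this regularized game, set
% \pi_i^{\mathrm{ref}} \leftarrow \pi_i and repeat, rather than annealing \tau \to 0.
```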
Authors (6)
Eason Yu
Tzu Hao Liu
Yunke Wang
Clément L. Canonne
Nguyen H. Tran
Chang Xu
Submitted: October 21, 2025
arXiv Category: cs.LG
arXiv PDF

Key Contributions

Proposes Nash Policy Gradient (NashPG), a novel policy gradient method for finding Nash equilibria in imperfect-information games. It achieves convergence to exact Nash equilibria in two-player zero-sum games by iteratively refining a reference policy with a fixed, large regularization strength, avoiding the instability of shrinking regularization.
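
To make the outer-loop idea concrete, here is a minimal sketch of iteratively refined regularization on a normal-form two-player zero-sum game. It is not the authors' algorithm: the `game` interface (`players`, `num_actions(p)`, `payoff_vector(p, policies)`) and the mirror-ascent inner update are assumptions used only to illustrate keeping `tau` fixed while the reference policy is refreshed.

```python
import numpy as np

def nash_pg_sketch(game, outer_iters=50, inner_steps=500, tau=1.0, lr=0.1):
    """Hypothetical sketch: fixed, large regularization with reference refinement.

    Each outer iteration approximately solves the KL-regularized game, then
    replaces the reference policy with the result (instead of shrinking tau).
    The `game` object is an assumed interface, not part of the paper.
    """
    policies = {p: np.full(game.num_actions(p), 1.0 / game.num_actions(p))
                for p in game.players}
    reference = {p: v.copy() for p, v in policies.items()}

    for _ in range(outer_iters):
        for _ in range(inner_steps):
            # Evaluate both players against the current joint policy first,
            # so the update is simultaneous rather than sequential.
            grads = {}
            for p in game.players:
                q = game.payoff_vector(p, policies)  # expected payoff per action
                # KL pull toward the (fixed-for-now) reference policy.
                grads[p] = q - tau * (np.log(policies[p] + 1e-12)
                                      - np.log(reference[p] + 1e-12))
            for p in game.players:
                # Multiplicative-weights / mirror-ascent style update.
                logits = np.log(policies[p] + 1e-12) + lr * grads[p]
                policies[p] = np.exp(logits - logits.max())
                policies[p] /= policies[p].sum()
        # Refinement step: the regularized solution becomes the new reference.
        reference = {p: v.copy() for p, v in policies.items()}
    return policies
```

Under this reading, the large fixed `tau` keeps each inner solve well conditioned, while refreshing the reference policy is what moves the iterates toward an unregularized Nash equilibrium.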

Business Value

Enables the development of more robust and predictable AI agents in competitive or cooperative multi-agent environments, applicable to areas like autonomous vehicle coordination, resource allocation, and algorithmic trading.