📄 Abstract
Existing language agents often encounter difficulties in dynamic adversarial
games due to poor strategic reasoning. To mitigate this limitation, a promising
approach is to allow agents to learn from game interactions automatically,
without relying on costly expert-labeled data. Unlike static environments where
agents receive fixed feedback or rewards, selecting appropriate opponents in
dynamic adversarial games can significantly impact learning performance.
However, opponent selection in adversarial environments remains
underexplored. In this paper, we propose a Step-level poliCy
Optimization method through Play-And-Learn, SCO-PAL. Leveraging SCO-PAL, we
conduct a detailed analysis of opponent selection by setting opponents at
different levels and find that self-play is the most effective way to improve
strategic reasoning in such adversarial environments. Utilizing SCO-PAL with
self-play, we increase the average win rate against four opponents by
approximately 30% compared to baselines and achieve a 54.76% win rate against
GPT-4 in six adversarial games.
Authors (6)
Yikai Zhang
Ye Rong
Siyu Yuan
Jiangjie Chen
Jian Xie
Yanghua Xiao
Submitted
October 19, 2025
Key Contributions
Proposes SCO-PAL (Step-level poliCy Optimization through Play-And-Learn), a method to enhance strategic reasoning in language agents within adversarial games. Crucially, by analyzing learning against opponents set at different skill levels, it demonstrates that self-play is the most effective way to improve agent performance, addressing the limitations of learning in static environments or against fixed opponents.
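The self-play idea above can be sketched in a toy loop. This is an illustrative sketch only, not the paper's implementation: the `Agent` class, the scalar `skill` stand-in for a learned policy, and the update rule are all assumptions made for clarity. The key point it shows is that the opponent is a copy of the agent's current policy, so its difficulty tracks the agent's level throughout training.

```python
import random

class Agent:
    """Toy agent whose 'policy' is a single scalar skill value (an
    illustrative stand-in for a real learned language-agent policy)."""

    def __init__(self, skill=0.5):
        self.skill = skill

    def clone(self):
        # Self-play opponent: a frozen copy at the current skill level.
        return Agent(self.skill)

def play(a, b, rng):
    # 'a' wins with probability proportional to its relative skill.
    return rng.random() < a.skill / (a.skill + b.skill)

def train_self_play(agent, episodes=1000, lr=0.01, seed=0):
    """Toy training loop: the opponent is re-cloned from the current
    agent each episode, so it always matches the agent's level."""
    rng = random.Random(seed)
    for _ in range(episodes):
        opponent = agent.clone()
        won = play(agent, opponent, rng)
        # Toy stand-in for a step-level policy update: take a larger
        # step after wins and a smaller one after losses.
        agent.skill += lr if won else 0.5 * lr
    return agent

agent = train_self_play(Agent())
```

Because the opponent is regenerated from the current policy each episode, the match stays near 50/50 throughout training, which is the property that makes self-play a well-calibrated curriculum compared to a fixed weak or strong opponent.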
Business Value
Enables the development of more sophisticated AI agents capable of strategic interaction, useful in competitive environments like gaming, negotiation, and complex simulations.