Abstract
A key challenge in offline multi-agent reinforcement learning (MARL) is
achieving effective many-agent multi-step coordination in complex environments.
In this work, we propose Oryx, a novel algorithm for offline cooperative MARL
to directly address this challenge. Oryx adapts the recently proposed
retention-based architecture Sable and combines it with a sequential form of
implicit constraint Q-learning (ICQ) to develop a novel offline autoregressive
policy update scheme. This allows Oryx to solve complex coordination challenges
while maintaining temporal coherence over long trajectories. We evaluate Oryx
across a diverse set of benchmarks from prior works -- SMAC, RWARE, and
Multi-Agent MuJoCo -- covering tasks of both discrete and continuous control,
varying in scale and difficulty. Oryx achieves state-of-the-art performance on
more than 80% of the 65 tested datasets, outperforming prior offline MARL
methods and demonstrating robust generalisation across domains with many agents
and long horizons. Finally, we introduce new datasets to push the limits of
many-agent coordination in offline MARL, and demonstrate Oryx's superior
ability to scale effectively in such settings.
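To make the policy-update idea concrete, below is a minimal sketch (not the authors' code) of a sequential, ICQ-style advantage-weighted update: each agent's action distribution conditions on the actions of the agents before it, and the log-likelihood of dataset actions is weighted by softmax-normalised advantages so the learned policy stays within the support of the data. The AutoregressivePolicy class, its simple linear heads (standing in for Sable's retention-based backbone), the toy dimensions, and the randomly generated advantages are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, obs_dim, n_actions, batch_size = 3, 8, 5, 64
beta = 1.0  # temperature of the implicit (softmax) constraint

class AutoregressivePolicy(nn.Module):
    """Toy stand-in: agent i conditions on its own observation and the actions of agents < i."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(obs_dim + i * n_actions, n_actions) for i in range(n_agents)]
        )

    def log_prob(self, obs, actions):
        # obs: [B, n_agents, obs_dim]; actions: [B, n_agents] taken from the offline dataset.
        log_probs, prev_actions = [], []
        for i, head in enumerate(self.heads):
            ctx = torch.cat([obs[:, i]] + prev_actions, dim=-1)
            logits = head(ctx)
            log_probs.append(F.log_softmax(logits, dim=-1).gather(1, actions[:, i:i + 1]))
            prev_actions.append(F.one_hot(actions[:, i], n_actions).float())
        return torch.cat(log_probs, dim=-1)  # [B, n_agents]

policy = AutoregressivePolicy()
optimiser = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Toy offline batch; in practice the advantages would come from a learned critic.
obs = torch.randn(batch_size, n_agents, obs_dim)
actions = torch.randint(n_actions, (batch_size, n_agents))
advantages = torch.randn(batch_size, n_agents)

# ICQ-style improvement: softmax-normalised advantage weights over the batch,
# implicitly constraining the policy to actions supported by the dataset.
weights = torch.softmax(advantages / beta, dim=0).detach()
loss = -(weights * policy.log_prob(obs, actions)).sum()

optimiser.zero_grad()
loss.backward()
optimiser.step()
```

The sequential loop is the key design choice illustrated here: because agent i's distribution is conditioned on the sampled actions of agents 1..i-1, the joint policy factorises autoregressively, which is what lets this style of update coordinate many agents rather than treating them independently.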
Authors (13)
Claude Formanek
Omayma Mahjoub
Louay Ben Nessir
Sasha Abramowitz
Ruan de Kock
Wiem Khlifi
+7 more
Key Contributions
Proposes Oryx, a novel algorithm for offline cooperative MARL that combines a retention-based architecture with sequential implicit constraint Q-learning. This enables effective many-agent coordination and maintains temporal coherence over long trajectories.
Business Value
Enables more sophisticated and coordinated behavior in multi-agent systems, crucial for applications like swarm robotics, autonomous logistics, and complex simulations.