
Mind the GAP! The Challenges of Scale in Pixel-based Deep Reinforcement Learning

📄 Abstract

Scaling deep reinforcement learning in pixel-based environments presents a significant challenge, often resulting in diminished performance. While recent works have proposed algorithmic and architectural approaches to address this, the underlying cause of the performance drop remains unclear. In this paper, we identify the connection between the output of the encoder (a stack of convolutional layers) and the ensuing dense layers as the main underlying factor limiting scaling capabilities; we denote this connection as the bottleneck, and we demonstrate that previous approaches implicitly target this bottleneck. As a result of our analyses, we present global average pooling as a simple yet effective way of targeting the bottleneck, thereby avoiding the complexity of earlier approaches.
Authors (2)
Ghada Sokar
Pablo Samuel Castro
Submitted
May 23, 2025
arXiv Category
cs.LG

Key Contributions

This paper identifies the bottleneck between the encoder's output and dense layers as the primary cause of performance degradation in scaled pixel-based deep reinforcement learning. It proposes global average pooling as a simple and effective solution to target this bottleneck, avoiding the complexity of previous methods.
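To make the bottleneck concrete, here is a minimal NumPy sketch contrasting the usual flatten connection with global average pooling (GAP). The feature-map dimensions (64 channels, 7×7 spatial) and the dense-layer width (512) are illustrative assumptions, not the paper's exact architecture; the point is only that GAP shrinks the number of inputs to the first dense layer from C·H·W to C.

```python
import numpy as np

# Hypothetical encoder output for a pixel-based RL agent:
# (channels, height, width) — example sizes, not the paper's network.
C, H, W = 64, 7, 7
feature_map = np.random.rand(C, H, W)

# Flatten: every spatial position of every channel feeds the dense layer.
flat = feature_map.reshape(-1)        # C * H * W = 3136 inputs

# Global average pooling: one scalar per channel feeds the dense layer.
gap = feature_map.mean(axis=(1, 2))   # C = 64 inputs

# Resulting weight counts for a first dense layer of width 512 (assumed):
hidden = 512
print(flat.size * hidden)  # 1605632 weights with flatten
print(gap.size * hidden)   # 32768 weights with GAP
```

Because GAP's output size depends only on the channel count, scaling the encoder's spatial resolution no longer inflates the dense layer's input, which is one intuition for why targeting this connection helps at scale.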

Business Value

Improved performance and scalability in RL applications can lead to more capable AI agents in areas like gaming, simulation, and robotics, reducing development time and costs.