
$\beta$-DQN: Improving Deep Q-Learning By Evolving the Behavior

Abstract

While many sophisticated exploration methods have been proposed, their lack of generality and high computational cost often lead researchers to favor simpler methods like $\epsilon$-greedy. Motivated by this, we introduce $\beta$-DQN, a simple and efficient exploration method that augments the standard DQN with a behavior function $\beta$. This function estimates the probability that each action has been taken at each state. By leveraging $\beta$, we generate a population of diverse policies that balance exploration between state-action coverage and overestimation bias correction. An adaptive meta-controller is designed to select an effective policy for each episode, enabling flexible and explainable exploration. $\beta$-DQN is straightforward to implement and adds minimal computational overhead to the standard DQN. Experiments on both simple and challenging exploration domains show that $\beta$-DQN outperforms existing baseline methods across a wide range of tasks, providing an effective solution for improving exploration in deep reinforcement learning.
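To make the idea concrete, here is a toy sketch of how a behavior function could be used to derive a family of policies from a single set of Q-values. This is an illustrative assumption, not the authors' exact construction: the scoring rule, the `lam` parameter, and the log-based exploration bonus are all hypothetical stand-ins for the paper's policy population.

```python
import numpy as np

def make_policy(q_values, beta, lam):
    """Hypothetical policy family parameterized by lam.

    q_values: (batch, n_actions) Q-value estimates from DQN.
    beta:     (batch, n_actions) estimated probability that each
              action has been taken in each state (the behavior fn).
    lam:      trade-off knob; 0 = pure exploitation, larger values
              favor rarely taken actions (better state-action coverage).
    """
    # Rarely taken actions (low beta) receive a larger exploration bonus.
    bonus = -np.log(beta + 1e-8)
    return np.argmax(q_values + lam * bonus, axis=-1)

# One state, three actions: action 0 has the highest Q but has been
# taken most often; action 2 has barely been tried.
q = np.array([[1.0, 0.5, 0.2]])
beta = np.array([[0.90, 0.09, 0.01]])

print(make_policy(q, beta, lam=0.0))  # exploitative member of the population
print(make_policy(q, beta, lam=2.0))  # exploratory member
```

Sweeping `lam` yields a population of policies ranging from greedy to strongly exploratory; in the paper's framing, a meta-controller would then pick which member to run for each episode.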
Authors (6)
Hongming Zhang
Fengshuo Bai
Chenjun Xiao
Chao Gao
Bo Xu
Martin MΓΌller
Submitted
January 1, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper introduces $\beta$-DQN, a simple and efficient exploration method for deep Q-learning that augments the standard DQN with a behavior function $\beta$ estimating the probability that each action has been taken at each state. Leveraging $\beta$, the method generates a population of diverse policies that balance state-action coverage against overestimation-bias correction; an adaptive meta-controller then selects an effective policy for each episode, achieving strong performance with minimal computational overhead.

Business Value

Enables more efficient training of RL agents for complex tasks like robotics control or game playing, reducing development time and improving performance.