📄 Abstract
Symmetry is pervasive in robotics and has been widely exploited to improve
sample efficiency in deep reinforcement learning (DRL). However, existing
approaches primarily focus on spatial symmetries, such as reflection, rotation,
and translation, while largely neglecting temporal symmetries. To address this
gap, we explore time reversal symmetry, a form of temporal symmetry commonly
found in robotics tasks such as door opening and closing. We propose Time
Reversal symmetry enhanced Deep Reinforcement Learning (TR-DRL), a framework
that combines trajectory reversal augmentation and time reversal guided reward
shaping to efficiently solve temporally symmetric tasks. Our method generates
reversed transitions from fully reversible transitions, identified by a
proposed dynamics-consistent filter, to augment the training data. For
partially reversible transitions, we apply reward shaping guided by
successful trajectories from the reversed task. Extensive
experiments on the Robosuite and MetaWorld benchmarks demonstrate that TR-DRL
is effective in both single-task and multi-task settings, achieving higher
sample efficiency and stronger final performance compared to baseline methods.
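To make the first component concrete, here is a minimal Python sketch of trajectory reversal augmentation with a dynamics-consistency check. The abstract does not specify the exact form of the paper's dynamics-consistent filter or reversal operators, so `reverse_action`, `reverse_reward`, the one-step `dynamics_model.predict` interface, and the threshold `eps` are hypothetical stand-ins.

```python
import numpy as np

def is_dynamics_consistent(dynamics_model, s, a, s_next, reverse_action, eps=0.05):
    # Hypothetical filter: treat (s, a, s') as fully reversible if a learned
    # forward model predicts that executing the reversed action from s'
    # returns (approximately) to s.
    a_rev = reverse_action(a)                       # assumed reversal operator
    s_pred = dynamics_model.predict(s_next, a_rev)  # assumed one-step predictor
    return np.linalg.norm(s_pred - s) < eps

def reversal_augment(batch, dynamics_model, reverse_action, reverse_reward):
    # Augment a replay batch with time-reversed transitions. Only transitions
    # that pass the consistency filter are reversed; the rest would be handled
    # by reward shaping in the paper's framework.
    augmented = list(batch)
    for (s, a, r, s_next, done) in batch:
        if done:
            continue  # terminal transitions are not reversed in this sketch
        if is_dynamics_consistent(dynamics_model, s, a, s_next, reverse_action):
            augmented.append(
                (s_next, reverse_action(a), reverse_reward(r, s, s_next), s, done)
            )
    return augmented
```

The filter admits a reversed transition only when replaying the reversed action from s' lands near s under the learned model, so only (approximately) reversible dynamics contribute reversed training data.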
Authors (4)
Yunpeng Jiang
Jianshu Hu
Paul Weng
Yutong Ban
Key Contributions
Proposes TR-DRL, a framework that leverages time reversal symmetry to significantly improve sample efficiency in deep reinforcement learning for robotic manipulation. It introduces trajectory reversal augmentation and time reversal guided reward shaping, addressing the underutilization of temporal symmetries in existing DRL approaches.
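For the reward shaping component, one standard way to realize shaping guided by reference trajectories is a potential-based term, which is known to preserve the optimal policy. The sketch below is an illustration under that assumption, not the paper's exact formulation: it scores a state by its negative distance to the nearest state drawn from successful trajectories of the reversed task, read back-to-front so they trace a path toward the forward task's goal.

```python
import numpy as np

def make_tr_shaping(reversed_success_states, gamma=0.99, scale=1.0):
    # `reversed_success_states`: array of states (N, d) collected from
    # successful trajectories of the *reversed* task, reversed in time.
    ref = np.asarray(reversed_success_states)

    def potential(s):
        # Negative distance to the nearest reference state.
        return -scale * np.min(np.linalg.norm(ref - s, axis=1))

    def shaped_reward(r, s, s_next):
        # Potential-based shaping: r' = r + gamma * phi(s') - phi(s).
        return r + gamma * potential(s_next) - potential(s)

    return shaped_reward
```

A returned `shaped_reward` can then replace the environment reward during training on partially reversible transitions, nudging the agent toward states that successful reversed-task rollouts passed through.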
Business Value
Accelerates the development and deployment of robots for tasks involving repetitive or reversible actions, reducing training time and costs in industrial automation and service robotics.