
MOBODY: Model-Based Off-Dynamics Offline Reinforcement Learning

📄 Abstract

We study off-dynamics offline reinforcement learning, where the goal is to learn a policy from an offline source dataset and a limited target dataset with mismatched dynamics. Existing methods either penalize the reward or discard source transitions that occur in parts of the transition space with high dynamics shift. As a result, they optimize the policy using data from low-shift regions only, limiting exploration of high-reward states in the target domain that fall outside these regions. Consequently, such methods often fail when the dynamics shift is significant or the optimal trajectories lie outside the low-shift regions. To overcome this limitation, we propose MOBODY, a Model-Based Off-Dynamics offline RL algorithm that optimizes the policy with transitions generated from a learned target dynamics model, so that the policy can explore the target domain rather than being trained only on low-dynamics-shift transitions. For dynamics learning, building on the observation that reaching the same next state requires different actions in different domains, MOBODY uses a separate action encoder for each domain to map actions into a shared latent space, while sharing a unified state representation and a common latent transition function. We further introduce a target Q-weighted behavior cloning loss in policy optimization to avoid out-of-distribution actions; it pushes the policy toward actions with high target-domain Q-values, rather than toward actions with high source-domain Q-values or toward uniformly imitating all actions in the offline dataset. We evaluate MOBODY on a wide range of MuJoCo and Adroit benchmarks, showing that it outperforms state-of-the-art off-dynamics RL baselines as well as policy learning built on other dynamics-learning baselines, with especially pronounced improvements in challenging settings where existing methods struggle.
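
As a rough illustration of the dynamics-learning idea described in the abstract, the sketch below pairs a shared state encoder and a common latent transition function with per-domain action encoders. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names, network sizes, and mean-squared prediction loss are assumptions.

```python
# Hypothetical sketch of MOBODY-style shared-latent dynamics learning (assumed design).
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class SharedLatentDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=64):
        super().__init__()
        self.state_enc = mlp(state_dim, latent_dim)          # shared state representation
        self.action_enc = nn.ModuleDict({                    # separate action encoder per domain
            "source": mlp(action_dim, latent_dim),
            "target": mlp(action_dim, latent_dim),
        })
        self.transition = mlp(2 * latent_dim, latent_dim)    # common latent transition function
        self.decoder = mlp(latent_dim, state_dim)            # decode predicted next state

    def forward(self, state, action, domain):
        z_s = self.state_enc(state)
        z_a = self.action_enc[domain](action)                # domain-specific action embedding
        z_next = self.transition(torch.cat([z_s, z_a], dim=-1))
        return self.decoder(z_next)


def dynamics_loss(model, batch, domain):
    # One-step next-state prediction error on a batch from the given domain (illustrative).
    pred_next = model(batch["state"], batch["action"], domain)
    return ((pred_next - batch["next_state"]) ** 2).mean()
```

The shared state encoder and transition function let both domains' data shape the latent dynamics, while the per-domain action encoders absorb the fact that the same next state is reached by different actions in each domain.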
Authors (4)
Yihong Guo
Yu Yang
Pan Xu
Anqi Liu
Submitted: June 10, 2025
arXiv Category: cs.LG

Key Contributions

MOBODY addresses the limitations of existing off-dynamics offline RL methods that penalize rewards or discard data in high-dynamics-shift regions. By optimizing the policy with transitions from a learned target dynamics model, MOBODY enables exploration of the target domain, leading to better performance when the dynamics shift is significant or the optimal trajectories lie outside low-shift regions. A sketch of the accompanying policy-optimization term follows.
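
The abstract also mentions a target Q-weighted behavior cloning loss for policy optimization. The sketch below shows one plausible form; the exponentiated-Q weighting, the deterministic policy, and the function names are illustrative assumptions, and the paper's exact objective may differ.

```python
# Hypothetical sketch of a target-Q-weighted behavior cloning term (assumed form).
import torch


def q_weighted_bc_loss(policy, target_q, states, dataset_actions, temperature=1.0):
    # Weight each dataset action by its target-domain Q-value (softmax over the batch),
    # so the policy imitates actions that look good under the target dynamics.
    with torch.no_grad():
        q_vals = target_q(states, dataset_actions)            # target-domain critic values
        weights = torch.softmax(q_vals.squeeze(-1) / temperature, dim=0)
    policy_actions = policy(states)                           # deterministic policy output (assumption)
    bc_err = ((policy_actions - dataset_actions) ** 2).mean(dim=-1)
    return (weights * bc_err).sum()
```

The intent is to keep the policy close to dataset actions (avoiding out-of-distribution actions) while biasing imitation toward actions with high target-domain Q-values rather than high source-domain Q-values or uniform imitation.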

Business Value

Enables more robust and efficient training of RL agents in real-world scenarios where data collection is expensive or dynamics change over time, leading to better performance in applications like robotics and autonomous driving.