
Inverse Q-Learning Done Right: Offline Imitation Learning in $Q^\pi$-Realizable MDPs

Abstract

We study the problem of offline imitation learning in Markov decision processes (MDPs), where the goal is to learn a well-performing policy given a dataset of state-action pairs generated by an expert policy. Complementing a recent line of work on this topic that assumes the expert belongs to a tractable class of known policies, we approach this problem from a new angle and leverage a different type of structural assumption about the environment. Specifically, for the class of linear $Q^\pi$-realizable MDPs, we introduce a new algorithm called saddle-point offline imitation learning (SPOIL), which is guaranteed to match the performance of any expert up to an additive error $\varepsilon$ with access to $\mathcal{O}(\varepsilon^{-2})$ samples. Moreover, we extend this result to possibly non-linear $Q^\pi$-realizable MDPs at the cost of a worse sample complexity of order $\mathcal{O}(\varepsilon^{-4})$. Finally, our analysis suggests a new loss function for training critic networks from expert data in deep imitation learning. Empirical evaluations on standard benchmarks demonstrate that the neural-network implementation of SPOIL is superior to behavior cloning and competitive with state-of-the-art algorithms.
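For context, the linear $Q^\pi$-realizability assumption named in the title has a standard meaning in the RL-theory literature (sketched below from that standard usage, not quoted from this page): every policy's action-value function is exactly linear in a known feature map.

```latex
% Standard definition of linear Q^pi-realizability (background, not quoted
% from the paper): for a known d-dimensional feature map, every policy's
% action-value function is linear in the features.
\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d \ \text{known}, \qquad
\forall \pi \ \exists\, \theta_\pi \in \mathbb{R}^d : \quad
Q^\pi(s, a) = \langle \phi(s, a),\, \theta_\pi \rangle
\quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}.
```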
Authors (3)
Antoine Moulin
Gergely Neu
Luca Viano
Submitted
May 26, 2025
arXiv Category
cs.LG

Key Contributions

Introduces SPOIL (saddle-point offline imitation learning), a new algorithm for offline imitation learning in linear $Q^\pi$-realizable MDPs, guaranteed to match any expert's performance up to an additive error $\varepsilon$ from $\mathcal{O}(\varepsilon^{-2})$ state-action samples. The guarantee extends to possibly non-linear $Q^\pi$-realizable MDPs at a worse $\mathcal{O}(\varepsilon^{-4})$ sample complexity, and the analysis suggests a new loss function for training critic networks from expert data, as illustrated in the sketch below.
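As a rough illustration of what a saddle-point imitation update can look like in practice, here is a minimal, hypothetical PyTorch sketch. It is not the paper's SPOIL loss: the objective (a critic widens the expert-vs-policy value gap on expert states while the policy maximizes its own value under the critic), the helper names, network shapes, and dimensions are all illustrative assumptions.

```python
# Hypothetical sketch of a generic saddle-point imitation update.
# NOT the paper's SPOIL loss: the objective, architectures, and
# dimensions below are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4  # toy sizes (assumed)

critic = nn.Sequential(nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def q_values(states):
    """Critic value Q(s, a) for every discrete action; shape (B, N_ACTIONS)."""
    cols = []
    for a in range(N_ACTIONS):
        a_onehot = F.one_hot(torch.full((states.shape[0],), a, dtype=torch.long),
                             N_ACTIONS).float()
        cols.append(critic(torch.cat([states, a_onehot], dim=-1)).squeeze(-1))
    return torch.stack(cols, dim=-1)

def saddle_step(expert_states, expert_actions):
    """One alternating ascent/descent step on a saddle-point objective:
    max_critic min_policy  E[Q(s, a_expert)] - E_{a ~ policy}[Q(s, a)]."""
    # Critic ascent: widen the expert-vs-policy value gap (policy frozen).
    probs = torch.softmax(policy(expert_states), dim=-1).detach()
    q_all = q_values(expert_states)
    q_expert = q_all.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
    gap = q_expert.mean() - (probs * q_all).sum(-1).mean()
    critic_opt.zero_grad()
    (-gap).backward()
    critic_opt.step()

    # Policy descent: maximize the policy's own value under the frozen critic.
    probs = torch.softmax(policy(expert_states), dim=-1)
    with torch.no_grad():
        q_all = q_values(expert_states)
    policy_loss = -(probs * q_all).sum(-1).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()

# Toy usage with synthetic "expert" data.
states = torch.randn(32, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (32,))
saddle_step(states, actions)
```

One appealing property of this shape of objective is that both expectations are taken over expert states only, so no environment interaction is needed; the actual SPOIL objective and its guarantees are given in the paper.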

Business Value

Enables learning complex behaviors from pre-recorded data without requiring online interaction, which is crucial for safety-critical applications like autonomous driving or industrial robotics where online exploration is costly or dangerous.