
Towards Automated Semantic Interpretability in Reinforcement Learning via Vision-Language Models

Abstract

Semantic interpretability in Reinforcement Learning (RL) enables transparency and verifiability of decision-making. Achieving it requires (1) a feature space composed of human-understandable concepts and (2) a policy that is interpretable and verifiable. However, constructing such a feature space has traditionally relied on manual human specification, which often fails to generalize to unseen environments. Moreover, even when interpretable features are available, most RL algorithms employ black-box models as policies, hindering transparency. We introduce interpretable Tree-based Reinforcement learning via Automated Concept Extraction (iTRACE), an automated framework that leverages pre-trained vision-language models (VLMs) for semantic feature extraction and trains an interpretable tree-based policy via RL. Because running VLMs inside RL loops is impractical, we distill their outputs into a lightweight model. By using VLMs to automate tree-based reinforcement learning, iTRACE loosens the reliance on human annotation traditionally required by interpretable models, and it addresses key limitations of VLMs alone, such as their lack of grounding in action spaces and their inability to directly optimize policies. We evaluate iTRACE across three domains: Atari games, grid-world navigation, and driving. The results show that iTRACE outperforms other interpretable policy baselines and matches the performance of black-box policies on the same interpretable feature space.
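The paper itself provides no code here; as a rough illustration of the distillation step the abstract describes, the PyTorch sketch below regresses a small CNN onto cached VLM feature labels. Everything in it (the network shape, the 84x84 frame size, the feature count, and names like LightweightFeatureNet and distill_step) is assumed for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical distillation target: a small CNN that maps raw frames to the
# VLM-derived semantic features (e.g., object coordinates), so the expensive
# VLM never has to run inside the RL loop. Shapes assume 84x84 RGB frames.
class LightweightFeatureNet(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 128), nn.ReLU(),  # 9x9 spatial map for 84x84 input
            nn.Linear(128, num_features),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.encoder(frames)

def distill_step(model, optimizer, frames, vlm_features):
    """One supervised step: regress the small net onto cached VLM outputs."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(frames), vlm_features)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one distillation step on a batch of cached (frame, VLM-feature) pairs.
model = LightweightFeatureNet(num_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(32, 3, 84, 84)   # batch of RGB frames
vlm_feats = torch.rand(32, 4)        # features labeled offline by the VLM
print(distill_step(model, opt, frames, vlm_feats))
```

Once trained, a small network like this stands in for the VLM at every environment step, which is what makes running the RL loop tractable.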
Authors (6)
Zhaoxin Li
Zhang Xi-Jia
Batuhan Altundas
Letian Chen
Rohan Paleja
Matthew Gombolay
Submitted
March 20, 2025
arXiv Category
cs.AI

Key Contributions

Introduces iTRACE, an automated framework that uses VLMs for semantic feature extraction and trains interpretable tree-based RL policies. It addresses the impracticality of running VLMs directly in RL loops by distilling their outputs into a lightweight model, enabling transparent and verifiable RL decision-making.
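For the tree-policy side, the sketch below shows the kind of human-auditable policy that results from operating on semantic features. It is a generic scikit-learn illustration with toy data and invented feature names (ball_x, paddle_x, ...), fit with supervised learning for brevity; iTRACE instead trains its tree via RL.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy (feature, action) pairs standing in for rollouts over distilled
# semantic features; the rule "move right iff the ball is right of the
# paddle" is invented purely to make the fitted tree readable.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))       # [ball_x, ball_y, paddle_x, paddle_vel]
y = (X[:, 0] > X[:, 2]).astype(int)   # 1 = move right, 0 = move left

policy = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(policy, feature_names=["ball_x", "ball_y", "paddle_x", "paddle_vel"]))
```

The printed if-then rules over named features are precisely what makes such a policy inspectable and verifiable.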

Business Value

Enhances trust and safety in AI systems, particularly in safety-critical applications like autonomous driving and robotics, by making their decision-making processes understandable and verifiable.