
Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence

📄 Abstract

Most video reasoning models only generate textual reasoning traces without indicating when and where key evidence appears. Recent models such as OpenAI-o3 have sparked wide interest in evidence-centered reasoning for images, yet extending this ability to videos is more challenging, as it requires joint temporal tracking and spatial localization across dynamic scenes. We introduce Open-o3 Video, a non-agent framework that integrates explicit spatio-temporal evidence into video reasoning, with carefully collected training data and training strategies designed to address these challenges. The model highlights key timestamps, objects, and bounding boxes alongside its answers, allowing reasoning to be grounded in concrete visual observations. To enable this functionality, we first curate two high-quality datasets, STGR-CoT-30k for SFT and STGR-RL-36k for RL, with carefully constructed temporal and spatial annotations, since most existing datasets offer either temporal spans for videos or spatial boxes on images, lacking unified spatio-temporal supervision and reasoning traces. We then adopt a cold-start reinforcement learning strategy with multiple specially designed rewards that jointly encourage answer accuracy, temporal alignment, and spatial precision. On the V-STAR benchmark, Open-o3 Video achieves state-of-the-art performance, raising mAM by 14.4% and mLGM by 24.2% over the Qwen2.5-VL baseline. Consistent improvements are also observed on a broad range of video understanding benchmarks, including VideoMME, WorldSense, VideoMMMU, and TVGBench. Beyond accuracy, the reasoning traces produced by Open-o3 Video also provide valuable signals for test-time scaling, enabling confidence-aware verification and improving answer reliability.
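The abstract describes rewards that jointly encourage answer accuracy, temporal alignment, and spatial precision. The paper's exact reward formulation is not given here, but a minimal illustrative sketch might combine a correctness term with temporal IoU over predicted time spans and spatial IoU over predicted bounding boxes; the function names, weights, and weighted-sum form below are assumptions for illustration, not the authors' implementation.

```python
def temporal_iou(pred_span, gt_span):
    """IoU between two time spans (start, end) in seconds."""
    inter = max(0.0, min(pred_span[1], gt_span[1]) - max(pred_span[0], gt_span[0]))
    union = (pred_span[1] - pred_span[0]) + (gt_span[1] - gt_span[0]) - inter
    return inter / union if union > 0 else 0.0

def box_iou(a, b):
    """IoU between two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def grounded_reward(answer_correct, pred_span, gt_span, pred_box, gt_box,
                    w_ans=1.0, w_time=0.5, w_space=0.5):
    """Hypothetical weighted sum of answer, temporal, and spatial terms."""
    return (w_ans * float(answer_correct)
            + w_time * temporal_iou(pred_span, gt_span)
            + w_space * box_iou(pred_box, gt_box))

# A span overlapping half of the ground truth scores partial temporal credit.
print(round(temporal_iou((2.0, 6.0), (4.0, 8.0)), 3))
```

In practice a grounded-reasoning reward would also need to match each predicted box to the correct timestamp before scoring; the sketch above scores a single (span, box) pair for clarity.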
Authors (11)
Jiahao Meng
Xiangtai Li
Haochen Wang
Yue Tan
Tao Zhang
Lingdong Kong
+5 more
Submitted
October 23, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Introduces Open-o3 Video, a framework for grounded video reasoning that integrates explicit spatio-temporal evidence (timestamps, objects, bounding boxes) into reasoning traces. It addresses challenges in temporal tracking and spatial localization and includes curated datasets for supervised fine-tuning and reinforcement learning.

Business Value

Enables more transparent and verifiable AI systems for video analysis, improving trust and utility in applications like content moderation, security, and autonomous systems.