Abstract
The frontier of visual reasoning is shifting toward models like OpenAI o3, which can intelligently create and operate tools to transform images for problem-solving, also known as thinking-with-images in chain-of-thought. Yet existing benchmarks fail to fully capture this advanced capability. Even Visual Search, the most common benchmark for current thinking-with-images methods, tests only basic operations such as localization and cropping, offering little insight into more complex, dynamic, and tool-dependent reasoning. We introduce TIR-Bench, a comprehensive benchmark for evaluating agentic thinking-with-images across 13 diverse tasks, each requiring novel tool use for image processing and manipulation in chain-of-thought. We evaluate 22 multimodal large language models (MLLMs), from leading open-source and proprietary models to those with explicit tool-use augmentation. Results show that TIR-Bench is universally challenging, and that strong performance requires genuine thinking-with-images capabilities. Finally, we present a pilot study comparing direct versus agentic fine-tuning.
Authors (9)
Ming Li
Jike Zhong
Shitian Zhao
Haoquan Zhang
Shaoheng Lin
Yuxiang Lai
+3 more
Submitted
November 3, 2025
Key Contributions
Introduces TIR-Bench, a comprehensive benchmark for evaluating agentic thinking-with-images reasoning across 13 diverse tasks, each requiring novel tool use for image processing and manipulation. The evaluation of 22 MLLMs shows that the benchmark is universally challenging and that strong performance demands capabilities beyond basic operations such as localization and cropping. An illustrative sketch of what such an agentic loop might look like follows below.
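For readers unfamiliar with the setup, the sketch below shows what a minimal agentic thinking-with-images loop might look like: the model alternates between text reasoning and image-tool calls (for example, crop or rotate), and each transformed image is fed back into the chain of thought. This is a hypothetical illustration under assumed interfaces, not the TIR-Bench harness or any evaluated model's actual API; model.step, ToolCall, and the tool set are assumptions for exposition only.

from dataclasses import dataclass
from PIL import Image

@dataclass
class ToolCall:
    name: str   # e.g. "crop" or "rotate" (assumed tool names for illustration)
    args: dict  # tool-specific arguments proposed by the model

def run_tool(image: Image.Image, call: ToolCall) -> Image.Image:
    """Apply one image-manipulation tool requested by the model."""
    if call.name == "crop":
        return image.crop(call.args["box"])  # box = (left, upper, right, lower)
    if call.name == "rotate":
        return image.rotate(call.args["degrees"], expand=True)
    raise ValueError(f"unknown tool: {call.name}")

def agentic_answer(model, question: str, image: Image.Image, max_steps: int = 5) -> str:
    """Interleave reasoning with tool calls until the model commits to an answer.

    model.step is an assumed interface (not from the paper) that returns either
    a ToolCall to execute or a final answer string.
    """
    context = [("question", question), ("image", image)]
    for _ in range(max_steps):
        action = model.step(context)       # hypothetical model API
        if isinstance(action, str):        # model produced its final answer
            return action
        image = run_tool(image, action)    # transform the current image view
        context.append(("image", image))   # feed the new view back into the reasoning chain
    return "no answer within the step budget"

The point of the loop is that the image itself becomes part of the reasoning state: every tool call yields a new view that conditions the next step, which is the behavior TIR-Bench is designed to stress-test.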
Business Value
Provides a standardized and challenging evaluation framework for multimodal AI systems, enabling better development and comparison of models for applications requiring sophisticated visual understanding and manipulation, such as automated image editing or robotic vision.