Abstract
Spatial reasoning is a key aspect of cognitive psychology and remains a
bottleneck for current vision-language models (VLMs). While extensive research
has aimed to evaluate or improve VLMs' understanding of basic spatial
relations, such as distinguishing left from right, near from far, and object
counting, these tasks cover only the most elementary layer of spatial reasoning
and are largely approaching saturation in the latest reasoning models. In this
work, we introduce OmniSpatial, a comprehensive and challenging benchmark for
spatial reasoning, grounded in cognitive psychology. OmniSpatial covers four
major categories: dynamic reasoning, complex spatial logic, spatial
interaction, and perspective-taking, with 50 fine-grained subcategories.
Through careful manual annotation, we construct over 8.4K question-answer
pairs. Extensive experiments show that both open- and closed-source VLMs
exhibit significant limitations in comprehensive spatial reasoning. We also
explore two strategies, PointGraph (explicit scene graph cues) and SpatialCoT
(novel-view chain-of-thought), to bolster spatial reasoning.
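To make the first strategy concrete, here is a minimal, hypothetical sketch of what an "explicit scene graph cue" in the spirit of PointGraph might look like: pairwise spatial relations are derived from 2D object positions and serialized as text prepended to the question. The object names, the distance threshold, and the prompt template are illustrative assumptions, not the paper's actual PointGraph implementation.

```python
from itertools import combinations

# Toy scene: object name -> (x, y) image-plane center, normalized to [0, 1].
# In image coordinates, y increases downward. These objects are hypothetical.
scene = {
    "mug": (0.20, 0.55),
    "laptop": (0.60, 0.50),
    "lamp": (0.85, 0.20),
}

def pairwise_relations(objects, near_thresh=0.3):
    """Derive coarse spatial relations (left/right, above/below, near)
    from 2D centers; each relation becomes a human-readable string."""
    relations = []
    for (a, (ax, ay)), (b, (bx, by)) in combinations(objects.items(), 2):
        relations.append(f"{a} is {'left' if ax < bx else 'right'} of {b}")
        # Smaller y means higher up in image coordinates.
        relations.append(f"{a} is {'above' if ay < by else 'below'} {b}")
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < near_thresh:
            relations.append(f"{a} is near {b}")
    return relations

def with_scene_graph_cue(question, objects):
    """Prepend serialized scene-graph relations so the VLM can condition
    on explicit spatial structure rather than raw pixels alone."""
    cue = "Scene graph: " + "; ".join(pairwise_relations(objects)) + "."
    return f"{cue}\n{question}"

print(with_scene_graph_cue("Which object is farthest to the right?", scene))
```

The design intuition is that making spatial structure explicit in the prompt relieves the model of re-deriving it from pixels; SpatialCoT instead asks the model to reason through imagined novel viewpoints step by step.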