PhraseStereo: The First Open-Vocabulary Stereo Image Segmentation Dataset

computer-vision › scene-understanding
📄 Abstract

Understanding how natural language phrases correspond to specific regions in images is a key challenge in multimodal semantic segmentation. Recent advances in phrase grounding are largely limited to single-view images, neglecting the rich geometric cues available in stereo vision. To this end, we introduce PhraseStereo, the first dataset that brings phrase-region segmentation to stereo image pairs. PhraseStereo builds upon the PhraseCut dataset by leveraging GenStereo to generate accurate right-view images from existing single-view data, enabling the extension of phrase grounding into the stereo domain. This new setting introduces unique challenges and opportunities for multimodal learning, particularly in leveraging depth cues for more precise and context-aware grounding. By providing stereo image pairs with aligned segmentation masks and phrase annotations, PhraseStereo lays the foundation for future research at the intersection of language, vision, and 3D perception, encouraging the development of models that can reason jointly over semantics and geometry. The PhraseStereo dataset will be released online upon acceptance of this work.
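
The abstract describes samples consisting of a stereo pair (the original PhraseCut left view plus a GenStereo-generated right view), a natural-language phrase, and an aligned segmentation mask. As a rough sketch of that structure only — the field names, file layout, and `load_sample` helper below are hypothetical, since the dataset and its loading API have not yet been released — one sample could be represented like this:

```python
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class PhraseStereoSample:
    """One hypothetical PhraseStereo example: a stereo pair plus a phrase-grounded mask.

    Field names are illustrative; the released dataset may use a different layout.
    """
    left_image: np.ndarray   # H x W x 3, original PhraseCut image (left view)
    right_image: np.ndarray  # H x W x 3, GenStereo-generated right view
    phrase: str              # natural-language phrase describing the target region
    mask: np.ndarray         # H x W boolean segmentation mask aligned to the left view


def load_sample(root: Path, sample_id: str, phrase: str) -> PhraseStereoSample:
    """Load one sample, assuming a hypothetical <root>/{left,right,masks}/<id>.png layout."""
    left = np.asarray(Image.open(root / "left" / f"{sample_id}.png").convert("RGB"))
    right = np.asarray(Image.open(root / "right" / f"{sample_id}.png").convert("RGB"))
    mask = np.asarray(Image.open(root / "masks" / f"{sample_id}.png")) > 0
    return PhraseStereoSample(left_image=left, right_image=right, phrase=phrase, mask=mask)
```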

Key Contributions

Introduces PhraseStereo, the first dataset for phrase-region segmentation in stereo image pairs, extending phrase grounding to leverage rich geometric cues from stereo vision. This dataset enables research into multimodal learning that utilizes depth information for more precise and context-aware grounding.
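
One concrete way a grounding model could obtain the depth cue mentioned above from a PhraseStereo pair is classical disparity estimation, e.g. semi-global block matching in OpenCV. This is a generic illustration of deriving geometry from a rectified stereo pair, not the paper's method:

```python
import cv2
import numpy as np


def disparity_from_pair(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    """Estimate a dense disparity map (a proxy for depth) from a rectified stereo pair."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # must be divisible by 16
        blockSize=5,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
```

The resulting disparity map could then be fed to a segmentation model as an extra input channel alongside the RGB views and the phrase embedding.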

Business Value

Enables more sophisticated visual understanding systems for applications like autonomous driving, robotics, and augmented reality by allowing them to precisely identify objects and regions described by natural language in 3D environments.