📄 Abstract
We propose LangHOPS, the first Multimodal Large Language Model (MLLM)-based framework for open-vocabulary object-part instance segmentation. Given an image, LangHOPS jointly detects and segments hierarchical object and part instances from open-vocabulary candidate categories. Unlike prior approaches that rely on heuristic or learnable visual grouping, our approach grounds object-part hierarchies in language space. It integrates the MLLM into the object-part parsing pipeline to leverage its rich knowledge and reasoning capabilities and to link multi-granularity concepts within the hierarchies. We evaluate LangHOPS across multiple challenging scenarios, including in-domain and cross-dataset object-part instance segmentation and zero-shot semantic segmentation. LangHOPS achieves state-of-the-art results, surpassing previous methods by 5.5% Average Precision (AP) in-domain and 4.8% AP cross-dataset on the PartImageNet dataset, and by 2.5% mIoU on unseen object parts in ADE20K (zero-shot). Ablation studies further validate the effectiveness of the language-grounded hierarchy and the MLLM-driven part query refinement strategy. The code will be released.
Authors (6)
Yang Miao
Jan-Nico Zaech
Xi Wang
Fabien Despinoy
Danda Pani Paudel
Luc Van Gool
Submitted
October 29, 2025
Key Contributions
LangHOPS is the first MLLM-based framework for open-vocabulary object-part instance segmentation. It grounds object-part hierarchies in language space, leveraging the MLLM's rich knowledge and reasoning to link multi-granularity concepts. A minimal conceptual sketch of this idea follows.
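To make the language-grounded hierarchy concrete, here is a minimal, hypothetical Python sketch: an object category is expanded into a part hierarchy expressed purely as text, and each node then serves as an open-vocabulary query for detection and segmentation. All names here (`PartNode`, `parse_part_hierarchy`, `hierarchy_to_queries`) and the toy lookup table are illustrative assumptions, not the LangHOPS API (the code has not yet been released); a real system would prompt an MLLM for the expansion step.

```python
# Conceptual sketch of a language-grounded object-part hierarchy.
# NOTE: hypothetical illustration only, NOT the LangHOPS implementation.
from dataclasses import dataclass, field

@dataclass
class PartNode:
    """One node in the object-part hierarchy, grounded as a text label."""
    label: str                      # open-vocabulary category, e.g. "dog" or "dog head"
    children: list["PartNode"] = field(default_factory=list)

def parse_part_hierarchy(object_label: str) -> PartNode:
    """Stand-in for the MLLM step: expand an object category into a part
    hierarchy in language space. A real pipeline would prompt an MLLM;
    a fixed lookup is used here purely for illustration."""
    toy_knowledge = {
        "dog": ["dog head", "dog torso", "dog leg", "dog tail"],
    }
    root = PartNode(object_label)
    root.children = [PartNode(p) for p in toy_knowledge.get(object_label, [])]
    return root

def hierarchy_to_queries(node: PartNode) -> list[str]:
    """Flatten the hierarchy into text queries; in a LangHOPS-style system
    these would seed object/part instance queries for joint detection
    and segmentation."""
    return [node.label] + [q for c in node.children for q in hierarchy_to_queries(c)]

if __name__ == "__main__":
    tree = parse_part_hierarchy("dog")
    print(hierarchy_to_queries(tree))
    # ['dog', 'dog head', 'dog torso', 'dog leg', 'dog tail']
```

Because both objects and parts live in the same language space, multi-granularity concepts ("dog" and "dog head") stay linked by construction rather than by heuristic visual grouping.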
Business Value
Enables more sophisticated visual understanding systems that can interpret complex scenes and object compositions based on natural language descriptions, crucial for robotics and advanced AI applications.