Abstract
By pretraining to synthesize coherent images from perturbed inputs,
generative models inherently learn to understand object boundaries and scene
compositions. How can we repurpose these generative representations for
general-purpose perceptual organization? We finetune Stable Diffusion and MAE
(encoder+decoder) for category-agnostic instance segmentation using our
instance coloring loss exclusively on a narrow set of object types (indoor
furnishings and cars). Surprisingly, our models exhibit strong zero-shot
generalization, accurately segmenting objects of types and styles unseen in
finetuning (and, in many cases, unseen even in MAE's ImageNet-1K pretraining). Our
best-performing models closely approach the heavily supervised SAM when
evaluated on unseen object types and styles, and outperform it when segmenting
fine structures and ambiguous boundaries. In contrast, existing promptable
segmentation architectures and discriminatively pretrained models fail to
generalize. This suggests that generative models learn an inherent grouping
mechanism that transfers across categories and domains, even without
internet-scale pretraining. Code, pretrained models, and demos are available on
our website.
Authors
Om Khangaonkar
Hamed Pirsiavash
Key Contributions
Demonstrates that generative models like Stable Diffusion and MAE, when fine-tuned with an instance coloring loss, can achieve strong zero-shot generalization for instance segmentation, even for object types unseen during fine-tuning. This approach significantly outperforms existing promptable segmentation models and approaches supervised methods like SAM on unseen categories.
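Note: the abstract does not spell out how the instance coloring loss is formulated. Below is a minimal sketch under the assumption that "coloring" means pulling per-pixel predictions of the same instance toward a shared color while pushing different instances' colors apart (a pull-push objective in the spirit of discriminative / associative-embedding losses). The function name, margins, and exact formulation are illustrative assumptions, not the paper's implementation.

```python
import torch

def instance_coloring_loss(pred, inst_mask, pull_margin=0.5, push_margin=1.5):
    """Illustrative pull-push "coloring" loss (not the paper's exact objective).

    pred:      (C, H, W) per-pixel color/embedding predicted by the model.
    inst_mask: (H, W) integer instance ids; 0 = background / ignore.
    Pulls pixels of each instance toward that instance's mean color and
    pushes the mean colors of different instances apart.
    """
    ids = inst_mask.unique()
    ids = ids[ids != 0]                        # drop background
    means, pull_terms = [], []

    for i in ids:
        m = inst_mask == i                     # (H, W) boolean mask for instance i
        pix = pred[:, m]                       # (C, N_i) predicted colors of its pixels
        mu = pix.mean(dim=1, keepdim=True)     # (C, 1) mean color of the instance
        means.append(mu.squeeze(1))
        # pull term: penalize pixels farther than pull_margin from their instance mean
        d = (pix - mu).norm(dim=0)
        pull_terms.append(torch.clamp(d - pull_margin, min=0).pow(2).mean())

    if not means:
        return pred.sum() * 0.0                # no instances: zero loss, keeps the graph

    pull = torch.stack(pull_terms).mean()
    mu = torch.stack(means)                    # (K, C) one mean color per instance
    if len(mu) > 1:
        # push term: penalize pairs of instance means closer than push_margin
        dist = torch.cdist(mu, mu)             # (K, K) pairwise distances
        off_diag = ~torch.eye(len(mu), dtype=torch.bool, device=mu.device)
        push = torch.clamp(push_margin - dist[off_diag], min=0).pow(2).mean()
    else:
        push = pull * 0.0
    return pull + push
```

Under this kind of scheme, instances would be recovered at inference time by clustering the predicted per-pixel colors, which is category-agnostic by construction; whether the paper uses this exact recipe is, again, an assumption made only for illustration.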
Business Value
Enables more flexible and adaptable vision systems that can segment novel objects without extensive retraining, reducing development costs and time for applications requiring object recognition.