
Training-Free Out-Of-Distribution Segmentation With Foundation Models

Abstract

Detecting unknown objects in semantic segmentation is crucial for safety-critical applications such as autonomous driving. Large vision foundation models, including DINOv2, InternImage, and CLIP, have advanced visual representation learning by providing rich features that generalize well across diverse tasks. While their strength in closed-set semantic tasks is established, their capability to detect out-of-distribution (OoD) regions in semantic segmentation remains underexplored. In this work, we investigate whether foundation models fine-tuned on segmentation datasets can inherently distinguish in-distribution (ID) from OoD regions without any outlier supervision. We propose a simple, training-free approach that utilizes features from the InternImage backbone and applies K-Means clustering alongside confidence thresholding on raw decoder logits to identify OoD clusters. Our method achieves 50.02 Average Precision on the RoadAnomaly benchmark and 48.77 on the ADE-OoD benchmark with InternImage-L, surpassing several supervised and unsupervised baselines. These results suggest a promising direction for generic OoD segmentation methods that require minimal assumptions or additional data.
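
To make the first stage of the pipeline concrete, the sketch below clusters dense backbone features with K-Means, as the abstract describes. It is a minimal illustration, not the paper's implementation: a random array stands in for the InternImage feature map, and the spatial size, channel width, and number of clusters are assumed values.

```python
# Minimal sketch of the clustering stage. A random array stands in for the
# InternImage backbone feature map; shapes and K are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

H, W, C = 64, 64, 256          # spatial size and channel dim of the feature map (assumed)
NUM_CLUSTERS = 8               # K for K-Means (assumed hyperparameter)

# Stand-in for the dense features produced by the segmentation backbone.
features = np.random.randn(H, W, C).astype(np.float32)

# Flatten the spatial grid to (H*W, C) and group pixels into K clusters.
flat = features.reshape(-1, C)
kmeans = KMeans(n_clusters=NUM_CLUSTERS, n_init=10, random_state=0).fit(flat)
cluster_map = kmeans.labels_.reshape(H, W)   # per-pixel cluster index
```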

Key Contributions

This paper proposes a simple, training-free approach for detecting out-of-distribution (OoD) regions in semantic segmentation using features from vision foundation models like InternImage. By applying K-Means clustering and confidence thresholding on raw decoder logits, the method can distinguish in-distribution from OoD regions without requiring any outlier supervision.
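
The self-contained sketch below illustrates the second stage in the same spirit: confidence thresholding on the raw decoder logits, then flagging whole clusters whose pixels are mostly low-confidence as OoD. The thresholds, the per-cluster aggregation rule, and the random stand-ins for the logits and cluster map are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of confidence thresholding + cluster-level OoD flagging.
# Random arrays stand in for the decoder logits and the K-Means cluster map;
# all thresholds and the aggregation rule are illustrative assumptions.
import numpy as np
from scipy.special import softmax

H, W = 64, 64                  # spatial size of the prediction (assumed)
NUM_CLASSES = 19               # e.g. Cityscapes-style closed set (assumed)
NUM_CLUSTERS = 8               # number of K-Means clusters (assumed)
CONF_THRESH = 0.5              # per-pixel confidence threshold (assumed)
FRACTION_THRESH = 0.5          # low-confidence fraction needed to mark a cluster OoD (assumed)

logits = np.random.randn(H, W, NUM_CLASSES).astype(np.float32)       # stand-in decoder logits
cluster_map = np.random.randint(0, NUM_CLUSTERS, size=(H, W))        # stand-in cluster indices

# Max softmax probability per pixel as the confidence score.
confidence = softmax(logits, axis=-1).max(axis=-1)

# A cluster is treated as OoD if most of its pixels fall below the confidence threshold.
ood_mask = np.zeros((H, W), dtype=bool)
for k in range(NUM_CLUSTERS):
    in_cluster = cluster_map == k
    low_conf_fraction = (confidence[in_cluster] < CONF_THRESH).mean()
    if low_conf_fraction > FRACTION_THRESH:
        ood_mask |= in_cluster   # mark every pixel of the cluster as out-of-distribution
```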

Business Value

Enhances the safety and reliability of autonomous driving systems and other safety-critical applications by enabling them to identify and react to unseen or unexpected objects and environments.