Abstract
Out-of-distribution (OOD) detection is crucial for ensuring the reliability
of deep learning models in real-world applications. Existing methods typically
focus on feature representations or output-space analysis, often assuming a
distribution over these spaces or leveraging gradient norms with respect to
model parameters. However, these approaches struggle to distinguish near-OOD
samples and often require extensive hyperparameter tuning, limiting their
practicality. In this work, we propose GRadient-aware Out-Of-Distribution
detection (GROOD), a method that derives an OOD prototype from synthetic
samples and computes class prototypes directly from in-distribution (ID)
training data. By analyzing the gradients of a nearest-class-prototype loss
function with respect to an artificial OOD prototype, our approach achieves a
clear separation between ID and OOD samples. Experimental evaluations
demonstrate that gradients computed from the OOD prototype enhance the
distinction between ID and OOD data, surpassing established baselines in
robustness, particularly on ImageNet-1k. These findings highlight the potential
of gradient-based, prototype-driven approaches for advancing OOD detection in
deep neural networks.
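To make the core idea concrete, below is a minimal PyTorch sketch of a prototype-gradient OOD score in the spirit of the abstract. It assumes features come from a frozen backbone; the nearest-class-prototype loss is approximated here with cross-entropy over negative prototype distances, and the OOD prototype is taken as a given vector (the paper derives it from synthetic samples, a step not reproduced here). Names such as `class_prototypes` and `ood_score` are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F

def class_prototypes(features: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    # Class prototype = mean feature vector of the ID training samples
    # belonging to that class.
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def ood_score(feat: torch.Tensor, protos: torch.Tensor,
              ood_proto: torch.Tensor, temperature: float = 1.0) -> float:
    # Track gradients w.r.t. the artificial OOD prototype only.
    ood_proto = ood_proto.detach().clone().requires_grad_(True)
    all_protos = torch.cat([protos, ood_proto.unsqueeze(0)], dim=0)

    # Nearest-class-prototype logits: negative (temperature-scaled)
    # Euclidean distances to each prototype, OOD prototype included
    # as an extra "class".
    logits = -torch.cdist(feat.unsqueeze(0), all_protos) / temperature

    # Pull the sample toward its nearest ID prototype.
    target = logits[0, :-1].argmax().unsqueeze(0)
    loss = F.cross_entropy(logits, target)

    # Score = norm of the loss gradient w.r.t. the OOD prototype.
    # The sign convention (larger norm => more OOD-like) is an
    # assumption of this sketch, not a claim about the paper.
    grad, = torch.autograd.grad(loss, ood_proto)
    return grad.norm().item()
```

Usage follows the pipeline the abstract describes: compute `protos` once from ID training features, fix an `ood_proto` (e.g., from synthesized features), then score each test sample's feature vector with `ood_score` and threshold the result.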