This paper proposes a cost-effective method for Out-of-Distribution (OOD) detection that uses local background features from in-distribution (ID) images as surrogate OOD features. By training the model to reduce the L2-norm of these background features, the method makes networks more robust to OOD inputs without requiring auxiliary OOD datasets.
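The following is a minimal sketch of the core idea as described in this summary: a standard classification loss plus a penalty on the L2-norm of features at background locations, which serve as surrogate OOD features. All names (`background_norm_loss`, `total_loss`, `lam`) and the way the background mask is obtained are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def background_norm_loss(feature_map, background_mask):
    """Penalize the L2-norm of features at background (surrogate OOD) locations.

    feature_map: (B, C, H, W) local features from the backbone.
    background_mask: (B, H, W) boolean mask, True where a location is treated
        as background. How this mask is derived (e.g., from activation maps or
        annotations) is an assumption here, not specified by this summary.
    """
    # L2-norm of the feature vector at each spatial location: (B, H, W).
    norms = feature_map.norm(p=2, dim=1)
    # Average the norm over background locations only.
    masked = norms * background_mask.float()
    denom = background_mask.float().sum().clamp(min=1.0)
    return masked.sum() / denom


def total_loss(logits, labels, feature_map, background_mask, lam=0.1):
    """Cross-entropy on ID labels plus the background-norm penalty.

    `lam` is a hypothetical weighting hyperparameter.
    """
    ce = F.cross_entropy(logits, labels)
    bg = background_norm_loss(feature_map, background_mask)
    return ce + lam * bg
```

In a setup like this, the feature norm at a given location could also serve as a confidence signal at test time, since OOD-like regions are pushed toward small norms during training; whether the paper scores inputs this way is not stated in this summary.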
Accurately detecting out-of-distribution inputs improves the reliability and safety of AI systems, which is essential for deploying them in safety-critical applications, reducing risk and building user trust.