Abstract

Robustness of deep neural networks to input noise remains a critical
challenge, as naive noise injection often degrades accuracy on clean
(uncorrupted) data. We propose a novel training framework that addresses this
trade-off through two complementary objectives. First, we introduce a loss
function applied at the penultimate layer that explicitly enforces intra-class
compactness and increases the margin to analytically defined decision
boundaries. This enhances feature discriminativeness and class separability for
clean data. Second, we propose a class-wise feature alignment mechanism that
brings noisy data clusters closer to their clean counterparts. Furthermore, we
provide a theoretical analysis demonstrating that improving feature stability
under additive Gaussian noise implicitly reduces the curvature of the softmax
loss landscape in input space, as measured by Hessian eigenvalues. This
naturally enhances robustness without explicit curvature penalties. Conversely,
we also show theoretically that lower curvature leads to more robust models. We
validate the effectiveness of our method on standard benchmarks and our custom
dataset. Our approach significantly improves model robustness to various
perturbations while maintaining high accuracy on clean data, advancing the
understanding and practice of noise-robust deep learning.
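To make the two training objectives more concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a hypothetical model that exposes its penultimate-layer features (via a `return_features` flag); the names `training_step`, `compactness_loss`, `alignment_loss`, `lambda_compact`, `lambda_align`, and `noise_std` are placeholders, and the margin term against analytically defined decision boundaries is omitted because the abstract does not specify its form.

```python
# Illustrative sketch of (1) intra-class compactness on clean penultimate-layer
# features and (2) class-wise alignment of noisy features to clean centroids.
# All function and parameter names are hypothetical.
import torch
import torch.nn.functional as F


def compactness_loss(features, labels, centroids):
    """Pull each clean feature toward its class centroid (intra-class compactness)."""
    return ((features - centroids[labels]) ** 2).sum(dim=1).mean()


def alignment_loss(noisy_features, labels, clean_centroids):
    """Pull features of noisy inputs toward the corresponding clean class centroids."""
    return ((noisy_features - clean_centroids[labels]) ** 2).sum(dim=1).mean()


def training_step(model, x_clean, labels, noise_std=0.1,
                  lambda_compact=0.1, lambda_align=0.1):
    # Clean forward pass: penultimate-layer features plus logits.
    feats_clean, logits_clean = model(x_clean, return_features=True)

    # Noisy forward pass under additive Gaussian input noise.
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    feats_noisy, _ = model(x_noisy, return_features=True)

    # Per-batch class centroids of clean features (detached so the alignment
    # term moves noisy features toward clean ones, not the other way around).
    num_classes = logits_clean.size(1)
    centroids = torch.stack([
        feats_clean[labels == c].mean(dim=0) if (labels == c).any()
        else torch.zeros_like(feats_clean[0])
        for c in range(num_classes)
    ]).detach()

    loss = (F.cross_entropy(logits_clean, labels)
            + lambda_compact * compactness_loss(feats_clean, labels, centroids)
            + lambda_align * alignment_loss(feats_noisy, labels, centroids))
    return loss
```

In this sketch the cross-entropy term is computed only on clean inputs, so clean accuracy is not traded away by naive noise injection; the alignment term is what encourages feature stability under the Gaussian perturbations described in the abstract.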