This paper provides a novel matrix-theoretic explanation for the adversarial fragility of deep neural networks, showing that their adversarial robustness (the size of the smallest input perturbation that flips a classification) can degrade as the input dimension $d$ increases, and may be only a $1/\sqrt{d}$ fraction of the robustness of an optimal classifier. This theoretical insight aligns with and strengthens earlier feature-compression-based explanations, offering a deeper understanding of why neural networks are susceptible to adversarial perturbations.
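The $1/\sqrt{d}$ gap can be previewed with a minimal toy sketch (an illustration of the feature-compression intuition, not the paper's actual construction): two classes separated along one coordinate of $\mathbb{R}^d$, compared against a hypothetical classifier that projects the input onto a single random direction `w`.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's construction):
# classes +1 and -1 sit at +x and -x, separated along the first coordinate.
# The optimal classifier thresholds that coordinate and has margin 1; a
# classifier that compresses the input onto one random unit direction w
# has margin |w[0]|, which concentrates around 1/sqrt(d) as d grows.
rng = np.random.default_rng(0)

for d in [10, 100, 1000, 10000]:
    x = np.zeros(d)
    x[0] = 1.0                      # class-(+1) example; class-(-1) is at -x

    optimal_margin = 1.0            # distance from x to the boundary x[0] = 0

    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)          # random unit direction
    compressed_margin = abs(w @ x)  # distance from x to the hyperplane w @ x = 0

    print(f"d={d:6d}  optimal={optimal_margin:.3f}  "
          f"random-direction={compressed_margin:.4f}  "
          f"1/sqrt(d)={1/np.sqrt(d):.4f}")
```

Running the sketch shows the random-direction margin tracking $1/\sqrt{d}$ while the optimal margin stays constant, mirroring the dimension-dependent robustness gap the paper establishes formally.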
Improving the robustness of AI systems against adversarial attacks is therefore critical for deploying them in security-sensitive applications such as autonomous driving, medical diagnosis, and financial fraud detection, where trust and reliability depend on predictable behavior under perturbation.