
NAPPure: Adversarial Purification for Robust Image Classification under Non-Additive Perturbations

Abstract

Adversarial purification has achieved great success in combating adversarial image perturbations, which are usually assumed to be additive. However, non-additive adversarial perturbations such as blur, occlusion, and distortion are also common in the real world. Under such perturbations, existing adversarial purification methods are much less effective since they are designed to fit the additive nature. In this paper, we propose an extended adversarial purification framework named NAPPure, which can further handle non-additive perturbations. Specifically, we first establish the generation process of an adversarial image, and then disentangle the underlying clean image and perturbation parameters through likelihood maximization. Experiments on GTSRB and CIFAR-10 datasets show that NAPPure significantly boosts the robustness of image classification models against non-additive perturbations.
Authors (5): Junjie Nan, Jianing Li, Wei Chen, Mingkun Zhang, Xueqi Cheng
Submitted: October 15, 2025
arXiv Category: cs.CV

Key Contributions

Proposes NAPPure, an extended adversarial purification framework that effectively handles non-additive adversarial perturbations (e.g., blur, occlusion, distortion) in addition to additive ones. It achieves this by modeling the generation process of an adversarial image and then disentangling the underlying clean image and perturbation parameters through likelihood maximization, significantly boosting classifier robustness. A minimal sketch of this disentangling recipe follows below.
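
To make the disentangling idea concrete, here is a minimal sketch of likelihood-maximization purification for one non-additive perturbation family (Gaussian blur). All names here (`purify`, `gaussian_kernel`, `prior_loss`) are illustrative assumptions, not NAPPure's actual API; the paper's generative model and priors may differ. The key ingredients are the same: a differentiable forward model of the perturbation, and joint optimization over the clean image and the perturbation parameters.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: torch.Tensor, size: int = 11) -> torch.Tensor:
    """Depthwise 2D Gaussian blur kernel with a differentiable width sigma."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    k = torch.outer(g, g)
    return k.expand(3, 1, size, size)  # one kernel per RGB channel

def purify(y: torch.Tensor, prior_loss, steps: int = 200, lr: float = 0.05):
    """Jointly estimate a clean image x and a perturbation parameter sigma
    from an observed image y of shape (1, 3, H, W): maximize the likelihood
    that re-applying the estimated perturbation to x reproduces y,
    regularized by an image prior. Illustrative sketch only.
    """
    x = y.clone().requires_grad_(True)              # clean-image estimate
    log_sigma = torch.zeros(1, requires_grad=True)  # perturbation parameter
    opt = torch.optim.Adam([x, log_sigma], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        k = gaussian_kernel(log_sigma.exp())
        y_hat = F.conv2d(x, k, padding=k.shape[-1] // 2, groups=3)
        # Likelihood term: the re-perturbed estimate should match y.
        recon = F.mse_loss(y_hat, y)
        # Prior term: keeps x on the natural-image manifold (the paper
        # uses a generative model; any differentiable prior works here).
        loss = recon + prior_loss(x)
        loss.backward()
        opt.step()
    return x.detach(), log_sigma.exp().item()
```

Swapping the blur family for an occlusion or distortion operator leaves the recipe unchanged, as long as the perturbation can be re-applied differentiably inside the likelihood term; with an additive model (y_hat = x + delta) it reduces to standard adversarial purification.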

Business Value

Enhances the reliability and security of AI systems that rely on image recognition in real-world scenarios, such as autonomous driving or medical imaging, where perturbations are common.