A Single Set of Adversarial Clothes Breaks Multiple Defense Methods in the Physical World

📄 Abstract

In recent years, adversarial attacks against deep-learning-based object detectors in the physical world have attracted much attention. To defend against these attacks, researchers have proposed various defense methods against adversarial patches, a typical form of physically realizable attack. However, our experiments showed that simply enlarging the patch size could make these defense methods fail. Motivated by this, we evaluated various defense methods against adversarial clothes, which cover a large portion of the human body. Adversarial clothes are a good test case for defenses against patch-based attacks because they are not only large but also look more natural on humans than a large patch. Experiments show that all the defense methods performed poorly against adversarial clothes in both the digital and physical worlds. In addition, we crafted a single set of clothes that broke multiple defense methods on Faster R-CNN. The set achieved an Attack Success Rate (ASR) of 96.06% against the undefended detector and ASRs above 64.84% against nine defended models in the physical world, revealing a common vulnerability of existing defense methods to adversarial clothes. Code is available at: https://github.com/weiz0823/adv-clothes-break-multiple-defenses.
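For context on the metric: ASR here is the fraction of test images in which the wearer is no longer detected above a confidence threshold. The sketch below computes it against a stock Faster R-CNN from torchvision; the detector weights, person class index, 0.5 score threshold, and dummy images are illustrative assumptions, not the paper's exact evaluation protocol (see the linked repository for that).

```python
import torch
import torchvision

COCO_PERSON_ID = 1  # "person" class in torchvision's COCO-trained detectors

def attack_success_rate(detector, images, score_thresh=0.5):
    """Fraction of images in which no person scores above score_thresh."""
    detector.eval()
    successes = 0
    with torch.no_grad():
        for img in images:  # each img: float tensor (3, H, W) in [0, 1]
            out = detector([img])[0]
            found = (out["labels"] == COCO_PERSON_ID) & (out["scores"] >= score_thresh)
            if found.sum() == 0:  # attack succeeded: wearer suppressed
                successes += 1
    return successes / len(images)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
dummy = [torch.rand(3, 480, 640) for _ in range(4)]  # stand-ins for attack photos
print(f"ASR: {attack_success_rate(model, dummy):.2%}")
```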
Authors (5): Wei Zhang, Zhanhao Hu, Xiao Li, Xiaopei Zhu, Xiaolin Hu
Submitted: October 20, 2025
arXiv Category: cs.CV

Key Contributions

This paper demonstrates that a single set of adversarial clothes can break multiple existing defenses against adversarial patch attacks in both the digital and physical worlds. The results highlight the vulnerability of current defenses to large-coverage, natural-looking adversarial examples: simply enlarging the patch renders many of them ineffective, posing a significant threat to the robustness of object detection systems.
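To make the patch-size observation concrete, here is a hedged sketch that pastes the same patch at increasing scales and checks whether a stock Faster R-CNN still finds a person. The random `patch` and `images` tensors are placeholders; a real evaluation would use the paper's optimized patterns, photos of wearers, and the defended models from its repository.

```python
import torch
import torchvision

def apply_patch(img, patch, scale):
    """Center-paste `patch`, resized to `scale` x image height, onto `img`."""
    _, H, W = img.shape
    side = max(1, int(H * scale))
    p = torch.nn.functional.interpolate(
        patch.unsqueeze(0), size=(side, side),
        mode="bilinear", align_corners=False,
    ).squeeze(0)
    top, left = (H - side) // 2, (W - side) // 2
    out = img.clone()
    out[:, top:top + side, left:left + side] = p
    return out

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
patch = torch.rand(3, 128, 128)                       # stand-in for an optimized patch
images = [torch.rand(3, 480, 640) for _ in range(4)]  # stand-in test photos

for scale in (0.1, 0.3, 0.5, 0.7):  # patch side as a fraction of image height
    patched = [apply_patch(im, patch, scale) for im in images]
    with torch.no_grad():
        outs = model(patched)
    hits = sum(int(((o["labels"] == 1) & (o["scores"] >= 0.5)).any()) for o in outs)
    print(f"scale={scale:.1f}: person detected in {hits}/{len(images)} images")
```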

Business Value

Highlights critical security vulnerabilities in AI systems, particularly those used in safety-critical applications like autonomous driving and surveillance. Understanding these weaknesses is crucial for developing more secure and reliable AI.