
Structured Universal Adversarial Attacks on Object Detection for Video Sequences

Abstract

Video-based object detection plays a vital role in safety-critical applications. While deep learning-based object detectors have achieved impressive performance, they remain vulnerable to adversarial attacks, particularly those involving universal perturbations. In this work, we propose a minimally distorted universal adversarial attack tailored for video object detection, which leverages nuclear norm regularization to promote structured perturbations concentrated in the background. To optimize this formulation efficiently, we employ an adaptive, optimistic exponentiated gradient method that enhances both scalability and convergence. Our results demonstrate that the proposed attack outperforms both low-rank projected gradient descent and Frank-Wolfe-based attacks in effectiveness while maintaining high stealthiness. All code and data are publicly available at https://github.com/jsve96/AO-Exp-Attack.
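The nuclear norm regularization mentioned in the abstract penalizes the sum of a perturbation's singular values, which drives the perturbation toward low rank and hence toward spatially structured patterns. A minimal sketch of this idea, assuming a single-channel perturbation matrix and using singular value thresholding (the proximal operator of the nuclear norm); the function names and this plain formulation are illustrative, not the paper's actual attack:

```python
import numpy as np

def nuclear_norm(delta):
    """Sum of singular values; penalizing it promotes low-rank structure."""
    return np.linalg.svd(delta, compute_uv=False).sum()

def svt(delta, tau):
    """Singular value thresholding: prox of tau * nuclear norm.

    Shrinks every singular value by tau and clips at zero, so small
    singular directions of the perturbation are removed entirely.
    """
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# stand-in for one channel of a universal perturbation
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))
low_rank = svt(delta, tau=5.0)  # structured, lower-rank version of delta
```

In a regularized attack objective, a step like `svt` (or a penalty via `nuclear_norm`) is what concentrates the perturbation's energy in a few spatial directions rather than spreading it as unstructured noise.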
Authors (3)
Sven Jacob
Weijia Shao
Gjergji Kasneci
Submitted
October 16, 2025
arXiv Category
cs.CV

Key Contributions

This paper proposes a novel, minimally distorted universal adversarial attack specifically for video object detection. It leverages nuclear norm regularization to create structured, background-focused perturbations and employs an adaptive, optimistic exponentiated gradient method for efficient optimization, outperforming low-rank projected gradient descent and Frank-Wolfe-based attacks in effectiveness while maintaining high stealthiness.
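The exponentiated gradient method referred to above is a mirror-descent scheme whose multiplicative update keeps iterates on a probability simplex, and the "optimistic" variant adds a prediction of the next gradient. A hedged sketch of a plain optimistic exponentiated-gradient step; the paper's adaptive step sizes and exact formulation are not reproduced here, and `optimistic_eg_step` with its arguments is a hypothetical simplification:

```python
import numpy as np

def optimistic_eg_step(w, grad, prev_grad, eta=0.1):
    """One optimistic exponentiated-gradient step on the probability simplex.

    The hint 2*grad - prev_grad anticipates the next gradient (the optimistic
    correction); the multiplicative update plus renormalization keeps w a
    valid probability vector.
    """
    hint = 2.0 * grad - prev_grad
    w_new = w * np.exp(-eta * hint)
    return w_new / w_new.sum()

# toy usage: minimize a linear objective <c, w> over the simplex
c = np.array([3.0, 1.0, 2.0])       # constant gradient of the objective
w = np.full(3, 1.0 / 3.0)           # start at the uniform distribution
prev = np.zeros(3)
for _ in range(200):
    w = optimistic_eg_step(w, c, prev)
    prev = c
```

After the loop, the iterate concentrates mass on the lowest-cost coordinate, illustrating why multiplicative updates of this kind converge quickly on simplex-constrained problems without any explicit projection step.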

Business Value

Enhances the understanding of security vulnerabilities in critical AI systems like autonomous vehicles and surveillance, driving the development of more robust detection models.
