
Targeted Attack Improves Protection against Unauthorized Diffusion Customization

📄 Abstract

Diffusion models set a new milestone for image generation, yet they also raise public concerns because they can be fine-tuned on unauthorized images for customization. Protection based on adversarial attacks has emerged to counter such unauthorized diffusion customization by adding protective watermarks to images that poison the diffusion models fine-tuned on them. However, current protections, which rely on untargeted attacks, do not appear to be effective enough. In this paper, we propose a simple yet effective improvement to protection against unauthorized diffusion customization by introducing targeted attacks. We show that, by carefully selecting the target, targeted attacks significantly outperform untargeted attacks in poisoning diffusion models and degrading the quality of customized images. Extensive experiments validate the superiority of our method over existing protections on two mainstream customization methods for diffusion models. To explain the surprising success of targeted attacks, we examine the mechanism of attack-based protections and propose a hypothesis based on our observations, which improves the understanding of attack-based protections. To the best of our knowledge, we are the first both to reveal the vulnerability of diffusion models to targeted attacks and to leverage targeted attacks to enhance protection against unauthorized diffusion customization. Our code is available on GitHub: https://github.com/psyker-team/mist-v2.
Authors (3)
Boyang Zheng
Chumeng Liang
Xiaoyu Wu
Submitted
October 7, 2023
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Proposes using targeted adversarial attacks to improve protection against unauthorized customization of diffusion models. By carefully selecting the attack target, the method significantly outperforms untargeted attacks, poisoning the fine-tuned diffusion models more effectively and further degrading the quality of customized images. A hedged sketch of the idea is given below.
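
To make the contribution concrete, here is a minimal, hypothetical PyTorch sketch contrasting untargeted and targeted protective perturbations. The `denoiser` callable, the choice of `target`, and all hyperparameters are illustrative assumptions, not the authors' implementation; see the mist-v2 repository for the actual method.

```python
import torch

def targeted_protect(x, target, denoiser, steps=40, eps=8 / 255, alpha=2 / 255):
    """Sketch of a targeted protective perturbation (PGD under an L-inf budget).

    Assumptions (not from the paper): `denoiser(x_noisy, t)` predicts the noise
    added to `x_noisy` at timestep `t`, and `target` has the same shape as that
    prediction.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        t = torch.randint(0, 1000, (x.shape[0],), device=x.device)
        noise = torch.randn_like(x_adv)
        # Simplified forward process: a real implementation would use the
        # scheduler's q_sample to noise x_adv before calling the denoiser.
        pred = denoiser(x_adv + noise, t)

        # Untargeted protection would *maximize* the denoising error, e.g.:
        #   loss = -((pred - noise) ** 2).mean()
        # Targeted protection instead pulls the prediction toward a chosen target:
        loss = ((pred - target) ** 2).mean()
        loss.backward()

        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # step toward the target
            delta.clamp_(-eps, eps)             # keep the watermark imperceptible
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```

The only change from an untargeted baseline is the loss: instead of pushing the denoiser's prediction away from the true noise, the perturbation pulls it toward a fixed target, which the paper reports poisons customization far more effectively.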

Business Value

Helps artists and content owners protect intellectual property and brand integrity by preventing their images from being used without authorization to customize generative models. It provides a practical deterrent against misuse of powerful image-generation tools.