📄 Abstract
Detecting falsified faces generated by Deepfake technology is essential for
safeguarding trust in digital communication and protecting individuals.
However, current detectors often suffer from dual overfitting: they become
overly specialized in both specific forgery fingerprints and particular
demographic attributes. Critically, most existing methods overlook the latter
issue, resulting in poor fairness: faces from certain demographic groups,
such as different genders or ethnicities, are consequently harder to detect
reliably. To address this challenge, we propose a novel strategy called
misleading-learning, which populates the latent space with a multitude of
redundant environments. By exposing the detector to a sufficiently rich and
balanced variety of high-level demographic information, our approach
mitigates demographic bias while maintaining high detection performance. We
conduct extensive evaluations on fairness, intra-domain detection,
cross-domain generalization, and robustness. Experimental results
demonstrate that our framework achieves superior fairness and generalization
compared to state-of-the-art approaches.
Key Contributions
This paper proposes 'misleading-learning' to address demographic bias in deepfake detection. By populating the latent space with redundant, balanced semantic environments, the method mitigates demographic bias (e.g., gender, ethnicity) while maintaining high detection performance, tackling the dual-overfitting issue of current detectors.
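The abstract describes misleading-learning only at a high level. As one plausible interpretation, the minimal sketch below pairs a standard real/fake classifier with several auxiliary heads trained on randomly assigned, demographically balanced pseudo-environment labels, so the latent space is filled with redundant high-level signals instead of a single true demographic attribute. All names here (`MisleadingDetector`, `env_heads`, `num_envs`) and the equal loss weighting are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the "misleading-learning" idea, not the paper's code.
import torch
import torch.nn as nn

class MisleadingDetector(nn.Module):
    def __init__(self, feat_dim: int = 128, num_envs: int = 8):
        super().__init__()
        # Stand-in encoder; a real detector would use a CNN/ViT backbone.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU()
        )
        self.forgery_head = nn.Linear(feat_dim, 2)  # real vs. fake
        # Redundant "environment" heads: each predicts a randomly assigned,
        # balanced pseudo-environment label, populating the latent space
        # with many redundant high-level signals (assumed design).
        self.env_heads = nn.ModuleList(
            nn.Linear(feat_dim, num_envs) for _ in range(num_envs)
        )

    def forward(self, x):
        z = self.encoder(x)
        env_logits = [head(z) for head in self.env_heads]
        return self.forgery_head(z), env_logits

def training_step(model, x, y_forgery, env_labels):
    """env_labels: one label tensor per environment head (random, balanced)."""
    ce = nn.CrossEntropyLoss()
    logits, env_logits = model(x)
    loss = ce(logits, y_forgery)
    # Auxiliary losses spread supervision across redundant environments so
    # no single demographic factor can dominate the learned representation.
    for head_logits, labels in zip(env_logits, env_labels):
        loss = loss + ce(head_logits, labels) / len(env_logits)
    return loss

if __name__ == "__main__":
    model = MisleadingDetector()
    x = torch.randn(4, 3, 32, 32)               # toy face crops
    y = torch.randint(0, 2, (4,))                # real/fake labels
    envs = [torch.randint(0, 8, (4,)) for _ in range(8)]  # pseudo-environments
    loss = training_step(model, x, y, envs)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```

The key assumption in this sketch is that balanced, redundant environment supervision acts as a regularizer on the latent space; the paper's actual construction of environments may differ.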
Business Value
Enhances trust in digital media by providing deepfake detection systems that are fair and reliable across all demographic groups, crucial for combating misinformation and protecting individuals.