This paper proposes a novel framework for detecting AI-generated images by leveraging epistemic uncertainty. The key observation is that the distributional shift between natural and generated images elevates the epistemic uncertainty of models trained only on natural data, so this uncertainty can serve as a detection signal.
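The idea can be illustrated with a minimal sketch. Here, epistemic uncertainty is approximated by the disagreement (prediction variance) across an ensemble of models trained on natural images; the `threshold` value and the toy inputs are hypothetical, not taken from the paper, and a real system would use trained networks and a calibrated threshold.

```python
import numpy as np

def epistemic_score(ensemble_probs):
    """Proxy for epistemic uncertainty: total variance of class
    probabilities across ensemble members. Higher disagreement
    suggests the input is far from the training distribution."""
    probs = np.asarray(ensemble_probs)  # shape: (n_members, n_classes)
    return float(probs.var(axis=0).sum())

def detect_generated(ensemble_probs, threshold=0.05):
    """Flag an image as likely AI-generated when the epistemic score
    exceeds a threshold (hypothetical value, would be calibrated on
    held-out natural images in practice)."""
    return epistemic_score(ensemble_probs) > threshold

# Members agree on an in-distribution (natural) input...
natural_probs = [[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]]
# ...but disagree on a distribution-shifted (generated) input.
shifted_probs = [[0.90, 0.10], [0.30, 0.70], [0.60, 0.40]]
```

In practice the ensemble could be replaced by other epistemic-uncertainty estimators (e.g. Monte Carlo dropout); the detection logic stays the same.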
The approach provides a more robust and generalizable method for detecting AI-generated content, which is crucial for maintaining trust in digital media and preventing malicious use of synthetic imagery.