📄 Abstract
Denoising diffusion models have recently achieved remarkable success in image
generation, capturing rich information about natural image statistics. This
makes them highly promising for image reconstruction, where the goal is to
recover a clean image from a degraded observation. In this work, we introduce a
conditional sampling framework that leverages the powerful priors learned by
diffusion models while enforcing consistency with the available measurements.
To further adapt pre-trained diffusion models to the specific degradation at
hand, we propose a novel fine-tuning strategy. In particular, we employ
LoRA-based adaptation using images that are semantically and visually similar
to the degraded input, efficiently retrieved from a large and diverse dataset
via an off-the-shelf vision-language model. We evaluate our approach on two
leading publicly available diffusion models (Stable Diffusion and Guided
Diffusion) and demonstrate that our method, termed Adaptive Diffusion for Image
Reconstruction (ADIR), yields substantial improvements across a range of image
reconstruction tasks.
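The retrieval step the abstract describes, finding images semantically similar to the degraded input via a vision-language model's embeddings, can be sketched at a high level as a nearest-neighbor search over precomputed embeddings. This is a minimal illustration, not the authors' implementation; the function name and the toy embeddings are assumptions.

```python
import numpy as np

def retrieve_top_k(query_emb, db_embs, k=3):
    """Return indices of the k database embeddings most similar to the query.

    Uses cosine similarity over L2-normalized embeddings, the standard
    metric for vision-language model (e.g. CLIP-style) features.
    Hypothetical helper for illustration only.
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to each database image
    return np.argsort(-sims)[:k]      # indices sorted most-similar first

# Toy example: 4 "database images" with 2-D embeddings.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([0.9, 0.1])
print(retrieve_top_k(query, db, k=2))  # → [0 2]
```

In the paper's pipeline, the retrieved images would then be used to fine-tune the pre-trained diffusion model with LoRA adapters before conditional sampling, so that the prior is adapted to content resembling the degraded input.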