Abstract
State-of-the-art text-to-image (T2I) diffusion models often struggle to
generate rare compositions of concepts, e.g., objects with unusual attributes.
In this paper, we show that the compositional generation power of diffusion
models on such rare concepts can be significantly enhanced by Large
Language Model (LLM) guidance. We start with empirical and theoretical
analysis, demonstrating that exposing frequent concepts relevant to the target
rare concepts during the diffusion sampling process yields more accurate
concept composition. Based on this, we propose a training-free approach, R2F,
that plans and executes the overall rare-to-frequent concept guidance
throughout the diffusion inference by leveraging the abundant semantic
knowledge in LLMs. Our framework is flexible, working with any pre-trained
diffusion model and LLM, and can be seamlessly integrated with region-guided
diffusion approaches. In extensive experiments on three datasets, including our
newly proposed benchmark RareBench, which contains various prompts with rare
concept compositions, R2F significantly surpasses existing models, including
SD3.0 and FLUX, by up to 28.1%p in T2I alignment. Code is available at
https://github.com/krafton-ai/Rare-to-Frequent.
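To make the rare-to-frequent idea concrete, below is a minimal, self-contained sketch of how an LLM-provided guidance plan could be scheduled across denoising steps: early steps condition on a frequent surrogate concept, later steps switch back to the original rare prompt. The `GuidanceStage` class, the `prompt_at_step` helper, the example plan, and the switch step are hypothetical illustrations under assumed names, not the released R2F implementation (see the repository linked above for the actual code).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GuidanceStage:
    """One stage of a hypothetical rare-to-frequent guidance plan.

    `prompt` is the (frequent or original rare) prompt to condition on,
    and `until_step` is the last denoising step at which it is used.
    """
    prompt: str
    until_step: int


def prompt_at_step(plan: List[GuidanceStage], step: int) -> str:
    """Return the prompt to condition on at a given denoising step.

    Early steps use a frequent surrogate concept; later steps switch back
    to the original rare prompt so details match the user's request.
    """
    for stage in plan:
        if step <= stage.until_step:
            return stage.prompt
    return plan[-1].prompt  # past the last stage, stay on the final (rare) prompt


# Illustrative plan: an LLM might map a rare prompt such as "a furry frog"
# to the frequent surrogate "a furry animal" for the early, layout-forming steps.
plan = [
    GuidanceStage(prompt="a furry animal", until_step=10),  # frequent concept first
    GuidanceStage(prompt="a furry frog", until_step=50),    # original rare concept last
]

num_steps = 50
for t in range(num_steps):
    prompt = prompt_at_step(plan, t)
    # a real diffusion pipeline would encode `prompt` and run one denoising step here
    if t in (0, 10, 11, 49):
        print(f"step {t:2d}: conditioning on '{prompt}'")
```

The single switch point here is only a stand-in for the LLM-planned schedule described in the abstract; in the paper's setting, the plan and its transition steps are produced by the LLM rather than fixed by hand.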