📄 Abstract
Unified generative models have shown remarkable performance in text and image
generation. For image synthesis tasks, they adopt straightforward text-to-image
(T2I) generation. However, direct T2I generation limits the models in handling
complex compositional instructions, which frequently occur in real-world
scenarios. Although this issue is vital, existing works mainly focus on
improving the basic image generation capability of the models. While such
improvements help to some extent, they still fail to adequately resolve the
problem. Inspired by Chain of Thought (CoT) reasoning, which solves complex
problems step by step, this work introduces CoT into unified generative models to address
the challenges of complex image generation that direct T2I generation cannot
effectively solve, thereby endowing models with enhanced image generation
ability. To achieve this, we first propose Functionality-oriented eXperts
(FoXperts), an expert-parallel architecture in our model FoX, which assigns
experts by function. FoXperts disentangles potential conflicts in mainstream
modality-oriented designs and provides a solid foundation for CoT. When
introducing CoT, the first question is how to design it for complex image
generation. To this end, we emulate a human-like artistic workflow -- planning,
acting, reflection, and correction -- and propose the Multimodal Chain of
Thought (MCoT) approach, since the data involves both text and images. To address
the subsequent challenge -- designing an effective MCoT training paradigm -- we
develop a multi-task joint training scheme that equips the model with all
capabilities required for each MCoT step in a disentangled manner. This
paradigm avoids the difficulty of collecting consistent multi-step data tuples.
Extensive experiments show that FoX consistently outperforms existing unified
models on various T2I benchmarks, delivering notable improvements in complex
image generation.
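The MCoT workflow described above (planning, acting, reflection, correction) can be illustrated with a minimal control-flow sketch. All function names and bodies below are hypothetical stand-ins, not the paper's actual model; they only show how the four steps compose into one generation loop.

```python
# Hypothetical sketch of the MCoT loop: plan -> act -> reflect -> correct.
# Strings stand in for images; the real model would call its image
# generation and understanding experts at each step.

def plan(prompt: str) -> list[str]:
    # Planning: decompose a compositional prompt into sub-instructions.
    return [p.strip() for p in prompt.split(" and ")]

def act(steps: list[str]) -> str:
    # Acting: stand-in for image generation from the planned steps.
    return " + ".join(steps)

def reflect(draft: str, steps: list[str]) -> list[str]:
    # Reflection: identify sub-instructions missing from the draft.
    return [s for s in steps if s not in draft]

def correct(draft: str, missing: list[str]) -> str:
    # Correction: stand-in for targeted re-generation of missing elements.
    return draft + "".join(f" + {m}" for m in missing)

def mcot_generate(prompt: str) -> str:
    steps = plan(prompt)             # planning
    draft = act(steps)               # acting
    missing = reflect(draft, steps)  # reflection
    return correct(draft, missing) if missing else draft  # correction

print(mcot_generate("a red cube and a blue sphere"))
```

The point of the loop is that correction only fires when reflection finds a mismatch, so a simple prompt passes through unchanged while a complex compositional one gets repaired.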
Authors (16)
Yi Wang
Mushui Liu
Wanggui He
Hanyang Yuan
Longxiang Zhang
Ziwei Huang
+10 more
Key Contributions
This work introduces a Multimodal Chain of Thought (MCoT) approach for unified generative models to enhance image generation, particularly for complex compositional instructions. It proposes an expert-parallel architecture (FoXperts) within the FoX model that assigns experts by function, addressing the limitations of direct text-to-image generation.
Business Value
Enables the creation of more sophisticated and controllable image generation tools, empowering artists, designers, and content creators to produce complex visuals with greater ease and precision.