Abstract: We propose Lavida-O, a unified Masked Diffusion Model (MDM) for multimodal
understanding and generation. Unlike existing multimodal MDMs such as MMaDa and
Muddit, which only support simple image-level understanding tasks and
low-resolution image generation, Lavida-O presents a single framework that
enables image-level understanding, object grounding, image editing, and
high-resolution (1024px) text-to-image synthesis. Lavida-O incorporates a novel
Elastic Mixture-of-Transformers (Elastic-MoT) architecture that couples a
lightweight generation branch with a larger understanding branch, supported by
token compression, universal text conditioning, and stratified sampling for
efficient and high-quality generation. Lavida-O further incorporates planning
and iterative self-reflection in image generation and editing tasks, seamlessly
boosting generation quality with its understanding capabilities. Lavida-O
achieves state-of-the-art performance on a wide range of benchmarks, including
RefCOCO object grounding, GenEval text-to-image generation, and ImgEdit image
editing, outperforming existing autoregressive models and continuous diffusion
models such as Qwen2.5-VL and FluxKontext-dev, while offering a considerable
speedup at inference. These advances establish Lavida-O as a new paradigm for
scalable multimodal reasoning and generation.
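
The coupling of a lightweight generation branch with a larger understanding branch in Elastic-MoT can be pictured with a minimal PyTorch sketch. Everything below (module names, widths, and the shared-attention projection used to join the two branches) is an illustrative assumption made for exposition, not Lavida-O's released implementation.

```python
# Hedged sketch of an Elastic Mixture-of-Transformers (Elastic-MoT) layer:
# two branches with different widths share one joint self-attention pass.
# All dimensions and module names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ElasticMoTLayer(nn.Module):
    """One layer with branch-specific weights: a larger understanding branch
    and a lighter generation branch, joined by self-attention over the
    concatenated token sequence."""

    def __init__(self, d_und=1024, d_gen=512, d_attn=1024, n_heads=16):
        super().__init__()
        # Per-branch projections into a shared attention width (assumption).
        self.qkv_und = nn.Linear(d_und, 3 * d_attn)
        self.qkv_gen = nn.Linear(d_gen, 3 * d_attn)
        self.out_und = nn.Linear(d_attn, d_und)
        self.out_gen = nn.Linear(d_attn, d_gen)
        self.attn = nn.MultiheadAttention(d_attn, n_heads, batch_first=True)
        # Branch-specific feed-forward networks; the generation FFN is smaller.
        self.ffn_und = nn.Sequential(nn.Linear(d_und, 4 * d_und), nn.GELU(),
                                     nn.Linear(4 * d_und, d_und))
        self.ffn_gen = nn.Sequential(nn.Linear(d_gen, 2 * d_gen), nn.GELU(),
                                     nn.Linear(2 * d_gen, d_gen))

    def forward(self, x_und, x_gen):
        # Project each branch's tokens, then attend jointly over both sequences.
        q_u, k_u, v_u = self.qkv_und(x_und).chunk(3, dim=-1)
        q_g, k_g, v_g = self.qkv_gen(x_gen).chunk(3, dim=-1)
        q = torch.cat([q_u, q_g], dim=1)
        k = torch.cat([k_u, k_g], dim=1)
        v = torch.cat([v_u, v_g], dim=1)
        joint, _ = self.attn(q, k, v)
        n_und = x_und.shape[1]
        h_und = x_und + self.out_und(joint[:, :n_und])
        h_gen = x_gen + self.out_gen(joint[:, n_und:])
        # Separate FFNs keep the generation path lightweight.
        return h_und + self.ffn_und(h_und), h_gen + self.ffn_gen(h_gen)


if __name__ == "__main__":
    layer = ElasticMoTLayer()
    und_tokens = torch.randn(2, 77, 1024)   # e.g. text/understanding tokens
    gen_tokens = torch.randn(2, 256, 512)   # e.g. (compressed) image tokens
    u, g = layer(und_tokens, gen_tokens)
    print(u.shape, g.shape)
```

Under these assumptions, the generation branch adds relatively few parameters per layer while still exchanging information with the understanding branch at every attention step, which is one plausible way to read the "elastic" coupling described in the abstract.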