Abstract
Recent advances in generative modeling have positioned diffusion models as
state-of-the-art tools for sampling from complex data distributions. While
these models have shown remarkable success across single-modality domains such
as images and audio, extending their capabilities to Modality Translation (MT),
i.e., translating information across different sensory modalities, remains an open
challenge. Existing approaches often rely on restrictive assumptions, including
shared dimensionality, Gaussian source priors, and modality-specific
architectures, which limit their generality and theoretical grounding. In this
work, we propose the Latent Denoising Diffusion Bridge Model (LDDBM), a
general-purpose framework for modality translation based on a latent-variable
extension of Denoising Diffusion Bridge Models. By operating in a shared latent
space, our method learns a bridge between arbitrary modalities without
requiring aligned dimensions. We introduce a contrastive alignment loss to
enforce semantic consistency between paired samples and design a
domain-agnostic encoder-decoder architecture tailored for noise prediction in
latent space. Additionally, we propose a predictive loss to guide training
toward accurate cross-domain translation and explore several training
strategies to improve stability. Our approach supports arbitrary modality pairs
and performs strongly on diverse MT tasks, including multi-view to 3D shape
generation, image super-resolution, and multi-view scene synthesis.
Comprehensive experiments and ablations validate the effectiveness of our
framework, establishing a new strong baseline in general modality translation.
For more information, see our project page:
https://sites.google.com/view/lddbm/home.
Authors (5)
Nimrod Berman
Omkar Joglekar
Eitan Kosman
Dotan Di Castro
Omri Azencot
Submitted
October 23, 2025
Key Contributions
The paper proposes the Latent Denoising Diffusion Bridge Model (LDDBM), a general-purpose framework for modality translation that operates in a shared latent space. By not requiring aligned dimensions and by introducing a contrastive alignment loss for semantic consistency between paired samples, this approach overcomes limitations of existing methods and enables translation between arbitrary modality pairs.
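The abstract describes three training signals: a denoising bridge objective in a shared latent space, a contrastive alignment loss between paired samples, and a predictive loss that guides training toward accurate cross-domain translation. The PyTorch sketch below shows one plausible way these pieces could fit together; the module names, the InfoNCE-style alignment term, the simple Brownian-bridge interpolation, and the loss weights lam_align and lam_pred are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (assumptions noted in comments), not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128  # illustrative shared latent dimensionality

class Encoder(nn.Module):
    """Maps a flattened modality sample into the shared latent space."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.SiLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps a latent back to the target modality."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.SiLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, z):
        return self.net(z)

class NoisePredictor(nn.Module):
    """Predicts the noise added to the bridged latent, conditioned on time t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 256), nn.SiLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, t], dim=-1))

def info_nce(z_src, z_tgt, temperature=0.1):
    """Contrastive alignment: paired latents are positives, other in-batch latents are negatives."""
    z_src = F.normalize(z_src, dim=-1)
    z_tgt = F.normalize(z_tgt, dim=-1)
    logits = z_src @ z_tgt.t() / temperature
    labels = torch.arange(z_src.size(0), device=z_src.device)
    return F.cross_entropy(logits, labels)

def training_step(enc_src, enc_tgt, dec_tgt, eps_model, x_src, x_tgt,
                  lam_align=0.1, lam_pred=1.0):
    z_src, z_tgt = enc_src(x_src), enc_tgt(x_tgt)           # endpoints of the latent bridge
    t = torch.rand(z_src.size(0), 1, device=z_src.device)   # bridge time in (0, 1)
    noise = torch.randn_like(z_tgt)
    # Assumed Brownian-bridge-style interpolation between the two latent endpoints.
    sigma_t = torch.sqrt(t * (1.0 - t))
    z_t = (1.0 - t) * z_src + t * z_tgt + sigma_t * noise
    loss_bridge = F.mse_loss(eps_model(z_t, t), noise)       # noise-prediction (bridge) loss
    loss_align = info_nce(z_src, z_tgt)                      # contrastive alignment of paired latents
    loss_pred = F.mse_loss(dec_tgt(z_tgt), x_tgt)            # predictive / reconstruction term (illustrative)
    return loss_bridge + lam_align * loss_align + lam_pred * loss_pred

In a faithful LDDBM implementation the encoders, decoder, and noise predictor would be modality-appropriate networks and the bridge schedule would follow the Denoising Diffusion Bridge Model formulation; the sketch is only meant to show how the three loss terms combine into a single objective.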
Business Value
Enables richer and more integrated AI systems by allowing seamless translation between different data types (e.g., text to image, audio to video), opening possibilities for advanced content creation, data augmentation, and cross-modal search.