📄 Abstract
Diffusion models have emerged as a promising alternative to autoregressive models for modeling discrete categorical data. However, diffusion models that operate directly on the discrete state space fail to fully exploit the power of iterative refinement, as signal is lost during transitions between discrete states. Existing continuous diffusion models for discrete data underperform their discrete counterparts, and the lack of a clear connection between the two approaches hinders the development of effective diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between discrete diffusion and continuous flow on the statistical manifold and, building on this analogy, introduce a simple diffusion process that generalizes existing discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry, along with a simple technique for addressing the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. The code is available at https://github.com/harryjo97/RDLM.
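To make the geometry the abstract refers to concrete: under the Fisher-Rao metric, the simplex of categorical distributions is isometric (up to a constant factor) to the positive orthant of a unit hypersphere via the square-root map p ↦ √p. The sketch below illustrates only this standard construction; it is not taken from the authors' code and assumes nothing about RDLM beyond it.

```python
import numpy as np

def simplex_to_sphere(p):
    """Map a categorical distribution p (a point on the simplex) to the
    point sqrt(p) on the positive orthant of the unit hypersphere."""
    p = np.asarray(p, dtype=float)
    assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
    return np.sqrt(p)

def fisher_rao_distance(p, q):
    """Fisher-Rao distance between categorical distributions p and q,
    i.e. twice the great-circle distance between sqrt(p) and sqrt(q)."""
    inner = np.clip(np.dot(simplex_to_sphere(p), simplex_to_sphere(q)), -1.0, 1.0)
    return 2.0 * np.arccos(inner)

# Example: a one-hot token versus the uniform distribution over 4 categories.
one_hot = np.array([1.0, 0.0, 0.0, 0.0])
uniform = np.full(4, 0.25)
print(fisher_rao_distance(one_hot, uniform))  # ~2.0944 (= 2 * arccos(1/2))
```

One-hot distributions (tokens) sit at the corners of the orthant, so a continuous flow on this sphere can interpolate between discrete states without leaving the manifold of valid distributions.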
Authors (2)
Jaehyeong Jo
Sung Ju Hwang
Submitted
February 17, 2025
Key Contributions
This paper proposes a novel continuous diffusion model for language modeling that leverages the geometry of categorical distributions, establishing a connection between discrete and continuous diffusion via statistical manifolds. It introduces a generalized diffusion process and a simulation-free training framework that aim to overcome the limitations of existing discrete diffusion models and of underperforming continuous variants; a sketch of the simulation-free idea follows below.
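The "simulation-free training based on radial symmetry" can be intuited as follows: if the forward transition density on the sphere depends only on the geodesic distance from the starting point, sampling a noisy state requires no SDE simulation, just a draw of the radial distance plus a uniformly random tangent direction. The sketch below illustrates that reduction under this assumption; `sample_at_distance` and the choice of radial law are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def sphere_exp_map(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x
    along tangent vector v for length ||v||."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

def sample_at_distance(x, r, rng):
    """Draw a point uniformly among those at geodesic distance r from x.
    With a radially symmetric transition density, sampling the forward
    process reduces to drawing r from a 1-D law and then calling this.
    (For illustration only; the sample may leave the positive orthant.)"""
    g = rng.standard_normal(x.shape[0])
    v = g - np.dot(g, x) * x          # project Gaussian onto tangent space at x
    v *= r / np.linalg.norm(v)        # uniform tangent direction, length r
    return sphere_exp_map(x, v)

rng = np.random.default_rng(0)
x = np.zeros(4); x[0] = 1.0           # sqrt of a one-hot distribution
y = sample_at_distance(x, 0.3, rng)
print(np.arccos(np.clip(np.dot(x, y), -1, 1)))  # ~0.3, as requested
```

Because the noisy state is sampled in closed form, the training loop never has to integrate the diffusion step by step, which is what makes the framework simulation-free.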
Business Value
Offers a new paradigm for generative text models, potentially leading to more coherent, diverse, and controllable text generation compared to current autoregressive or discrete diffusion methods.