
Finish First, Perfect Later: Test-Time Token-Level Cross-Validation for Diffusion Large Language Models

Abstract

Diffusion large language models (dLLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering advantages such as accelerated parallel decoding and bidirectional context modeling. However, the vanilla decoding strategy in discrete dLLMs suffers from a critical limitation: once a token is accepted, it can no longer be revised in subsequent steps. As a result, early mistakes persist across iterations, harming both intermediate predictions and final output quality. To address this issue, we propose Tolerator (Token-Level Cross-Validation Refinement), a training-free decoding strategy that leverages cross-validation among predicted tokens. Unlike existing methods that follow a single progressive unmasking procedure, Tolerator introduces a two-stage process: (i) sequence fill-up and (ii) iterative refinement by remasking and decoding a subset of tokens while treating the remaining tokens as context. This design enables previously accepted tokens to be reconsidered and corrected when necessary, leading to more reliable diffusion decoding outputs. We evaluate Tolerator on five standard benchmarks covering language understanding, code generation, and mathematics. Experiments show that our method achieves consistent improvements over the baselines under the same computational budget. These findings suggest that decoding algorithms are crucial to realizing the full potential of diffusion large language models. Code and data are publicly available.

Key Contributions

Tolerator is a training-free decoding strategy for diffusion LLMs (dLLMs) that addresses the issue of irreversible token acceptance. It employs a two-stage process involving sequence fill-up and iterative refinement via token-level cross-validation, enabling correction of early mistakes and improving final output quality.
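To make the two-stage loop concrete, here is a minimal sketch of Tolerator-style decoding. It is not the paper's implementation: the model interface (`model(ids)` returning per-position logits), the `MASK_ID` value, the function name `tolerator_decode`, and the confidence-based rules for which tokens to accept and which to remask are all illustrative assumptions; the paper's exact selection heuristics may differ.

```python
import torch

MASK_ID = 126336  # hypothetical [MASK] token id; use the target dLLM's real one


@torch.no_grad()
def tolerator_decode(model, prompt_ids, gen_len=128,
                     fill_steps=64, refine_steps=64, remask_frac=0.1):
    """Two-stage Tolerator-style decoding sketch.

    Assumes `model(ids)` returns logits of shape [seq_len, vocab_size]
    for a 1-D LongTensor `ids`.
    """
    ids = torch.cat([prompt_ids,
                     torch.full((gen_len,), MASK_ID, dtype=torch.long)])
    gen = torch.arange(len(prompt_ids), len(ids))  # generated positions

    # Stage 1: sequence fill-up. Progressively commit the most confident
    # predictions until no [MASK] tokens remain.
    per_step = max(1, -(-gen_len // fill_steps))   # ceil(gen_len / steps)
    for _ in range(fill_steps):
        masked = gen[ids[gen] == MASK_ID]
        if masked.numel() == 0:
            break
        probs = model(ids)[masked].softmax(-1)
        conf, pred = probs.max(-1)
        top = conf.topk(min(per_step, masked.numel())).indices
        ids[masked[top]] = pred[top]

    # Stage 2: iterative refinement. Remask a subset of already-accepted
    # tokens (here: the least confident ones, an assumed heuristic) and
    # re-decode them with the rest of the sequence as bidirectional
    # context, so early mistakes can still be corrected.
    n_remask = max(1, int(remask_frac * gen_len))
    for _ in range(refine_steps):
        probs = model(ids)[gen].softmax(-1)
        conf = probs.gather(-1, ids[gen].unsqueeze(-1)).squeeze(-1)
        victims = gen[conf.argsort()[:n_remask]]   # lowest-confidence tokens
        ids[victims] = MASK_ID
        ids[victims] = model(ids)[victims].argmax(-1)
    return ids
```

The point of the sketch is Stage 2: unlike vanilla progressive unmasking, the loop re-queries the model with previously accepted tokens remasked, letting the remaining tokens cross-validate and revise them.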

Business Value

Improves the quality and reliability of text generated by diffusion LLMs, making them better suited to applications that demand high fidelity and accuracy, such as content creation or code generation.