
Planned Diffusion

📄 Abstract

A central challenge in large language model inference is the trade-off between generation speed and output quality. Autoregressive models produce high-quality text but generate tokens sequentially. Diffusion models can generate tokens in parallel but often need many iterations to match the same quality. We propose planned diffusion, a hybrid method that combines the strengths of both paradigms. Planned diffusion works in two stages: first, the model creates a short autoregressive plan that breaks the output into smaller, independent spans. Second, the model generates these spans simultaneously using diffusion. This approach expands the speed-quality Pareto frontier and provides a practical path to faster, high-quality text generation. On AlpacaEval, a suite of 805 instruction-following prompts, planned diffusion achieves a Pareto-optimal trade-off between quality and latency, achieving 1.27x to 1.81x speedup over autoregressive generation with only a 0.87% to 5.4% drop in win rate, respectively. Our sensitivity analysis shows that the planning mechanism of planned diffusion is minimal and reliable, and that simple runtime knobs provide flexible control over the quality-latency trade-off.
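
The two-stage procedure the abstract describes is straightforward to sketch. The snippet below is a minimal, hypothetical illustration of planned-diffusion inference, assuming a planner that emits a list of independent spans and a diffusion model that can denoise all spans in one batched call. The `Span`, `ToyPlanner`, and `ToyDiffuser` names and interfaces are illustrative stand-ins, not the paper's implementation.

```python
"""Minimal sketch of planned-diffusion inference: stage 1 is a short
autoregressive plan, stage 2 is parallel diffusion over the planned spans.
All classes here are hypothetical stand-ins, not the paper's code."""
from dataclasses import dataclass


@dataclass
class Span:
    tag: str      # short label for what the span should cover
    length: int   # token budget reserved for the span


class ToyPlanner:
    """Stand-in for the short autoregressive planning pass."""
    def plan(self, prompt: str) -> list[Span]:
        # A real planner would decode a few tokens describing independent spans.
        return [Span("first half of the answer", 8), Span("second half", 8)]


class ToyDiffuser:
    """Stand-in for the diffusion model that fills all spans in parallel."""
    def init_masked(self, length: int) -> list[str]:
        return ["[MASK]"] * length

    def denoise_all(self, prompt: str, plan: list[Span],
                    drafts: list[list[str]]) -> list[list[str]]:
        # A real model would refine every span in one batched forward pass;
        # here we just replace masks with placeholder tokens.
        return [[f"tok{i}" for i in range(len(d))] for d in drafts]

    def decode(self, tokens: list[str]) -> str:
        return " ".join(tokens)


def planned_diffusion_generate(prompt, planner, diffuser, num_steps=16):
    # Stage 1: a short autoregressive plan splits the output into independent spans.
    plan = planner.plan(prompt)

    # Stage 2: all spans are denoised simultaneously; this parallel fill-in,
    # rather than token-by-token decoding, is the source of the speedup.
    drafts = [diffuser.init_masked(s.length) for s in plan]
    for _ in range(num_steps):  # fewer steps -> lower latency, lower quality
        drafts = diffuser.denoise_all(prompt, plan, drafts)

    # Stitch the finished spans back together in plan order.
    return " ".join(diffuser.decode(d) for d in drafts)


print(planned_diffusion_generate("Explain planned diffusion.", ToyPlanner(), ToyDiffuser()))
```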
Authors (7)
Daniel Israel
Tian Jin
Ellie Cheng
Guy Van den Broeck
Aditya Grover
Suvinay Subramanian
+1 more
Submitted: October 20, 2025
arXiv Category: cs.AI

Key Contributions

Planned Diffusion is a hybrid method that addresses the speed-quality trade-off in LLM inference by combining autoregressive planning with parallel diffusion generation. It first creates a short autoregressive plan that breaks the output into independent spans, which are then generated simultaneously with diffusion, expanding the speed-quality Pareto frontier for faster, high-quality text generation.
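
The abstract also notes that simple runtime knobs control where generation lands on this Pareto frontier. One plausible such knob, reusing the toy sketch above, is the number of denoising steps. The sweep below is purely illustrative: the timings it prints are not the paper's reported results, and quality would have to be judged separately (e.g., with a win-rate evaluation such as AlpacaEval).

```python
import time


def sweep_denoising_steps(prompt, planner, diffuser, step_counts=(4, 8, 16, 32)):
    """Illustrative latency sweep over a hypothetical runtime knob (denoising steps)."""
    for steps in step_counts:
        start = time.perf_counter()
        text = planned_diffusion_generate(prompt, planner, diffuser, num_steps=steps)
        latency_ms = (time.perf_counter() - start) * 1000
        # Quality is judged externally (e.g., win rate on AlpacaEval); here we
        # only report latency and output size for each knob setting.
        print(f"steps={steps:>3}  latency={latency_ms:6.2f} ms  chars={len(text)}")


sweep_denoising_steps("Explain planned diffusion.", ToyPlanner(), ToyDiffuser())
```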

Business Value

Enables significantly faster generation of high-quality text, which can lead to more responsive and cost-effective AI applications, such as real-time content creation, interactive storytelling, and faster chatbot responses.