
Fast Inference via Hierarchical Speculative Decoding

📄 Abstract

Transformer language models generate text autoregressively, making inference latency proportional to the number of tokens generated. Speculative decoding reduces this latency without sacrificing output quality by leveraging a small draft model to propose tokens that the larger target model verifies in parallel. In practice, however, there may exist a set of potential draft models, ranging from faster but less accurate to slower yet more reliable. We introduce Hierarchical Speculative Decoding (HSD), an algorithm that stacks these draft models into a hierarchy, where each model proposes tokens and the next larger model verifies them in a single forward pass, until finally the target model verifies the tokens. We derive an expression for the expected latency of any such hierarchy and show that selecting the latency-optimal hierarchy can be done in polynomial time. Empirically, HSD gives up to a 1.2x speed-up over the best single-draft baseline, demonstrating the practicality of our algorithm in reducing generation latency beyond previous techniques.
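
The sketch below illustrates the draft-then-verify chain the abstract describes. It is not the paper's implementation: the callable "model" interface, the fixed draft length, and the greedy accept rule (real speculative decoding uses a probabilistic accept/reject step that preserves the target distribution) are all simplifying assumptions made here for clarity.

```python
from typing import Callable, List

# A "model" here is just a function mapping a token prefix to its greedy
# next-token choice. Real models return full distributions and are verified
# with a probabilistic accept/reject rule; greedy matching is a simplification.
Model = Callable[[List[int]], int]


def autoregress(model: Model, prefix: List[int], n: int) -> List[int]:
    """Plain autoregressive decoding: the smallest draft proposes n tokens."""
    ctx = list(prefix)
    for _ in range(n):
        ctx.append(model(ctx))
    return ctx[len(prefix):]


def verify(model: Model, prefix: List[int], proposal: List[int]) -> List[int]:
    """Greedy verification: keep the longest prefix of `proposal` the verifier
    agrees with, then append the verifier's own token at the first mismatch
    (or a bonus token if everything was accepted). In practice this is a
    single batched forward pass; here it is an explicit loop."""
    ctx = list(prefix)
    accepted: List[int] = []
    for tok in proposal:
        own = model(ctx)
        if own != tok:
            accepted.append(own)
            return accepted
        accepted.append(tok)
        ctx.append(tok)
    accepted.append(model(ctx))
    return accepted


def hsd_step(models: List[Model], prefix: List[int], draft_len: int) -> List[int]:
    """One HSD round: models[0] (smallest) drafts, each larger model verifies
    what the level below passed up, and models[-1] is the target."""
    proposal = autoregress(models[0], prefix, draft_len)
    for verifier in models[1:]:
        proposal = verify(verifier, prefix, proposal)
    return proposal


# Toy usage: a deterministic stand-in "target", a perfect mid-sized draft,
# and a small draft that is occasionally wrong.
target = lambda ctx: (len(ctx) * 3) % 5
mid = lambda ctx: target(ctx)
small = lambda ctx: target(ctx) if len(ctx) % 4 else 0

tokens = [1, 2]
for _ in range(3):
    tokens += hsd_step([small, mid, target], tokens, draft_len=4)
print(tokens)
```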
Authors (5)
Clara Mohri
Haim Kaplan
Tal Schuster
Yishay Mansour
Amir Globerson
Submitted
October 22, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

Introduces Hierarchical Speculative Decoding (HSD), an algorithm that stacks multiple draft models of varying sizes to accelerate inference in large language models. Each model's proposed tokens are verified in a single forward pass by the next larger model in the hierarchy, and the latency-optimal hierarchy can be selected in polynomial time. HSD achieves up to a 1.2x speed-up over the best single-draft baseline while maintaining output quality.
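
The abstract also notes that the latency-optimal hierarchy can be found in polynomial time from a derived expected-latency expression. Neither that expression nor the paper's selection algorithm is reproduced below; this brute-force sketch only shows the shape of the selection problem, and the `estimate_latency` callback (e.g. backed by offline profiling) is a hypothetical stand-in.

```python
from itertools import combinations
from typing import Callable, Sequence, Tuple


def best_hierarchy(
    drafts: Sequence[str],                       # candidate drafts, smallest to largest
    target: str,
    estimate_latency: Callable[[Tuple[str, ...]], float],
) -> Tuple[Tuple[str, ...], float]:
    """Try every subset of drafts (kept in size order) beneath the target and
    return the chain with the lowest estimated latency. Exponential in the
    number of candidates, unlike the paper's polynomial-time algorithm, but
    fine for the handful of draft models typically available."""
    best_chain: Tuple[str, ...] = (target,)      # baseline: no drafts at all
    best_cost = estimate_latency(best_chain)
    for k in range(1, len(drafts) + 1):
        for subset in combinations(drafts, k):   # combinations preserve order
            chain = subset + (target,)
            cost = estimate_latency(chain)
            if cost < best_cost:
                best_chain, best_cost = chain, cost
    return best_chain, best_cost


# Toy usage with made-up per-chain costs (stand-ins for profiled numbers).
fake_costs = {
    ("70B",): 1.00,
    ("1B", "70B"): 0.55,
    ("8B", "70B"): 0.70,
    ("1B", "8B", "70B"): 0.45,
}
chain, cost = best_hierarchy(["1B", "8B"], "70B", lambda c: fake_costs[c])
print(chain, cost)   # ('1B', '8B', '70B') 0.45
```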

Business Value

Makes large language model inference faster and more efficient, reducing operational cost, improving the user experience of deployed models, and enabling latency-sensitive, real-time applications.