
HeteroSpec: Leveraging Contextual Heterogeneity for Efficient Speculative Decoding

📄 Abstract

Autoregressive decoding inherently limits the inference throughput of Large Language Models (LLMs) due to its sequential dependency. Speculative decoding mitigates this by verifying multiple predicted tokens in parallel, but its efficiency remains constrained by what we identify as verification heterogeneity: the uneven difficulty of verifying different speculative candidates. In practice, a small subset of high-confidence predictions accounts for most successful verifications, yet existing methods treat all candidates uniformly, leading to redundant computation. We present HeteroSpec, a heterogeneity-adaptive speculative decoding framework that allocates verification effort in proportion to candidate uncertainty. HeteroSpec estimates verification complexity with a lightweight entropy-based quantifier, partitions candidates via a data-driven stratification policy, and dynamically tunes speculative depth and pruning thresholds through coordinated optimization. Across five benchmarks and four LLMs, HeteroSpec delivers an average 4.24$\times$ decoding speedup over state-of-the-art methods such as EAGLE-3 while preserving exact output distributions. Crucially, HeteroSpec requires no model retraining and remains compatible with other inference optimizations, making it a practical direction for improving speculative decoding efficiency.
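
The core mechanism the abstract describes, scoring each speculative candidate by the entropy of its draft distribution and then stratifying candidates to decide how much verification effort they receive, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration, not the authors' implementation; the function names and the fixed entropy cutpoints are assumptions (the paper derives its stratification policy from data).

```python
import torch

# Minimal sketch of an entropy-based complexity quantifier and stratifier.
# Names and cutpoints are illustrative assumptions, not the paper's code.

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the draft model's next-token distribution.

    logits: (num_candidates, vocab_size) raw draft-model scores.
    Low entropy means a confident draft, i.e. a cheap-to-verify candidate.
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

def stratify(entropies: torch.Tensor,
             cutpoints: tuple[float, ...] = (0.5, 1.5)) -> torch.Tensor:
    """Assign each candidate to an entropy stratum.

    HeteroSpec fits its stratification policy from data; the fixed
    cutpoints here are placeholders. Returns 0 (easy), 1 (medium),
    or 2 (hard) per candidate.
    """
    return torch.bucketize(entropies, torch.tensor(cutpoints))
```

Under this sketch, stratum-0 candidates (confident drafts) would be speculated deeper while stratum-2 candidates would be pruned early, matching the paper's observation that a small high-confidence subset accounts for most successful verifications.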
Authors (5)
Siran Liu
Yang Ye
Qianchao Zhu
Zane Cao
Yongchao He
Submitted
May 19, 2025
arXiv Category
cs.CL

Key Contributions

Introduces HeteroSpec, a heterogeneity-adaptive speculative decoding framework that allocates verification effort based on candidate uncertainty. It quantifies verification complexity with a lightweight entropy-based measure, partitions candidates through data-driven stratification, and coordinates speculative depth with pruning thresholds, significantly improving inference throughput without any model retraining.
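
To make the coordinated tuning of depth and pruning concrete, one simple way to realize it is a per-stratum lookup that extends speculation for easy candidates and prunes hard ones early. The following is a hedged sketch; the mapping and every numeric value are assumptions for illustration, not the paper's tuned policy.

```python
# Hypothetical per-stratum policy; depths and thresholds below are
# illustrative placeholders, not values reported in the paper.
DEPTH_BY_STRATUM = {0: 8, 1: 5, 2: 2}                     # max speculative depth
PRUNE_THRESHOLD_BY_STRATUM = {0: 0.02, 1: 0.10, 2: 0.30}  # min draft prob to keep a branch

def plan_speculation(stratum: int) -> tuple[int, float]:
    """Map an entropy stratum to (speculative depth, pruning threshold)."""
    return DEPTH_BY_STRATUM[stratum], PRUNE_THRESHOLD_BY_STRATUM[stratum]
```

Because the policy only consumes draft-model statistics and leaves the target model's accept/reject rule untouched, it is consistent with the paper's claims of preserving exact output distributions and requiring no retraining.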

Business Value

Enables faster and more cost-effective deployment of LLMs by reducing inference latency and computational requirements, making real-time applications more feasible.