
AdaSPEC: Selective Knowledge Distillation for Efficient Speculative Decoders

Abstract

Speculative Decoding (SD) accelerates large language model inference by employing a small draft model to generate predictions, which are then verified by a larger target model. The effectiveness of SD hinges on the alignment between these models, which is typically enhanced by Knowledge Distillation (KD). However, conventional KD methods aim to minimize the KL divergence between the draft and target models across all tokens, a goal that is misaligned with the true objective of SD: maximizing the token acceptance rate. Moreover, draft models often struggle to fully assimilate the target model's knowledge due to capacity constraints, leading to suboptimal performance. To address this challenge, we propose AdaSPEC, a novel method that incorporates selective token filtering into the KD process. AdaSPEC utilizes a reference model to identify and filter out difficult-to-fit tokens, enabling the distillation of a draft model that better aligns with the target model on simpler tokens. This approach improves the overall token acceptance rate without compromising generation quality. We evaluate AdaSPEC across diverse tasks, including arithmetic reasoning, instruction-following, coding, and summarization, using model configurations of 31M/1.4B and 350M/2.7B parameters. Our results demonstrate that AdaSPEC consistently outperforms the state-of-the-art DistillSpec method, achieving higher acceptance rates across all tasks (up to 15%). The code is publicly available at https://github.com/yuezhouhu/adaspec.
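For context, the acceptance rate the abstract refers to comes from the standard speculative-decoding test: a token drafted with probability q, to which the target assigns probability p, is accepted with probability min(1, p/q). Below is a minimal PyTorch sketch of that standard rule; the function name and interface are illustrative, not code from the AdaSPEC repository.

```python
import torch

def count_accepted(draft_probs: torch.Tensor,
                   target_probs: torch.Tensor,
                   draft_tokens: torch.Tensor) -> int:
    """Standard speculative-decoding acceptance test.

    draft_probs, target_probs: [T, V] per-position next-token distributions.
    draft_tokens: [T] tokens sampled from the draft model.
    Returns how many leading draft tokens the target accepts.
    """
    accepted = 0
    for t, tok in enumerate(draft_tokens.tolist()):
        p = target_probs[t, tok]  # target probability of the drafted token
        q = draft_probs[t, tok]   # draft probability of the same token
        # Accept with probability min(1, p/q); the first rejection ends the run.
        if torch.rand(()) < torch.clamp(p / q, max=1.0):
            accepted += 1
        else:
            break
    return accepted
```

The token acceptance rate is the expected fraction of drafted tokens passing this test, which is why aligning the draft with the target on tokens the draft can actually fit matters more than uniformly minimizing KL over every token.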
Authors (4)
Yuezhou Hu
Jiaxin Guo
Xinyu Feng
Tuo Zhao
Submitted
October 22, 2025
arXiv Category
cs.CL

Key Contributions

This paper introduces AdaSPEC, a novel method for efficient speculative decoding (SD) that improves knowledge distillation (KD) by incorporating selective token filtering. AdaSPEC uses a reference model to filter out difficult-to-distill tokens, enabling the draft model to better align with the target model on simpler tokens, thereby maximizing token acceptance rate and accelerating LLM inference.
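Based only on the description above, here is a minimal sketch of what selective token filtering during distillation could look like. Everything in it is an assumption for illustration: the difficulty proxy (per-token reference-model cross-entropy), the keep ratio, and the forward-KL objective are not taken from the paper; see the linked repository for the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def selective_kd_loss(draft_logits, target_logits, ref_logits, labels,
                      keep_ratio=0.8):
    """Hypothetical selective KD loss: distill only on 'easy' tokens.

    draft_logits, target_logits, ref_logits: [B, T, V] logits.
    labels: [B, T] ground-truth token ids (-100 = ignore).
    keep_ratio: fraction of valid tokens retained for distillation (assumed).
    """
    B, T, V = draft_logits.shape
    valid = labels.ne(-100)

    # Per-token difficulty proxy: the reference model's cross-entropy
    # on the ground-truth token (high CE = hard-to-fit token).
    ref_ce = F.cross_entropy(
        ref_logits.reshape(-1, V), labels.clamp(min=0).reshape(-1),
        reduction="none").reshape(B, T)
    ref_ce = ref_ce.masked_fill(~valid, float("inf"))

    # Keep only the easiest keep_ratio fraction of valid tokens.
    k = max(1, int(valid.sum().item() * keep_ratio))
    thresh = ref_ce.flatten().kthvalue(k).values
    keep = valid & (ref_ce <= thresh)

    # Forward KL from target to draft, averaged over kept tokens only.
    kl = F.kl_div(
        F.log_softmax(draft_logits, dim=-1),
        F.log_softmax(target_logits, dim=-1),
        log_target=True, reduction="none").sum(-1)  # [B, T]
    return kl[keep].mean()
```

The point the sketch captures is that the distillation loss is averaged only over the retained tokens, so the draft model's limited capacity is spent where alignment with the target is achievable rather than on tokens it cannot fit anyway.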

Business Value

Reduces the computational cost and latency of LLM inference, making large models more practical and affordable for real-time applications and large-scale deployments.