
GRIFFIN: Effective Token Alignment for Faster Speculative Decoding

Abstract

Speculative decoding accelerates inference in large language models (LLMs) by generating multiple draft tokens simultaneously. However, existing methods often struggle with token misalignment between the training and decoding phases, limiting their performance. To address this, we propose GRIFFIN, a novel framework that incorporates a token-alignable training strategy and a token-alignable draft model to mitigate misalignment. The training strategy employs a loss masking mechanism to exclude highly misaligned tokens during training, preventing them from negatively impacting the draft model's optimization. The token-alignable draft model introduces input tokens to correct inconsistencies in generated features. Experiments on LLaMA, Vicuna, Qwen, and Mixtral models demonstrate that GRIFFIN achieves an average acceptance length improvement of over 8% and a speedup ratio exceeding 7%, outperforming current state-of-the-art speculative decoding methods. Our code and GRIFFIN's draft models are released publicly at https://github.com/hsj576/GRIFFIN.
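The abstract's core training idea is loss masking: tokens judged too misaligned between the draft and target models are simply excluded from the draft model's training loss. GRIFFIN's exact alignment criterion is defined in the paper; the sketch below only illustrates the masking mechanic, using a hypothetical per-token boolean mask as a stand-in for that criterion.

```python
import math

def masked_draft_loss(draft_probs, target_ids, align_mask):
    """Mean cross-entropy over draft positions, skipping masked tokens.

    draft_probs: per-position dicts {token_id: probability} from the draft model
    target_ids:  target-model token id at each position
    align_mask:  per-position bools; False marks a token judged "highly
                 misaligned" (stand-in for GRIFFIN's alignment test)
    """
    total, kept = 0.0, 0
    for probs, tgt, keep in zip(draft_probs, target_ids, align_mask):
        if not keep:
            continue  # loss masking: this token contributes no gradient
        total += -math.log(max(probs.get(tgt, 0.0), 1e-12))
        kept += 1
    return total / max(kept, 1)
```

For example, with two positions where only the first is kept, the loss reduces to the cross-entropy of that single position.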
Authors (6)
Shijing Hu
Jingyang Li
Xingyu Xie
Zhihui Lu
Kim-Chuan Toh
Pan Zhou
Submitted
February 16, 2025
arXiv Category
cs.CL

Key Contributions

Proposes GRIFFIN, a novel framework for faster speculative decoding in LLMs by addressing token misalignment. It introduces a token-alignable training strategy with loss masking and a token-alignable draft model, significantly improving acceptance length and speedup ratio compared to existing methods.
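The headline metric here, acceptance length, is the average number of tokens produced per verification step of speculative decoding: the draft tokens the target model accepts, plus the one token the target model emits itself. Conventions differ across papers on whether that bonus token is counted; the minimal sketch below assumes it is.

```python
def average_acceptance_length(accepted_counts):
    """Average tokens generated per verification step.

    accepted_counts[i] = number of draft tokens the target model accepted
    in step i; each step also yields one token from the target model itself
    (counting that +1 is an assumption, conventions vary).
    """
    steps = len(accepted_counts)
    if steps == 0:
        return 0.0
    return sum(n + 1 for n in accepted_counts) / steps
```

A longer acceptance length means fewer (expensive) target-model verification passes per generated token, which is why an 8% gain in this metric translates into a wall-clock speedup.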

Business Value

Significantly reduces the computational cost and latency of LLM inference, making large models more practical and cost-effective for real-time applications and large-scale deployments.