Taming the Fragility of KV Cache Eviction in LLM Inference

📄 Abstract

Large language models have revolutionized natural language processing, yet their deployment remains hampered by the substantial memory and runtime overhead of the transformer's Key-Value (KV) cache. To mitigate this, recent methods employ a scoring-aggregation framework that evicts unimportant cache entries, relying on the stability assumption: that a fixed subset of entries remains consistently important during generation. However, prior work has largely focused on refining importance indicators for scoring, while defaulting to mean aggregation out of trust in the stability assumption. In this work, we argue that this underlying assumption is inherently fragile, making mean aggregation highly vulnerable in extreme cases. To counter this, we propose a simple yet elegant defensive aggregation strategy: a two-step, linear-time approach that controls worst-case risk, thereby defending against extreme cases with negligible computational overhead. Embodying this strategy, we propose a novel cache eviction method, DefensiveKV, and its extension, Layer-DefensiveKV, which incorporates layer-wise budget allocation. Across seven task domains (18 datasets), our methods reduce generation quality loss by 2.3x and 4.3x, respectively, versus the strongest baseline under a 20% cache size. These results set new performance benchmarks and pioneer a promising direction for optimizing cache eviction against underlying fragility through worst-case risk management. Our code is available at https://github.com/FFY0/DefensiveKV.
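
To make the scoring-aggregation framing concrete, the sketch below contrasts plain mean aggregation with a worst-case-aware alternative. It is an illustration of the general idea only, not the paper's exact rule: the function names, the mean/max blend, and the toy scores are assumptions introduced here.

```python
import numpy as np

def mean_aggregate(step_scores: np.ndarray) -> np.ndarray:
    # Standard aggregation: average each cache entry's per-step importance
    # (e.g., attention mass); shape (num_steps, num_entries) -> (num_entries,).
    return step_scores.mean(axis=0)

def defensive_aggregate(step_scores: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Hypothetical worst-case-aware aggregation (NOT the paper's exact rule):
    # blend the mean with each entry's peak importance so an entry that was
    # critical at even one step cannot be averaged away. Still linear time.
    return (1 - alpha) * step_scores.mean(axis=0) + alpha * step_scores.max(axis=0)

def keep_indices(step_scores: np.ndarray, budget: int, aggregate) -> np.ndarray:
    # Keep the `budget` highest-scoring entries; the rest would be evicted.
    scores = aggregate(step_scores)
    return np.sort(np.argsort(scores)[-budget:])

# Toy example: entry 0 is unimportant on average but spikes at one step.
scores = np.array([[0.05, 0.55, 0.40],
                   [0.05, 0.55, 0.40],
                   [0.70, 0.20, 0.10]])
print(keep_indices(scores, budget=2, aggregate=mean_aggregate))       # [1 2] -> entry 0 evicted
print(keep_indices(scores, budget=2, aggregate=defensive_aggregate))  # [0 1] -> spiky entry kept
```

Under mean aggregation the occasionally critical entry is the first to go; a worst-case-aware score retains it, which is the kind of risk control the abstract describes.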

Key Contributions

This paper addresses the fragility of KV cache eviction in LLM inference: existing strategies rest on a stability assumption that can break down, leaving them vulnerable in extreme cases. The authors propose a simple, efficient defensive aggregation strategy that controls worst-case risk with negligible computational overhead, improving the robustness and efficiency of LLM deployment.
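
The abstract also notes that Layer-DefensiveKV adds layer-wise budget allocation. The sketch below shows one plausible way such an allocation could look, splitting a total cache budget across layers in proportion to a per-layer sensitivity signal; the allocation rule, the sensitivity values, and the per-layer floor are assumptions for illustration, not the paper's method.

```python
import numpy as np

def allocate_layer_budgets(total_budget: int,
                           layer_sensitivity: np.ndarray,
                           min_per_layer: int = 16) -> np.ndarray:
    # Hypothetical layer-wise split (not necessarily the paper's rule):
    # give every layer a small floor, then distribute the remainder in
    # proportion to a per-layer sensitivity signal (e.g., how much quality
    # drops when that layer's cache is pruned).
    num_layers = layer_sensitivity.shape[0]
    floor = min_per_layer * num_layers
    assert total_budget >= floor, "total budget smaller than the per-layer floor"
    weights = layer_sensitivity / layer_sensitivity.sum()
    budgets = min_per_layer + np.floor(weights * (total_budget - floor)).astype(int)
    budgets[np.argmax(weights)] += total_budget - budgets.sum()  # assign rounding leftovers
    return budgets

# Example: 4 layers, with layer 2 three times as sensitive as the others.
print(allocate_layer_budgets(1024, np.array([1.0, 1.0, 3.0, 1.0])))  # [176 176 496 176]
```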

Business Value

Optimizing LLM inference is crucial for reducing operational costs and improving response times, making LLM applications more scalable and accessible for businesses.