
FlashEVA: Accelerating LLM inference via Efficient Attention

Abstract

Transformer models have revolutionized natural language processing, achieving state-of-the-art performance and demonstrating remarkable scalability. However, their memory demands, particularly due to maintaining full context in memory, pose significant challenges for inference. In this paper, we present FlashEVA, an efficient implementation of EVA (Efficient Attention via Control Variates), and demonstrate how to finetune transformers to adapt to FlashEVA attention. Our method enables fine-tuning of Transformer models with as few as 1.5B tokens while preserving effectiveness across various downstream tasks. Notably, FlashEVA achieves up to 6.7x higher throughput and 5x lower peak GPU memory usage during inference compared to standard Transformer implementations. Despite these improvements, we observe limitations in retrieval-focused tasks. Our implementation offers control over the trade-off between throughput and accuracy through adjustable hyperparameters, providing flexibility for diverse use cases. This work represents a significant step towards more efficient and adaptable Transformer-based models for inference.
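The control-variate estimator behind EVA builds on random-feature approximations of softmax attention. As a rough illustration of that family (not FlashEVA's actual kernel), the sketch below implements Performer-style positive random features (FAVOR+), which replace the O(n²) attention matrix with an O(n·m) feature map; the `num_features` parameter and all shapes are illustrative assumptions, not the paper's hyperparameters.

```python
# Minimal sketch of random-feature linear attention (Performer-style
# FAVOR+), the approximation family that EVA refines with control
# variates. Illustrative only -- not the FlashEVA kernel; `num_features`
# is an assumed knob standing in for the paper's hyperparameters.
import torch

def softmax_kernel_features(x, proj, eps=1e-6):
    # Positive features phi(x) = exp(w.x - |x|^2/2) / sqrt(m) give a
    # numerically stable Monte Carlo estimate of exp(q.k / sqrt(d)).
    x = x / x.shape[-1] ** 0.25              # fold in 1/sqrt(d) scaling
    wx = x @ proj.T                          # (batch, seq, m)
    sq = (x ** 2).sum(-1, keepdim=True) / 2  # |x|^2 / 2
    return torch.exp(wx - sq) / proj.shape[0] ** 0.5 + eps

def linear_attention(q, k, v, num_features=256):
    # Non-causal attention estimate in O(n*m*d) time with O(m*d) state,
    # versus O(n^2*d) time and an O(n*d) KV cache for exact softmax.
    proj = torch.randn(num_features, q.shape[-1], device=q.device)
    q_f = softmax_kernel_features(q, proj)                 # (b, n, m)
    k_f = softmax_kernel_features(k, proj)                 # (b, n, m)
    kv = torch.einsum('bnm,bnd->bmd', k_f, v)              # keys folded once
    z = (q_f @ k_f.sum(1).unsqueeze(-1)).clamp_min(1e-6)   # normalizer
    return torch.einsum('bnm,bmd->bnd', q_f, kv) / z
```

Per the abstract, EVA's contribution is to reduce the variance of this kind of estimator via control variates, and FlashEVA's is to implement it efficiently and show that pretrained transformers can be fine-tuned on as few as 1.5B tokens to tolerate the approximation.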
Authors: Juan Gabriel Kostelec, Qinghai Guo
Submitted: November 1, 2025
arXiv Category: cs.CL

Key Contributions

FlashEVA presents an efficient implementation of EVA attention, enabling transformer models to achieve up to 6.7x higher throughput and 5x lower peak GPU memory usage during inference. It also demonstrates effective fine-tuning with as few as 1.5B tokens while preserving performance across downstream tasks, though with limitations on retrieval-focused tasks.
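As a hypothetical illustration of the throughput/accuracy trade-off mentioned above, using the sketch from the abstract section: growing the assumed `num_features` knob drives the random-feature estimate toward exact softmax attention at the cost of extra compute. FlashEVA's real hyperparameters may differ.

```python
# Hypothetical demo of the accuracy/compute trade-off: more random
# features -> estimate closer to exact softmax attention, more FLOPs.
q, k, v = (torch.randn(1, 128, 64) for _ in range(3))
exact = torch.softmax(q @ k.transpose(-1, -2) / 64 ** 0.5, dim=-1) @ v
for m in (32, 128, 512):
    approx = linear_attention(q, k, v, num_features=m)
    # Mean absolute error typically shrinks as m grows (Monte Carlo).
    print(m, (approx - exact).abs().mean().item())
```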

Business Value

Makes large transformer models more cost-effective and practical for deployment by reducing hardware requirements and increasing processing speed, enabling wider adoption in real-time applications.