zFLoRA: Zero-Latency Fused Low-Rank Adapters

📄 Abstract

Large language models (LLMs) are increasingly deployed with task-specific adapters catering to multiple downstream applications. In such a scenario, the additional compute associated with these seemingly insignificant adapter parameters (typically less than 1% of the base model) turns out to be disproportionately significant at inference time (up to 2.5x that of the base model). In this paper, we propose a new zero-latency fused low-rank adapter (zFLoRA) that introduces zero or negligible latency overhead on top of the base model. Experimental results on LLMs of size 1B, 3B and 7B show that zFLoRA compares favorably against popular supervised fine-tuning benchmarks, including low-rank adapters (LoRA) as well as full fine-tuning (FFT). Experiments are conducted on 18 different tasks across three categories, namely commonsense reasoning, math reasoning and summary-dialogue. Latency measurements on NPU (Samsung Galaxy S25+) as well as GPU (NVIDIA H100) platforms show that the proposed zFLoRA adapters introduce zero to negligible latency overhead.
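To make the latency point concrete, the sketch below contrasts a standard unfused LoRA layer, which adds two extra matmuls per forward call, with the classic weight-merging trick that folds the adapter update into the base weight. This is only an illustration of where adapter overhead comes from: class names, shapes, and the merge step are our assumptions, and it does not reproduce the paper's zFLoRA fusion scheme.

```python
# Minimal sketch (assumptions throughout, not the paper's implementation):
# shows why an unfused low-rank adapter adds inference latency (two extra
# matmuls per call) and how merging removes it for a single adapter.
# zFLoRA's actual fusion scheme is not reproduced here.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Unfused LoRA: y = x W^T + scale * (x A^T) B^T."""
    def __init__(self, in_features, out_features, rank=16, scale=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.lora_a = nn.Linear(in_features, rank, bias=False)   # down-projection A
        self.lora_b = nn.Linear(rank, out_features, bias=False)  # up-projection B
        self.scale = scale

    def forward(self, x):
        # The second term is the per-token adapter overhead at inference time.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

    def merge(self):
        """Fold B @ A into the base weight so inference runs a single matmul."""
        fused = nn.Linear(self.base.in_features, self.base.out_features, bias=False)
        with torch.no_grad():
            delta = self.lora_b.weight @ self.lora_a.weight            # (out, in)
            fused.weight.copy_(self.base.weight + self.scale * delta)
        return fused

layer = LoRALinear(1024, 1024, rank=16)
x = torch.randn(2, 8, 1024)
fused = layer.merge()
# Fused and unfused paths agree up to floating-point error.
assert torch.allclose(layer(x), fused(x), atol=1e-4)
```

Merging works when a single adapter can be baked into the weights ahead of time; with many task-specific adapters sharing one base model, the adapter update must be applied at run time, which is where the abstract reports up to 2.5x overhead and where zFLoRA's fused design is aimed.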
Authors (4)
Dhananjaya Gowda
Seoha Song
Harshith Goka
Junhyun Lee
Submitted
October 28, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper introduces zFLoRA, a zero-latency fused low-rank adapter technique that reduces or eliminates the inference latency overhead typically associated with task-specific adapters in LLMs. Experiments on 1B, 3B and 7B models across 18 tasks show that zFLoRA matches the accuracy of LoRA and full fine-tuning (FFT) while adding zero to negligible latency on both NPU and GPU platforms.

Business Value

Enables faster and more cost-effective deployment of LLMs for real-time applications, especially on edge devices, improving user experience and reducing operational costs.