Abstract
Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, offer compact
and effective alternatives to full model fine-tuning by introducing low-rank
updates to pre-trained weights. However, most existing approaches rely on
global low-rank structures, which can overlook spatial patterns spread across
the parameter space. In this work, we propose Localized LoRA, a generalized
framework that models weight updates as a composition of low-rank matrices
applied to structured blocks of the weight matrix. This formulation enables
dense, localized updates throughout the parameter space without increasing the
total number of trainable parameters. We provide a formal comparison between
global, diagonal-local, and fully localized low-rank approximations, and show
that our method consistently achieves lower approximation error under matched
parameter budgets. Experiments on both synthetic and practical settings
demonstrate that Localized LoRA offers a more expressive and adaptable
alternative to existing methods, enabling efficient fine-tuning with improved
performance.
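
The block-wise formulation described above can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: it assumes the weight matrix is partitioned into a uniform k × k grid, gives each block its own low-rank factors, and checks that choosing the per-block rank as r/k matches the trainable-parameter budget of a single global rank-r update. The actual partitioning scheme and rank allocation in the paper may differ.

```python
import numpy as np

def global_lora_update(m, n, r, rng):
    """Standard LoRA: one rank-r update B @ A shared across the whole matrix."""
    B = rng.standard_normal((m, r))
    A = rng.standard_normal((r, n))
    return B @ A, B.size + A.size  # update and trainable-parameter count

def localized_lora_update(m, n, k, r_block, rng):
    """Localized variant (sketch): partition the m x n matrix into a k x k grid
    and give each block its own rank-r_block factors, yielding a dense update."""
    assert m % k == 0 and n % k == 0, "blocks must tile the matrix evenly"
    bm, bn = m // k, n // k
    delta = np.zeros((m, n))
    params = 0
    for i in range(k):
        for j in range(k):
            B = rng.standard_normal((bm, r_block))
            A = rng.standard_normal((r_block, bn))
            delta[i*bm:(i+1)*bm, j*bn:(j+1)*bn] = B @ A
            params += B.size + A.size
    return delta, params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, r, k = 64, 64, 8, 4

    _, global_params = global_lora_update(m, n, r, rng)
    # per-block rank r // k keeps the total budget equal to the global case:
    # k^2 * (r/k) * (m/k + n/k) = r * (m + n)
    _, local_params = localized_lora_update(m, n, k, r // k, rng)

    print(f"global rank-{r} parameters:     {global_params}")
    print(f"block-wise rank-{r // k} parameters: {local_params}")
```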