1+1>2: A Synergistic Sparse and Low-Rank Compression Method for Large Language Models

📄 Abstract

Abstract: Large Language Models (LLMs) have demonstrated remarkable proficiency in language comprehension and generation; however, their widespread adoption is constrained by substantial bandwidth and computational demands. While pruning and low-rank approximation have each demonstrated promising performance individually, their synergy for LLMs remains underexplored. We introduce Synergistic Sparse and Low-Rank Compression (SSLC) for LLMs, which leverages the strengths of both techniques: low-rank approximation compresses the model by retaining its essential structure with minimal information loss, while sparse optimization eliminates non-essential weights, preserving those crucial for generalization. Based on theoretical analysis, we first formulate low-rank approximation and sparse optimization as a unified problem and solve it with an iterative optimization algorithm. Experiments on LLaMA and Qwen2.5 models (7B–70B) show that SSLC, without any additional training steps, consistently surpasses standalone methods, achieving state-of-the-art results. Notably, SSLC compresses Qwen2.5 by 50% with no performance drop and achieves at least a 1.63× speedup, offering a practical solution for efficient LLM deployment.
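The abstract does not spell out the unified objective. A minimal sketch of one standard way to pose a joint low-rank and sparse decomposition, assuming a robust-PCA-style split (the symbols W, L, S, r, and k below are illustrative, not taken from the paper):

```latex
\min_{L,\,S}\ \lVert W - (L + S) \rVert_F^2
\quad \text{s.t.}\quad \operatorname{rank}(L) \le r,\ \ \lVert S \rVert_0 \le k
```

Here W is a layer's weight matrix, L its low-rank component, and S a sparse residual; alternating between the two constrained subproblems gives a natural iterative solver, as in the code sketch after Key Contributions below.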
Authors (7)
Zeliang Zong
Kai Zhang
Zheyang Li
Wenming Tan
Ye Ren
Yiyan Zhai
+1 more
Submitted
October 30, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper introduces Synergistic Sparse and Low-Rank Compression (SSLC) for LLMs, which casts low-rank approximation and sparse optimization as a single unified problem and solves it iteratively. Without any additional training steps, the approach substantially reduces model size and computational demands while retaining performance.
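As a concrete illustration of how such an alternating scheme might look, here is a minimal NumPy sketch: the low-rank step truncates an SVD of the sparse-corrected target, and the sparse step keeps the largest-magnitude entries of the residual (a GoDec-style alternation). The function name, parameters, and update rules are assumptions for illustration, not the paper's published algorithm.

```python
import numpy as np

def sslc_sketch(W, rank, sparsity, n_iter=20):
    """Hypothetical alternating low-rank + sparse split, W ~ L + S.

    GoDec-style alternation for illustration only; the paper's exact
    iterative optimization algorithm may differ.
    """
    S = np.zeros_like(W)
    k = max(1, int(sparsity * W.size))  # number of nonzeros kept in S
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of W - S (Eckart-Young).
        U, sigma, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank, :]
        # Sparse step: keep the k largest-magnitude entries of the residual.
        R = W - L
        thresh = np.partition(np.abs(R).ravel(), R.size - k)[R.size - k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

# Example: decompose a random 512x512 "weight" matrix.
W = np.random.randn(512, 512)
L, S = sslc_sketch(W, rank=32, sparsity=0.05)
print(np.linalg.norm(W - (L + S)) / np.linalg.norm(W))  # relative error
```

Storing L as two rank-r factors plus the sparse S is where the compression comes from: for a d×d matrix, roughly 2dr + k numbers instead of d².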

Business Value

Enables the deployment of LLMs on resource-constrained devices and reduces operational costs for large-scale AI deployments.