
Efficient Dynamic Structured Sparse Training with Learned Shuffles

📄 Abstract

Structured sparsity accelerates training and inference on modern GPUs, yet it still trails unstructured dynamic sparse training (DST) in accuracy. The shortfall stems from a loss of expressivity: whereas a dense layer can realize every possible mask obtained by choosing any $w$ active weights out of $n$, a fixed block or N:M layout explores only a subset of those possibilities. We propose to close this gap by learning, for each layer, a single permutation matrix jointly with the structured weight matrix. Applied to three canonical structures (block, N:M, and diagonal), we show that permutation-augmented DST (PA-DST) matches unstructured baselines (RigL, SET) at 90–95% sparsity on ImageNet-1K (ViT-B/16) and WikiText-103 (GPT-2), yet trains up to $1.21\times$ and infers up to $2.9\times$ faster. The results position structure + learned permutation as a sweet spot between accuracy and efficiency.
Authors (6)
Abhishek Tyagi
Arjun Iyer
Liam Young
William H Renninger
Christopher Kanan
Yuhao Zhu
Submitted
October 16, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper proposes Permutation-Augmented Dynamic Sparse Training (PA-DST), which learns permutation matrices alongside structured weight matrices for each layer. This approach closes the expressivity gap between structured and unstructured sparsity, enabling structured sparse models to match the accuracy of unstructured ones while offering significant speedups in training and inference.
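The core idea can be sketched in a few lines: an N:M mask constrains which weights may be active within each group, and a per-layer permutation of the input features lets that fixed pattern cover connectivity it otherwise could not. The sketch below is a minimal NumPy illustration, not the authors' implementation; the function names (`nm_mask`, `pa_dst_forward`) and the use of a random permutation as a stand-in for the *learned* one are assumptions for illustration.

```python
import numpy as np

def nm_mask(W, n=2, m=4):
    """Keep the n largest-magnitude weights in every group of m
    along each row (N:M structured sparsity, e.g. 2:4)."""
    rows, cols = W.shape
    assert cols % m == 0
    groups = np.abs(W).reshape(rows, cols // m, m)
    # indices of the (m - n) smallest-magnitude entries per group -> zeroed
    drop = np.argsort(groups, axis=-1)[..., : m - n]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=-1)
    return mask.reshape(rows, cols)

def pa_dst_forward(x, W, perm, n=2, m=4):
    """Forward pass of a permutation-augmented N:M sparse layer:
    permute the input features, then apply the masked weights.
    The permutation lets a fixed N:M layout realize masks that
    the unpermuted layout could not express."""
    x_perm = x[perm]                    # apply permutation P to inputs
    return (W * nm_mask(W, n, m)) @ x_perm

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
perm = rng.permutation(8)               # stand-in for a learned permutation
y = pa_dst_forward(x, W, perm)
```

In the paper the permutation is learned jointly with the weights during training; here it is fixed only to show the data flow. At 2:4 sparsity the mask zeroes exactly half of each row, which is what maps onto hardware-accelerated structured-sparse kernels.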

Business Value

Enables deployment of larger, more accurate models on resource-constrained devices by significantly improving training and inference efficiency without sacrificing accuracy.