
MossNet: Mixture of State-Space Experts is a Multi-Head Attention

📄 Abstract

Large language models (LLMs) have significantly advanced generative applications in natural language processing (NLP). Recent trends in model architectures revolve around efficient variants of transformers or state-space/gated-recurrent models (SSMs, GRMs). However, prevailing SSM/GRM-based methods often emulate only a single attention head, potentially limiting their expressiveness. In this work, we propose MossNet, a novel mixture-of-state-space-experts architecture that emulates linear multi-head attention (MHA). MossNet leverages a mixture-of-experts (MoE) implementation not only in the channel-mixing multi-layer perceptron (MLP) blocks but also in the time-mixing SSM kernels to realize multiple "attention heads." Extensive experiments on language modeling and downstream evaluations show that MossNet outperforms both transformer- and SSM-based architectures at similar model sizes and data budgets. Larger variants of MossNet, trained on trillions of tokens, further confirm its scalability and superior performance. In addition, real-device profiling on a Samsung Galaxy S24 Ultra and an Nvidia A100 GPU demonstrates favorable runtime speed and resource usage compared to similarly sized baselines. Our results suggest that MossNet is a compelling new direction for efficient, high-performing recurrent LLM architectures.
Authors (8)
Shikhar Tuli
James Seale Smith
Haris Jeelani
Chi-Heng Lin
Abhishek Patel
Vasili Ramanishka
+2 more
Submitted
October 30, 2025
arXiv Category
cs.CL

Key Contributions

The authors propose MossNet, a novel mixture-of-state-space-experts architecture that emulates linear multi-head attention. By applying MoE not only to the channel-mixing MLP blocks but also to the time-mixing SSM kernels, it realizes multiple "attention heads" and outperforms comparable transformer- and SSM-based architectures on language modeling and downstream tasks. A minimal sketch of this idea follows below.
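
To make the core idea concrete, here is a minimal, hedged sketch (not the authors' implementation) of how a mixture of state-space experts can play the role of attention heads: each expert is a small diagonal linear recurrence acting as one "head," and a learned router softly mixes their outputs per token. The module and parameter names (SSMExpert, MixtureOfSSMExperts, d_state, num_experts) and the soft-routing scheme are illustrative assumptions; the paper's actual kernel parameterization and routing are not reproduced here.

```python
# Minimal sketch only: hyperparameters, routing, and the SSM parameterization
# below are assumptions for illustration, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SSMExpert(nn.Module):
    """One 'head': a diagonal linear recurrence h_t = a * h_{t-1} + B x_t, y_t = C h_t."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(d_state) * 0.1 - 1.0)  # per-state decay
        self.B = nn.Linear(d_model, d_state, bias=False)             # input projection
        self.C = nn.Linear(d_state, d_model, bias=False)             # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        a = torch.sigmoid(self.log_a)                     # keep the recurrence stable
        u = self.B(x)                                     # (batch, seq, d_state)
        h = torch.zeros(x.size(0), u.size(-1), device=x.device, dtype=x.dtype)
        outs = []
        for t in range(x.size(1)):                        # sequential scan for clarity
            h = a * h + u[:, t]
            outs.append(self.C(h))
        return torch.stack(outs, dim=1)                   # (batch, seq, d_model)


class MixtureOfSSMExperts(nn.Module):
    """Time-mixing block: a router softly combines several SSM experts ('heads')."""

    def __init__(self, d_model: int, num_experts: int = 4, d_state: int = 16):
        super().__init__()
        self.experts = nn.ModuleList(SSMExpert(d_model, d_state) for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts)     # per-token gating weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)                        # (b, s, E)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (b, s, d, E)
        return (expert_outs * gates.unsqueeze(2)).sum(-1)                # gated head mix


if __name__ == "__main__":
    block = MixtureOfSSMExperts(d_model=64, num_experts=4)
    y = block(torch.randn(2, 32, 64))
    print(y.shape)  # torch.Size([2, 32, 64])
```

The sequential scan above is written for readability; an efficient implementation would use a parallel scan or convolutional form, and a production MoE would typically use sparse top-k routing rather than the dense softmax gate shown here.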

Business Value

Developing more efficient and performant LLM architectures can lead to reduced computational costs for training and inference, enabling wider adoption of advanced NLP capabilities across various industries.