Abstract
The scaling of large language models is increasingly limited by the constrained memory capacity of modern GPUs. Mixture-of-Experts (MoE) architectures mitigate this by activating only a small subset of parameters during inference, significantly lowering both memory demand and computational overhead. However, conventional MoE inference approaches, which select active experts independently at each layer, often incur considerable latency from frequent parameter transfers between host and GPU memory. In addition, existing cross-layer prediction strategies typically rely on fixed lookahead steps and therefore adapt poorly across hardware platforms and workloads, reducing their robustness and effectiveness.

To address these challenges, we present ExpertFlow, a runtime system for MoE inference that combines adaptive expert prefetching with cache-aware routing. ExpertFlow continuously adjusts its prediction horizon for expert activation using runtime statistics such as transfer bandwidth, parameter dimensionality, and model feedback signals. It further incorporates a hybrid cross-layer prediction scheme that fuses pre-gating information with intermediate computational states to anticipate future expert needs. By adaptively refining prefetching decisions and aligning them with observed expert usage, ExpertFlow reduces cache misses and eliminates the latency caused by expert swap-ins. Our evaluation shows that ExpertFlow reduces model stall time to less than 0.1% of the baseline, demonstrating its ability to optimize MoE inference under stringent memory constraints.
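As a rough illustration of the adaptive-prefetching idea described in the abstract, the sketch below derives a lookahead horizon from measured transfer bandwidth and expert size, and keeps a small LRU cache of GPU-resident experts. This is not the authors' implementation: the class name ExpertPrefetcher, the assumed units (MB, GB/s, ms), the per-layer compute estimate, and the LRU policy are illustrative assumptions, and the real system would also fold in the model feedback signals the paper mentions.

from collections import OrderedDict

class ExpertPrefetcher:
    """Toy prefetcher: LRU cache of GPU-resident experts plus an adaptive lookahead."""

    def __init__(self, cache_capacity: int, layer_compute_ms: float):
        self.cache = OrderedDict()                 # keys: (layer_idx, expert_idx)
        self.capacity = cache_capacity
        self.layer_compute_ms = layer_compute_ms   # average per-layer compute time

    def horizon(self, expert_size_mb: float, bandwidth_gb_per_s: float) -> int:
        # Host-to-GPU copy time for one expert: MB / (GB/s) gives milliseconds.
        transfer_ms = expert_size_mb / bandwidth_gb_per_s
        # Look far enough ahead that the copy finishes before its layer runs.
        return max(1, int(transfer_ms / self.layer_compute_ms) + 1)

    def prefetch(self, predicted_experts):
        # Issue copies for predicted experts that are not already resident.
        for key in predicted_experts:
            if key in self.cache:
                self.cache.move_to_end(key)        # cache hit: refresh LRU position
                continue
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)     # evict the least-recently-used expert
            self.cache[key] = "resident"           # stand-in for an async host-to-GPU copy

# Example: 300 MB experts over a ~12 GB/s effective PCIe link with 5 ms per layer
# suggest prefetching about 6 layers ahead rather than using a fixed step.
pf = ExpertPrefetcher(cache_capacity=8, layer_compute_ms=5.0)
print(pf.horizon(expert_size_mb=300.0, bandwidth_gb_per_s=12.0))   # -> 6
pf.prefetch([(4, 17), (4, 3), (5, 17)])

In this sketch the horizon grows when the link is slow or experts are large, and shrinks when transfers are cheap, which is the adaptivity the fixed-step baselines lack.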
Authors (6)
Zixu Shen
Kexin Chu
Yifan Zhang
Dawei Xiang
Runxin Wu
Wei Zhang
Submitted
October 30, 2025
Key Contributions
ExpertFlow introduces a runtime system for Mixture-of-Experts (MoE) inference that significantly reduces latency and memory demand by combining adaptive expert prefetching with cache-aware routing. The system dynamically adjusts its prediction horizon for expert activation and optimizes parameter transfers between host and GPU memory, overcoming the rigidity of fixed-step cross-layer prediction strategies.
Business Value
Enables the deployment of larger and more powerful LLMs on existing hardware, reducing operational costs and making advanced AI capabilities more accessible for businesses.