
Optimizing Attention on GPUs by Exploiting GPU Architectural NUMA Effects

Abstract

The rise of disaggregated AI GPUs has exposed a critical bottleneck in large-scale attention workloads: non-uniform memory access (NUMA). As multi-chiplet designs become the norm for scaling compute capabilities, memory latency and bandwidth vary sharply across compute regions, undermining traditional GPU kernel scheduling strategies that assume uniform memory access. We identify how these NUMA effects distort locality in multi-head attention (MHA) and present Swizzled Head-first Mapping, a spatially-aware scheduling strategy that aligns attention heads with GPU NUMA domains to exploit intra-chiplet cache reuse. On AMD's MI300X architecture, our method achieves up to 50% higher performance than state-of-the-art attention algorithms that use conventional scheduling, and sustains L2 cache hit rates of 80-97%. These results demonstrate that NUMA-aware scheduling is now fundamental to achieving full efficiency on next-generation disaggregated GPUs, offering a path forward for scalable AI training and inference.
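To make the locality problem concrete, the following is a minimal Python sketch of why conventional scheduling scatters a head's working set. It assumes the publicly documented MI300X behavior of dispatching workgroups round-robin across its 8 XCDs; the head-minor grid layout shown is a common convention and is illustrative, not necessarily the exact baseline the paper measures.

NUM_XCDS = 8  # MI300X comprises 8 accelerator complex dies (XCDs)

def conventional_placement(num_heads, blocks_per_head):
    """Head-minor grid: workgroup i covers the (head, query-block) pair
    (i // blocks_per_head, i % blocks_per_head). With round-robin
    dispatch, consecutive IDs land on different XCDs, so one head's
    K/V working set is spread across every chiplet's L2."""
    placement = {}
    for wg in range(num_heads * blocks_per_head):
        head = wg // blocks_per_head
        xcd = wg % NUM_XCDS  # hardware round-robin dispatch
        placement.setdefault(head, set()).add(xcd)
    return placement

# Head 0's four query blocks end up on four different XCDs:
# {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}, ...}
print(conventional_placement(num_heads=8, blocks_per_head=4))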

Key Contributions

This paper addresses the critical bottleneck of non-uniform memory access (NUMA) in disaggregated AI GPUs for large-scale attention workloads. It introduces Swizzled Head-first Mapping, a spatially-aware scheduling strategy that aligns attention heads with GPU NUMA domains to exploit intra-chiplet cache reuse, achieving up to 50% higher performance on AMD's MI300X; a sketch of the mapping idea follows below.
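Under the same round-robin dispatch assumption, here is a hedged sketch of what a head-first swizzle could look like: the flat workgroup ID is reinterpreted so that every query block of a given head executes on the same XCD, keeping that head's K/V tiles resident in a single chiplet's L2. The formula below is illustrative; the paper's actual mapping may differ.

NUM_XCDS = 8  # MI300X: 8 XCDs, each with its own L2 slice

def swizzled_head_first(wg, num_heads, blocks_per_head):
    """Reinterpret the flat workgroup ID so all tiles of a given head
    run on one XCD (round-robin dispatch assumed: wg % NUM_XCDS
    selects the chiplet the workgroup lands on)."""
    xcd = wg % NUM_XCDS    # chiplet this workgroup runs on
    slot = wg // NUM_XCDS  # its position within that chiplet
    heads_per_xcd = (num_heads + NUM_XCDS - 1) // NUM_XCDS
    head = xcd * heads_per_xcd + slot // blocks_per_head
    block = slot % blocks_per_head
    return head, block

# All four query blocks of head 0 now run on XCD 0,
# all blocks of head 1 on XCD 1, and so on.
for wg in range(8 * 4):
    head, block = swizzled_head_first(wg, num_heads=8, blocks_per_head=4)
    assert wg % NUM_XCDS == head  # with 8 heads, head index == XCD index

Pinning a head to one chiplet matters because every query block of that head reads the same K/V tensors; if those reads all issue from one XCD, its L2 can capture the reuse instead of each chiplet refetching the data over the slower inter-chiplet fabric, which is consistent with the 80-97% L2 hit rates the paper reports.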

Business Value

Significantly improves the efficiency and performance of AI hardware, enabling faster training and inference for large models, and reducing operational costs for AI deployments.