📄 Abstract
Large Language Models (LLMs) are powerful but often too slow and costly for real-world use during inference. Looped transformers save on parameters by reusing the same weights for multiple computational steps, or "loops." However, this approach has a major flaw: the loops run one after another, so inference latency and memory requirements grow with each added loop, making looped models impractical for latency-sensitive applications. To solve this problem, we introduce the Parallel Loop Transformer (PLT), a new architecture that delivers the performance benefits of a deep, looped model with the low latency of a standard, non-looped model. PLT works using two key techniques. First, Cross-Loop Parallelism (CLP) breaks the sequential dependency by computing different loops for different tokens at the same time, all within a single pass. Second, to prevent memory costs from growing, an Efficient Representation Enhancement strategy shares the memory (KV cache) from the first loop with all other loops, then uses Gated Sliding-Window Attention (G-SWA) to combine this shared global information with local information, maintaining high accuracy. Our experiments show that PLT achieves the high accuracy of a traditional looped model with almost no extra latency or memory cost compared to a standard transformer.
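The abstract does not spell out how Cross-Loop Parallelism is scheduled, but one plausible reading is a pipelined schedule in which loop l for the newest token consumes loop l-1's output carried over from an earlier decoding step, so every loop can be evaluated in one batched pass of the shared weights. The following is a minimal sketch under that assumption; the function name `clp_decode_step`, the shapes, and the schedule are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of Cross-Loop Parallelism (CLP) at decode time.
# Assumption: loop l sees a state carried over from a previous decoding step,
# so all loops run on *different* tokens within a single parallel pass.
import torch
from torch import nn

def clp_decode_step(shared_block: nn.Module,
                    new_token_state: torch.Tensor,
                    pending: list[torch.Tensor]):
    """One decoding step with all loops computed in parallel.

    shared_block:    the block whose weights are reused by every loop
    new_token_state: (batch, d_model) state of the token generated this step
    pending:         n_loops - 1 states carried over from previous steps;
                     pending[l] feeds loop l + 1
    """
    # Loop 0 sees the fresh token; deeper loops see carried-over states.
    loop_inputs = torch.stack([new_token_state] + pending, dim=0)  # (n_loops, batch, d)

    # Single parallel application of the shared weights across all loops.
    loop_outputs = shared_block(loop_inputs)

    # Shift the pipeline: each loop's output becomes the next loop's input
    # at the following decoding step; the deepest loop's output is final.
    finished = loop_outputs[-1]
    new_pending = list(loop_outputs[:-1])
    return finished, new_pending

# Tiny usage example with a stand-in block (a single linear layer).
if __name__ == "__main__":
    d, batch, n_loops = 16, 2, 3
    block = nn.Linear(d, d)
    pending = [torch.zeros(batch, d) for _ in range(n_loops - 1)]
    for _ in range(5):  # five decoding steps
        token = torch.randn(batch, d)
        final_state, pending = clp_decode_step(block, token, pending)
    print(final_state.shape)  # torch.Size([2, 16])
```

Under this schedule, latency per decoding step stays close to that of a single non-looped pass, which is consistent with the latency claim in the abstract.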
Authors (12)
Bohong Wu
Mengzhao Chen
Xiang Luo
Shen Yan
Qifan Yu
Fan Xia
Submitted
October 28, 2025
Key Contributions
The Parallel Loop Transformer (PLT) is a novel architecture designed to overcome the latency and memory limitations of traditional looped transformers. By introducing Cross-Loop Parallelism (CLP) to compute different loops concurrently for different tokens and using Efficient Representation Enhancement, PLT achieves the performance benefits of deep, looped models with the low latency of standard models.
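To make the Efficient Representation Enhancement idea concrete, the sketch below shows one way Gated Sliding-Window Attention (G-SWA) could combine attention over the first loop's shared KV cache with a local sliding-window attention through a learned gate. The module name, the scalar per-token gate, and all shapes are assumptions made for illustration; the paper's actual formulation may differ.

```python
# Hypothetical sketch of Gated Sliding-Window Attention (G-SWA): global
# attention over the KV cache shared from the first loop, mixed with local
# sliding-window attention via a learned sigmoid gate.
import torch
import torch.nn.functional as F
from torch import nn

class GatedSlidingWindowAttention(nn.Module):
    def __init__(self, d_model: int, window: int = 128):
        super().__init__()
        self.window = window
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_proj = nn.Linear(d_model, 2 * d_model)
        self.gate = nn.Linear(d_model, 1)   # per-token scalar gate (assumption)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x, shared_k, shared_v):
        # x:          (batch, seq, d_model) hidden states of the current loop
        # shared_k/v: (batch, seq, d_model) KV cache produced by the first loop
        q = self.q_proj(x)
        k_loc, v_loc = self.kv_proj(x).chunk(2, dim=-1)

        # Global causal attention over the first loop's shared KV cache.
        global_out = self._attend(q, shared_k, shared_v, window=None)
        # Local attention restricted to a causal sliding window.
        local_out = self._attend(q, k_loc, v_loc, window=self.window)

        # Learned gate mixes shared global context with fresh local context.
        g = torch.sigmoid(self.gate(x))
        return self.out_proj(g * global_out + (1 - g) * local_out)

    def _attend(self, q, k, v, window):
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        t = q.size(1)
        idx = torch.arange(t, device=q.device)
        causal = idx[None, :] <= idx[:, None]            # key index <= query index
        if window is not None:
            causal &= idx[:, None] - idx[None, :] < window
        scores = scores.masked_fill(~causal, float("-inf"))
        return F.softmax(scores, dim=-1) @ v
```

In this reading, only the first loop writes a full KV cache while later loops keep just a small sliding-window cache, which is one way the memory footprint could stay nearly flat as loops are added; the gate lets each token decide how much to rely on the shared global cache versus local context.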
Business Value
Enables the deployment of powerful, deep LLMs in latency-sensitive applications like real-time chatbots, interactive assistants, and on-device processing. This significantly expands the practical use cases for large models.