
Encoder-Decoder or Decoder-Only? Revisiting Encoder-Decoder Large Language Model

📄 Abstract

Recent large language model (LLM) research has undergone an architectural shift from encoder-decoder modeling to the now-dominant decoder-only modeling. This rapid transition, however, came without a rigorous comparative analysis, especially from the scaling perspective, raising concerns that the potential of encoder-decoder models may have been overlooked. To fill this gap, we revisit the encoder-decoder LLM (RedLLM), enhancing it with recent recipes from decoder-only LLMs (DecLLM). We conduct a comprehensive comparison between RedLLM, pretrained with prefix language modeling (LM), and DecLLM, pretrained with causal LM, at model scales ranging from ~150M to ~8B parameters. Using RedPajama V1 (1.6T tokens) for pretraining and FLAN for instruction tuning, our experiments show that RedLLM produces compelling scaling properties and surprisingly strong performance. While DecLLM is overall more compute-optimal during pretraining, RedLLM demonstrates comparable scaling and context length extrapolation capabilities. After instruction tuning, RedLLM achieves comparable or even better results on various downstream tasks while enjoying substantially better inference efficiency. We hope our findings will inspire further efforts to re-examine RedLLM, unlocking its potential for developing powerful and efficient LLMs.
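The core contrast between the two pretraining objectives is the attention mask over the input segment: causal LM masks all future positions, while prefix LM lets the input (prefix) tokens attend to each other bidirectionally, which an encoder provides natively. Below is a minimal NumPy sketch of the two masks (illustrative only; the function names and the packed-sequence framing are assumptions, not the paper's code):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Causal LM: True where attention is allowed; each token sees only itself and the past."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Prefix LM: the first prefix_len tokens attend bidirectionally; the rest stay causal."""
    mask = causal_mask(seq_len)
    mask[:prefix_len, :prefix_len] = True  # full attention inside the prefix
    return mask

# Toy example: 4 tokens, the first 2 forming the input prefix.
print(causal_mask(4).astype(int))        # lower-triangular: past-only attention
print(prefix_lm_mask(4, 2).astype(int))  # rows 0-1 now see both prefix tokens
```

In RedLLM, the bidirectional half of this mask is realized by the encoder and the causal half by the decoder, whereas DecLLM applies the causal mask throughout.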
Authors (6): Biao Zhang, Yong Cheng, Siamak Shakeri, Xinyi Wang, Min Ma, Orhan Firat
Submitted: October 30, 2025
arXiv Category: cs.CL

Key Contributions

This paper rigorously revisits and enhances encoder-decoder LLMs (RedLLM) with modern techniques, conducting a comprehensive, scale-aware comparison against dominant decoder-only LLMs (DecLLM). It demonstrates that RedLLM exhibits compelling scaling properties and strong performance, suggesting that the potential of encoder-decoder architectures may have been overlooked due to a lack of rigorous comparative analysis.
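The inference-efficiency claim can be made concrete with a back-of-the-envelope cost model (an illustration under stated assumptions, not the paper's analysis): using the standard rough estimate of ~2 FLOPs per parameter per processed token, a decoder-only model runs all N parameters over both the prompt and the output, whereas an encoder-decoder with an even parameter split runs the encoder once over the prompt and only ~N/2 decoder parameters per generated token:

```python
def flops(params: float, tokens: int) -> float:
    """Rough transformer cost: ~2 FLOPs per parameter per processed token."""
    return 2.0 * params * tokens

# Hypothetical 8B-parameter model and sequence lengths (assumed values).
N, prompt_len, out_len = 8e9, 1024, 256

# Decoder-only: all N parameters process the prompt (prefill) and every generated token.
dec_only = flops(N, prompt_len + out_len)

# Encoder-decoder, even split: encoder reads the prompt once, decoder generates the output.
enc_dec = flops(N / 2, prompt_len) + flops(N / 2, out_len)

print(f"decoder-only: {dec_only:.3e} FLOPs, encoder-decoder: {enc_dec:.3e} FLOPs")
```

Under these assumptions the encoder-decoder does roughly half the work at a matched total parameter count; the paper's measured gains will depend on the actual parameter split, attention costs, and KV caching, which this sketch deliberately ignores.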

Business Value

Provides insights into optimal LLM architectures for different computational budgets and performance goals, potentially leading to more efficient development and deployment of LLM-based applications.