
From Compression to Expression: A Layerwise Analysis of In-Context Learning

Abstract

In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks without weight updates by learning from demonstration sequences. While ICL shows strong empirical performance, its internal representational mechanisms are not yet well understood. In this work, we conduct a statistical geometric analysis of ICL representations to investigate how task-specific information is captured across layers. Our analysis reveals an intriguing phenomenon, which we term *Layerwise Compression-Expression*: early layers progressively produce compact and discriminative representations that encode task information from the input demonstrations, while later layers express these representations to incorporate the query and generate the prediction. This phenomenon is observed consistently across diverse tasks and a range of contemporary LLM architectures. We demonstrate that it has important implications for ICL performance -- improving with model size and the number of demonstrations -- and for robustness in the presence of noisy examples. To further understand the effect of the compact task representation, we propose a bias-variance decomposition and provide a theoretical analysis showing how attention mechanisms contribute to reducing both variance and bias, thereby enhancing performance as the number of demonstrations increases. Our findings reveal an intriguing layerwise dynamic in ICL, highlight how structured representations emerge within LLMs, and showcase that analyzing internal representations can facilitate a deeper understanding of model behavior.
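
To make the layerwise analysis concrete, the sketch below computes a Fisher-style separability score (the ratio of between-task to within-task scatter) over per-layer hidden states, one generic way to quantify how compact and discriminative representations are at each depth. The hidden states are synthetic here; in practice they would be extracted from an LLM (e.g., last-token states obtained with `output_hidden_states=True`), and this statistic is illustrative rather than the paper's exact measure.

```python
# Minimal sketch, assuming per-layer representations have already been extracted.
# A Fisher-style between/within scatter ratio serves as a generic "compression"
# (compactness + discriminability) score per layer; it is not the paper's exact statistic.
import numpy as np

def layerwise_separability(hidden, labels):
    """hidden: (num_layers, num_prompts, dim) per-layer states (e.g., last-token);
    labels: (num_prompts,) task ids. Returns one between/within ratio per layer."""
    scores = []
    for layer_states in hidden:                         # (num_prompts, dim)
        overall_mean = layer_states.mean(axis=0)
        between, within = 0.0, 0.0
        for task in np.unique(labels):
            group = layer_states[labels == task]
            mean = group.mean(axis=0)
            between += len(group) * np.sum((mean - overall_mean) ** 2)
            within += np.sum((group - mean) ** 2)
        scores.append(between / (within + 1e-12))
    return np.array(scores)

# Toy usage: 3 tasks, 30 prompts each, 12 "layers", 64-dim states.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 30)
task_dirs = rng.normal(size=(3, 64))                    # one direction per task
hidden = np.stack([
    rng.normal(size=(90, 64)) + sep * np.eye(3)[labels] @ task_dirs
    for sep in np.linspace(0.1, 2.0, 12)                # deeper layers: more task signal
])
print(layerwise_separability(hidden, labels))           # scores rise with depth
```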

Key Contributions

This paper identifies the 'Layerwise Compression-Expression' phenomenon in LLMs during in-context learning (ICL): early layers compress task information from the demonstrations into compact, discriminative representations, while later layers express those representations to incorporate the query and produce the prediction. This offers a new understanding of ICL's internal workings and of why performance improves with model size and the number of demonstrations.
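
The "expression" side can be probed with a logit-lens style readout: project each layer's final-token state through the model's unembedding and observe at which depth the query's answer becomes decodable. The snippet below is a minimal sketch assuming a GPT-2 style causal LM from Hugging Face transformers; the model name, the reuse of the final layer norm, and the toy prompt are illustrative assumptions, not the paper's protocol.

```python
# Logit-lens style probe (illustrative, gpt2-specific): decode each layer's
# final-token state through the unembedding to see where the prediction emerges.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM exposing hidden states works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "great -> positive\nterrible -> negative\nwonderful ->"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

for i, h in enumerate(out.hidden_states):               # embeddings + each block
    last = model.transformer.ln_f(h[:, -1])             # gpt2-specific final layer norm
    logits = model.lm_head(last)                         # project into vocabulary space
    print(i, tok.decode(logits.argmax(-1)))              # top token readable at this depth
```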

Business Value

Understanding how LLMs learn from context enables more efficient and effective use of them across tasks, potentially reducing the need for extensive fine-tuning and improving performance in few-shot scenarios.