
A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning

📄 Abstract

Pre-trained large language models have demonstrated a strong ability to learn from context, known as in-context learning (ICL). Despite a surge of recent applications that leverage this capability, it remains unclear, at least theoretically, how ICL arises and, in particular, what precise roles are played by key factors such as the pre-training procedure and the context construction process. In this work, we propose a new framework for analyzing ICL performance in a class of realistic settings that covers the network architecture, data encoding, data generation, and prompt construction process. As a first step, we construct a simple example with a one-layer transformer and show an interesting result: when the pre-training data distribution differs from the query task distribution, a properly constructed context can shift the output distribution toward the query task distribution in a quantifiable manner, leading to accurate prediction on the query topic. We then extend these findings to a more general setting and derive the precise relationship between ICL performance, context length, and the KL divergence between the pre-training and query task distributions. Finally, we present experiments that validate our theoretical results.
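
The sketch below is a minimal illustration, not the paper's construction: it adopts the common Bayesian-mixture view of ICL, with hypothetical topic distributions (`topic_a`, `topic_b`), a pre-training prior that favors `topic_a`, and a prompt drawn from `topic_b`. All names and parameters are assumptions chosen for illustration. It shows numerically how a longer context shifts the posterior-predictive distribution toward the query task distribution, shrinking the kind of divergence gap the abstract refers to.

```python
# Illustrative sketch (not the paper's setting): ICL viewed as implicit Bayesian
# inference over "topics". Pre-training favors topic A, the query prompt is drawn
# from topic B; longer contexts move the posterior toward topic B.
import numpy as np

rng = np.random.default_rng(0)

vocab_size = 8
topic_a = rng.dirichlet(np.ones(vocab_size))   # hypothetical pre-training topic
topic_b = rng.dirichlet(np.ones(vocab_size))   # hypothetical query task topic
prior = np.array([0.9, 0.1])                   # pre-training prior: mostly topic A

def posterior_after_context(context_tokens):
    """Posterior over (topic A, topic B) given i.i.d. context tokens."""
    log_post = np.log(prior).copy()
    for t in context_tokens:
        log_post[0] += np.log(topic_a[t])
        log_post[1] += np.log(topic_b[t])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

def predictive_kl(context_tokens):
    """KL(topic_b || posterior predictive): remaining gap to the query task."""
    post = posterior_after_context(context_tokens)
    predictive = post[0] * topic_a + post[1] * topic_b
    return float(np.sum(topic_b * np.log(topic_b / predictive)))

for n in [0, 2, 8, 32, 128]:
    context = rng.choice(vocab_size, size=n, p=topic_b)  # prompt drawn from topic B
    print(f"context length {n:4d}  KL to query task = {predictive_kl(context):.4f}")
```

Running the loop typically shows the KL gap shrinking (up to sampling noise) as the context length grows, which is the qualitative behavior the paper sets out to quantify.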
Authors (5): Bingqing Song, Jiaxiang Li, Rong Wang, Songtao Lu, Mingyi Hong
Submitted: October 26, 2025
arXiv Category: cs.AI

Key Contributions

Proposes a new framework for theoretically analyzing in-context learning (ICL) in LLMs, focusing on the roles of pre-training and context construction. Using a one-layer transformer example, it shows how a properly constructed context can shift the model's output distribution toward the query task when the pre-training data distribution differs from it, and it quantifies this effect via the relationship between ICL performance, context length, and the KL divergence between the two distributions.
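
As a rough, hedged illustration of how these three quantities can interact (under an i.i.d. Bayesian-mixture assumption, not the paper's actual setting or theorem): if the prompt consists of n i.i.d. tokens drawn from the query task distribution q, while pre-training places prior mass on a competing distribution p, then the expected log posterior odds in favor of q grow linearly in the context length, at a rate equal to the KL divergence between the two distributions:

$$
\mathbb{E}_{x_{1:n}\sim q}\!\left[\log\frac{\Pr(q\mid x_{1:n})}{\Pr(p\mid x_{1:n})}\right]
= \log\frac{\Pr(q)}{\Pr(p)} + n\,\mathrm{KL}(q\,\|\,p)
$$

This heuristic only motivates why context length and the pre-training/query divergence appear together; the paper derives the precise relationship for its transformer-based setting.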

Business Value

Provides a deeper, quantitative understanding of when in-context learning works, enabling more effective prompt engineering (for example, judging how much context a mismatched task needs) and better-targeted fine-tuning, which improves performance and reliability on specific tasks.