Abstract
Current training data attribution (TDA) methods treat the influence one
sample has on another as static, but neural networks learn in distinct stages
that exhibit changing patterns of influence. In this work, we introduce a
framework for stagewise data attribution grounded in singular learning theory.
We predict that influence can change non-monotonically, including sign flips
and sharp peaks at developmental transitions. We first validate these
predictions analytically and empirically in a toy model, showing that dynamic
shifts in influence directly map to the model's progressive learning of a
semantic hierarchy. Finally, we demonstrate these phenomena at scale in
language models, where token-level influence changes align with known
developmental stages.
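The paper's estimator is grounded in singular learning theory, which is not reproduced here. As a loose illustration of the stagewise idea only, the sketch below tracks a TracIn-style gradient-dot-product influence score between one training example and one test example across saved checkpoints; non-monotonic trajectories, sign flips, and sharp peaks across checkpoints are the kind of behavior the abstract describes. Names such as `checkpoints`, `x_train`, and `x_test` are hypothetical placeholders, not the authors' API.

```python
# Hypothetical sketch: how a training example's influence on a test example
# can be tracked across training checkpoints. This is a TracIn-style
# gradient dot product, NOT the paper's SLT-based estimator.
import torch

def per_example_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def stagewise_influence(model, checkpoints, loss_fn,
                        x_train, y_train, x_test, y_test):
    """One influence scalar per checkpoint; assumes each checkpoint was
    saved with torch.save(model.state_dict(), path). Sign flips or peaks
    in the returned trajectory indicate stagewise changes in influence."""
    trajectory = []
    for path in checkpoints:
        model.load_state_dict(torch.load(path))
        g_train = per_example_grad(model, loss_fn, x_train, y_train)
        g_test = per_example_grad(model, loss_fn, x_test, y_test)
        trajectory.append(torch.dot(g_train, g_test).item())
    return trajectory
```

Plotting the returned trajectory against checkpoint index would give a per-pair influence curve over training, the checkpoint-level analogue of the token-level curves the paper reports for language models.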
Authors (4)
Jin Hwa Lee
Matthew Smith
Maxwell Adam
Jesse Hoogland
Submitted
October 14, 2025
Key Contributions
This paper introduces a framework for stagewise data attribution, grounded in singular learning theory, to address the static nature of current methods. It predicts and validates that influence can change non-monotonically across distinct learning stages, aligning with the model's progressive learning of a semantic hierarchy. The phenomena are demonstrated at scale in language models, showing token-level influence changes that correspond to known developmental stages.
Business Value
Improved interpretability of large models can lead to better debugging, bias detection, and more trustworthy AI systems, which is crucial for high-stakes applications.