Abstract: Large Language Models (LLMs) excel at capturing latent semantics and
contextual relationships across diverse modalities. However, in modeling user
behavior from sequential interaction data, performance often suffers when such
semantic context is limited or absent. We introduce LaMAR, an LLM-driven
semantic enrichment framework that augments such sequences automatically.
LaMAR leverages LLMs in a few-shot setting to generate auxiliary contextual
signals by inferring latent semantic aspects of a user's intent and item
relationships from existing metadata. These generated signals, such as inferred
usage scenarios, item intents, or thematic summaries, augment the original
sequences with greater contextual depth. We demonstrate the utility of this
generated resource by integrating it into benchmark sequential modeling tasks,
where it consistently improves performance. Further analysis shows that
LLM-generated signals exhibit high semantic novelty and diversity, enhancing
the representational capacity of the downstream models. This work represents a
new data-centric paradigm where LLMs serve as intelligent context generators,
contributing a new method for the semi-automatic creation of training data and
language resources.
Authors (4)
Mahsa Valizadeh
Xiangjue Dong
Rui Tuo
James Caverlee
Submitted
October 20, 2025
Key Contributions
Introduces LaMAR, a framework that uses Large Language Models (LLMs) in a few-shot setting to semantically enrich sequential user interaction data. By inferring latent user intent and item relationships, LaMAR generates auxiliary contextual signals that augment original sequences, leading to consistent performance improvements in benchmark sequential recommendation tasks.
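The enrichment loop described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the few-shot examples, function names (`build_prompt`, `enrich_sequence`), and the stub LLM are all assumptions introduced here to show the general pattern of prompting an LLM to infer a latent intent signal and attaching it to the raw interaction sequence.

```python
# Hypothetical sketch of LaMAR-style enrichment (not the paper's code):
# build a few-shot prompt from item metadata, ask an LLM for an auxiliary
# contextual signal (an inferred usage intent), and append it to the sequence.

# Illustrative few-shot examples (invented for this sketch).
FEW_SHOT_EXAMPLES = [
    ("hiking boots, tent, water filter",
     "User is preparing for a multi-day backpacking trip."),
    ("baby monitor, crib, diaper bag",
     "User is setting up a nursery for a newborn."),
]

def build_prompt(item_titles):
    """Assemble a few-shot prompt asking for the latent intent behind a sequence."""
    lines = ["Infer the user's likely intent from the item sequence."]
    for items, intent in FEW_SHOT_EXAMPLES:
        lines.append(f"Items: {items}\nIntent: {intent}")
    lines.append(f"Items: {', '.join(item_titles)}\nIntent:")
    return "\n\n".join(lines)

def enrich_sequence(item_titles, llm):
    """Augment the raw item sequence with an LLM-generated contextual signal."""
    signal = llm(build_prompt(item_titles)).strip()
    return {"items": list(item_titles), "inferred_intent": signal}

# Stub LLM for illustration; a real pipeline would call an actual model API.
def stub_llm(prompt):
    return "User is assembling a home espresso setup."

enriched = enrich_sequence(["espresso machine", "burr grinder"], stub_llm)
print(enriched["inferred_intent"])
```

The enriched record (original items plus the generated signal) would then be fed to the downstream sequential model in place of the bare item sequence.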
Business Value
Enhances user experience and engagement in recommendation platforms by providing more relevant and personalized suggestions, leading to increased sales and customer satisfaction.