📄 Abstract
In-context learning (ICL) enables large language models (LLMs) to perform new
tasks using only a few demonstrations. However, in Named Entity Recognition
(NER), existing ICL methods typically rely on task-agnostic semantic similarity
for demonstration retrieval, which often yields less relevant examples and
leads to inferior results. We introduce DEER, a training-free ICL approach that
enables LLMs to make more informed entity predictions through the use of
label-grounded statistics. DEER leverages token-level statistics from training
labels to identify tokens most informative for entity recognition, enabling
entity-focused demonstrations. It further uses these statistics to detect and
refine error-prone tokens through a targeted reflection step. Evaluated on five
NER datasets across four LLMs, DEER consistently outperforms existing ICL
methods and achieves performance comparable to supervised fine-tuning. Further
analyses demonstrate that DEER improves example retrieval, remains effective on
both seen and unseen entities, and exhibits strong robustness in low-resource
settings.
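To make the core idea concrete, below is a minimal sketch of what label-grounded token statistics could look like for NER demonstration retrieval and for flagging error-prone tokens. Everything here is an illustrative assumption, not DEER's actual formulation: the entity-rate statistic, the overlap-based retrieval score, the ambiguity thresholds, and all function names are invented for this example, since the abstract does not specify the exact statistics.

```python
from collections import Counter

def token_entity_stats(train_data):
    """For each token, estimate how often it occurs inside an entity span.

    train_data: list of (tokens, bio_tags) pairs, e.g.
        (["Barack", "Obama", "visited", "Paris"],
         ["B-PER", "I-PER", "O", "B-LOC"])
    Returns {token: fraction of occurrences inside an entity}.
    This is one plausible "label-grounded" statistic, not the paper's.
    """
    total, in_entity = Counter(), Counter()
    for tokens, tags in train_data:
        for tok, tag in zip(tokens, tags):
            total[tok] += 1
            if tag != "O":  # any B-*/I-* tag counts as inside an entity
                in_entity[tok] += 1
    return {tok: in_entity[tok] / total[tok] for tok in total}

def retrieval_score(query_tokens, candidate_tokens, stats):
    """Score a candidate demonstration by its overlap with the query on
    entity-informative tokens, rather than generic semantic similarity."""
    shared = set(query_tokens) & set(candidate_tokens)
    return sum(stats.get(tok, 0.0) for tok in shared)

def error_prone_tokens(stats, low=0.2, high=0.8):
    """Flag ambiguous tokens (neither clearly entity nor clearly non-entity)
    as candidates for a second-pass reflection prompt. Thresholds are
    arbitrary placeholders."""
    return {tok for tok, p in stats.items() if low < p < high}

# Usage: rank training examples as demonstrations for a query sentence.
train = [
    (["Barack", "Obama", "visited", "Paris"],
     ["B-PER", "I-PER", "O", "B-LOC"]),
    (["Apple", "shares", "rose"],
     ["B-ORG", "O", "O"]),
]
stats = token_entity_stats(train)
query = ["Obama", "spoke", "in", "Berlin"]
demos = sorted(train, key=lambda ex: retrieval_score(query, ex[0], stats),
               reverse=True)
```

The design point this sketch illustrates is the one the abstract makes: scoring demonstrations by tokens that the training labels mark as entity-informative, instead of by task-agnostic embedding similarity, and reusing the same statistics to decide which tokens deserve a targeted reflection step.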
Authors (4)
Fan Bai
Hamid Hassanzadeh
Ardavan Saeedi
Mark Dredze
Key Contributions
DEER introduces a training-free in-context learning approach for NER that leverages label-grounded, token-level statistics to improve both demonstration retrieval and entity prediction. By replacing task-agnostic similarity-based retrieval with entity-focused example selection, it retrieves more relevant demonstrations and achieves performance comparable to supervised fine-tuning.
Business Value
Improves the efficiency and accuracy of information extraction from text, enabling better data analysis and knowledge management in various industries.