Abstract
User queries in information retrieval are often ambiguous, making it
challenging for systems to identify a user's target from a single query. While
recent dialogue-based interactive retrieval systems can clarify user intent,
they are inefficient as they often lack an explicit strategy to ask the most
informative questions. To address this limitation, we propose SherlockLLM, a
dialogue-driven retrieval framework that learns an optimal questioning strategy
via Reinforcement Learning (RL) and avoids the need for large-scale annotated
dialogue data. In our framework, an agent is trained to generate a sequence of
binary questions to efficiently narrow down the search space. To validate our
approach, we introduce a benchmark with both structured and unstructured tasks.
Experimental results show that SherlockLLM is a robust and efficient solution.
On the structured tasks, its performance matches strong baselines and
approaches the theoretical optimum defined by binary search. On the challenging
unstructured task, our agent significantly outperforms these baselines,
showcasing its ability to learn a highly effective information-seeking dialogue
policy.
Authors (3)
Dong Yun
Marco Schouten
Dim Papadopoulos
Submitted
October 21, 2025
Key Contributions
SherlockLLM introduces a novel dialogue-driven retrieval framework that learns an optimal questioning strategy using Reinforcement Learning, eliminating the need for large-scale annotated dialogue data. This approach enables more efficient clarification of user intent by generating informative binary questions to narrow down the search space.
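As a rough illustration of the binary-questioning idea (not the authors' learned RL policy, whose details are in the paper), the sketch below greedily halves a toy candidate set with yes/no questions. The `oracle` callback and item names are hypothetical stand-ins for the user; the point is only that an ideal questioner needs at most ceil(log2 N) turns, the binary-search optimum the abstract refers to.

```python
# Illustrative sketch only: a greedy halving policy over a toy candidate set.
# SherlockLLM learns its questioning strategy with RL; this shows just the
# information-theoretic baseline (binary search) it is compared against.
import math

def halving_dialogue(candidates, oracle):
    """Ask binary questions that split the remaining set in half each turn.

    `oracle(subset)` is a hypothetical stand-in for the user: it returns True
    if the user's target item lies inside `subset`.
    """
    remaining = list(candidates)
    turns = 0
    while len(remaining) > 1:
        half = remaining[: len(remaining) // 2]   # question: "is it one of these?"
        turns += 1
        if oracle(set(half)):
            remaining = half
        else:
            remaining = remaining[len(half):]
    return remaining[0], turns

if __name__ == "__main__":
    items = [f"item_{i}" for i in range(37)]
    target = "item_23"
    found, turns = halving_dialogue(items, lambda subset: target in subset)
    # An ideal policy resolves the target in at most ceil(log2 N) questions.
    print(found, turns, math.ceil(math.log2(len(items))))
```

Under this framing, the learned agent's question count on the structured tasks can be read directly against the ceil(log2 N) lower bound.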
Business Value
Improves user experience in search and support systems by quickly and accurately understanding user needs, leading to faster access to information and higher customer satisfaction.