📄 Abstract
Retrieval-Augmented Generation (RAG) mitigates key limitations of Large Language Models (LLMs), such as factual errors, outdated knowledge, and hallucinations, by dynamically retrieving external information. Recent work
extends this paradigm through agentic RAG systems, where LLMs act as agents to
iteratively plan, retrieve, and reason over complex queries. However, these
systems still struggle with challenging multi-hop questions, and their
intermediate reasoning capabilities remain underexplored. To address this, we
propose RAGCap-Bench, a capability-oriented benchmark for fine-grained
evaluation of intermediate tasks in agentic RAG workflows. We analyze outputs
from state-of-the-art systems to identify common tasks and the core
capabilities required for their execution, then construct a taxonomy of typical
LLM errors to design targeted evaluation questions. Experiments show that
"slow-thinking" models with stronger RAGCap performance achieve better
end-to-end results, underscoring the benchmark's validity and the importance of
enhancing these intermediate capabilities.
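As a rough illustration of the iterative plan-retrieve-reason workflow that the abstract attributes to agentic RAG systems, the Python sketch below shows a minimal agent loop. All function names, the stopping rule, and the evidence format are hypothetical placeholders for exposition, not the paper's actual system or the RAGCap-Bench evaluation code.

```python
# Minimal sketch of an iterative agentic RAG loop (plan -> retrieve -> reason).
# Every component here is a toy stand-in; a real system would call an LLM for
# planning/reasoning and a retriever (dense, sparse, or web search) for evidence.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class AgentState:
    question: str
    evidence: list[str] = field(default_factory=list)
    answer: str | None = None


def plan_subquery(state: AgentState) -> str | None:
    """Decide the next sub-query, or return None once enough evidence is gathered."""
    if len(state.evidence) >= 3:  # toy stopping rule standing in for LLM planning
        return None
    return f"{state.question} (hop {len(state.evidence) + 1})"


def retrieve(subquery: str) -> str:
    """Stand-in for a retrieval call over an external corpus."""
    return f"[document relevant to: {subquery}]"


def reason(state: AgentState) -> str:
    """Stand-in for LLM synthesis over the accumulated evidence."""
    return f"Answer to '{state.question}' based on {len(state.evidence)} documents."


def agentic_rag(question: str, max_steps: int = 5) -> AgentState:
    state = AgentState(question=question)
    for _ in range(max_steps):
        subquery = plan_subquery(state)            # plan
        if subquery is None:
            break
        state.evidence.append(retrieve(subquery))  # retrieve
    state.answer = reason(state)                   # reason / synthesize
    return state


if __name__ == "__main__":
    print(agentic_rag("Who advised the author of the first RAG paper?").answer)
```

Each intermediate step in this loop (planning the next sub-query, judging retrieved evidence, synthesizing an answer) corresponds to the kind of capability that RAGCap-Bench evaluates in isolation.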
Authors (4)
Jingru Lin
Chen Zhang
Stephen Y. Liu
Haizhou Li
Submitted
October 15, 2025
Key Contributions
Introduces RAGCap-Bench, a capability-oriented benchmark for fine-grained evaluation of intermediate tasks in agentic Retrieval-Augmented Generation (RAG) systems. The benchmark analyzes common tasks, identifies the LLM capabilities they require, and constructs a taxonomy of typical errors to design targeted evaluation questions; experiments find that "slow-thinking" models with stronger RAGCap performance achieve better end-to-end results.
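To make the idea of error-taxonomy-driven evaluation items concrete, the sketch below shows one plausible way such an item could be represented and scored. The field names (`capability`, `error_type`, `options`) and the accuracy-based scoring are assumptions for illustration only, not the actual RAGCap-Bench schema or metric.

```python
# Hypothetical representation of a capability-oriented evaluation item whose
# distractors are drawn from a taxonomy of typical LLM errors. Schema and
# scoring are illustrative assumptions, not the benchmark's real format.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class RAGCapItem:
    capability: str     # e.g. "query planning", "evidence extraction"
    error_type: str     # e.g. "premature answer", "irrelevant evidence"
    question: str
    options: list[str]
    answer_index: int


item = RAGCapItem(
    capability="evidence extraction",
    error_type="irrelevant evidence",
    question="Given the retrieved passages, which one supports the claim?",
    options=["Passage 1, sentence 2", "Passage 3 (off-topic)", "None of the above"],
    answer_index=0,
)


def accuracy(items: list[RAGCapItem], predictions: list[int]) -> float:
    """Fraction of items answered correctly; per-capability breakdowns could follow."""
    correct = sum(p == it.answer_index for it, p in zip(items, predictions))
    return correct / len(items)


print(accuracy([item], [0]))  # 1.0
```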
Business Value
Enables developers to build more reliable and accurate RAG systems by identifying and addressing specific capability gaps, leading to better information retrieval and knowledge synthesis applications.