📄 Abstract
Large language models (LLMs) have revolutionized natural language processing
(NLP), particularly through Retrieval-Augmented Generation (RAG), which
enhances LLM capabilities by integrating external knowledge. However,
traditional RAG systems face critical limitations, including disrupted
contextual integrity due to text chunking, and over-reliance on semantic
similarity for retrieval. To address these issues, we propose CausalRAG, a
novel framework that incorporates causal graphs into the retrieval process. By
constructing and tracing causal relationships, CausalRAG preserves contextual
continuity and improves retrieval precision, leading to more accurate and
interpretable responses. We evaluate CausalRAG against regular RAG and
graph-based RAG approaches, demonstrating its superiority across several
metrics. Our findings suggest that grounding retrieval in causal reasoning
provides a promising approach to knowledge-intensive tasks.
Authors (5)
Nengbo Wang
Xiaotian Han
Jagdip Singh
Jing Ma
Vipin Chaudhary
Key Contributions
Proposes CausalRAG, a novel framework that integrates causal graphs into the RAG process to preserve contextual continuity and improve retrieval precision. By grounding retrieval in causal relationships, it addresses the limitations of text chunking and of over-reliance on semantic similarity, leading to more accurate and interpretable LLM outputs.
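The core idea of tracing causal relationships at retrieval time can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: it assumes cause-to-effect edges have already been extracted from the corpus, and shows how a query matched to one concept can pull in causally connected context rather than only semantically similar text.

```python
from collections import deque

# Hypothetical toy causal graph (cause -> effects), assumed to have been
# extracted from a document corpus in a prior indexing step.
causal_edges = {
    "drought": ["crop failure"],
    "crop failure": ["food shortage"],
    "food shortage": ["price increase"],
    "heat wave": ["drought"],
}

def causal_trace(graph, seeds, max_hops=2):
    """Collect concepts reachable from the seed concepts within max_hops,
    following causal (cause -> effect) edges via breadth-first search."""
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding beyond the hop budget
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, depth + 1))
    return visited

# A query matched to "drought" retrieves causally downstream context
# ("crop failure", "food shortage"), not just lexically similar passages.
context = causal_trace(causal_edges, {"drought"})
```

In a full system, the returned concepts would then be mapped back to their source passages before generation; the hop budget bounds how far the causal chain is followed.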
Business Value
Improves the reliability and trustworthiness of LLM-generated information by ensuring factual accuracy and providing traceable reasoning, valuable for decision support and knowledge management.