
CausalRAG: Integrating Causal Graphs into Retrieval-Augmented Generation

Abstract

Large language models (LLMs) have revolutionized natural language processing (NLP), particularly through Retrieval-Augmented Generation (RAG), which enhances LLM capabilities by integrating external knowledge. However, traditional RAG systems face critical limitations, including disrupted contextual integrity due to text chunking, and over-reliance on semantic similarity for retrieval. To address these issues, we propose CausalRAG, a novel framework that incorporates causal graphs into the retrieval process. By constructing and tracing causal relationships, CausalRAG preserves contextual continuity and improves retrieval precision, leading to more accurate and interpretable responses. We evaluate CausalRAG against regular RAG and graph-based RAG approaches, demonstrating its superiority across several metrics. Our findings suggest that grounding retrieval in causal reasoning provides a promising approach to knowledge-intensive tasks.
Authors (5)
Nengbo Wang
Xiaotian Han
Jagdip Singh
Jing Ma
Vipin Chaudhary
Submitted
March 25, 2025
arXiv Category
cs.CL

Key Contributions

Proposes CausalRAG, a novel framework that integrates causal graphs into the RAG process to preserve contextual continuity and improve retrieval precision. By grounding retrieval in causal relationships, it addresses limitations of text chunking and semantic similarity reliance, leading to more accurate and interpretable LLM outputs.
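To make the retrieval idea concrete, here is a minimal sketch of causal-graph-guided retrieval: semantically matched passages are used as seeds, then causally linked passages are traced through the graph so related context stays together instead of being split by chunking. This is an illustration under assumed names (`causal_edges`, `semantic_seeds`, `retrieve`) and a toy keyword-overlap similarity function, not the paper's actual implementation.

```python
# Sketch of causal-graph-guided retrieval (illustrative, not CausalRAG's API).
from collections import deque

# Toy causal graph: each key is a cause, mapping to its direct effects.
# In practice, nodes would be passages and edges extracted causal relations.
causal_edges = {
    "rising co2 levels": ["global temperature increase"],
    "global temperature increase": ["glacier melt", "sea level rise"],
    "glacier melt": ["sea level rise"],
}

def semantic_seeds(query, nodes):
    """Placeholder for embedding similarity: plain keyword overlap here."""
    q = set(query.lower().split())
    return [n for n in nodes if q & set(n.lower().split())]

def retrieve(query, max_hops=2):
    """Seed with semantically similar nodes, then trace causal descendants
    up to max_hops so causally connected context is retrieved together."""
    nodes = set(causal_edges) | {e for v in causal_edges.values() for e in v}
    seen = set(semantic_seeds(query, nodes))
    frontier = deque((s, 0) for s in seen)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for effect in causal_edges.get(node, []):
            if effect not in seen:
                seen.add(effect)
                frontier.append((effect, depth + 1))
    return sorted(seen)
```

For example, `retrieve("effects of rising co2")` would return not only the matched passage but also its causal descendants, giving the generator a causally coherent context window.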

Business Value

Improves the reliability and trustworthiness of LLM-generated information by ensuring factual accuracy and providing traceable reasoning, valuable for decision support and knowledge management.