Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs

📄 Abstract

Knowledge graph-based retrieval-augmented generation seeks to mitigate hallucinations in Large Language Models (LLMs) caused by insufficient or outdated knowledge. However, existing methods often fail to fully exploit the prior knowledge embedded in knowledge graphs (KGs), particularly their structural information and explicit or implicit constraints. The former can enhance the faithfulness of LLMs' reasoning, while the latter can improve the reliability of response generation. Motivated by these observations, we propose a trustworthy reasoning framework, termed Deliberation over Priors (DP), which sufficiently utilizes the priors contained in KGs. Specifically, DP adopts a progressive knowledge distillation strategy that integrates structural priors into LLMs through a combination of supervised fine-tuning and Kahneman-Tversky optimization, thereby improving the faithfulness of relation path generation. Furthermore, our framework employs a reasoning-introspection strategy, which guides LLMs to perform refined reasoning verification based on extracted constraint priors, ensuring the reliability of response generation. Extensive experiments on three benchmark datasets demonstrate that DP achieves new state-of-the-art performance, including a Hit@1 improvement of 13% on the ComplexWebQuestions dataset, and generates highly trustworthy responses. We also conduct various analyses to verify its flexibility and practicality. The code is available at https://github.com/reml-group/Deliberation-on-Priors.
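
At inference time, the framework described above can be read as a propose-ground-introspect loop: the distilled LLM proposes relation paths, the paths are grounded in the KG to collect evidence, and a draft answer is accepted only if it satisfies the constraints extracted from the question. The sketch below is an illustrative reading of that loop, not the authors' implementation; every helper callable (`generate_paths`, `retrieve`, `extract_constraints`, `answer`) is a hypothetical stand-in for a component named in the abstract.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Triple:
    """A single KG fact used as retrieved evidence."""
    head: str
    relation: str
    tail: str

def deliberate(
    question: str,
    generate_paths: Callable[[str], list[list[str]]],   # distilled LLM: question -> candidate relation paths
    retrieve: Callable[[list[str]], list[Triple]],       # KG walker: relation path -> supporting triples
    extract_constraints: Callable[[str], list[Callable[[str, list[Triple]], bool]]],  # question -> checks
    answer: Callable[[str, list[Triple]], str],          # LLM: (question, evidence) -> draft answer
) -> str:
    """Propose relation paths, ground them in the KG, draft an answer,
    and accept it only if it passes every extracted constraint check."""
    constraints = extract_constraints(question)
    for path in generate_paths(question):
        evidence = retrieve(path)
        if not evidence:      # path cannot be instantiated in the KG: discard as unfaithful
            continue
        draft = answer(question, evidence)
        if all(check(draft, evidence) for check in constraints):
            return draft      # draft survives introspection against the constraint priors
    return "UNKNOWN"          # abstain rather than return an unverified answer
```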
Authors (11)
Jie Ma
Ning Qu
Zhitao Gao
Rui Xing
Jun Liu
Hongbin Pei
+5 more
Submitted
May 21, 2025
arXiv Category
cs.CL

Key Contributions

Proposes Deliberation over Priors (DP), a trustworthy reasoning framework for LLMs on knowledge graphs that exploits both the structural information and the explicit or implicit constraints encoded in KGs. DP combines a progressive knowledge distillation stage (supervised fine-tuning plus Kahneman-Tversky optimization) that makes relation-path generation more faithful with a reasoning-introspection stage that verifies responses against extracted constraint priors, mitigating hallucinations by grounding generation in KG priors.
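
To make the distillation signal concrete, the sketch below shows one plausible way to label sampled relation paths for the KTO stage: a path is marked desirable if walking it from the topic entity in the KG reaches a gold answer, and the labeled paths are packed into binary prompt/completion/label records of the kind KTO-style preference optimization typically consumes. The KG encoding, the labeling rule, and the record format are illustrative assumptions, not the paper's documented data pipeline.

```python
import json

def reaches_answer(kg: dict[str, dict[str, set[str]]], start: str,
                   path: list[str], answers: set[str]) -> bool:
    """Walk a relation path from `start` in a KG encoded as {entity: {relation: {tails}}}
    and report whether any reachable entity is a gold answer."""
    frontier = {start}
    for relation in path:
        frontier = {t for e in frontier for t in kg.get(e, {}).get(relation, set())}
        if not frontier:
            return False
    return bool(frontier & answers)

def build_kto_records(kg: dict[str, dict[str, set[str]]], question: str, topic_entity: str,
                      sampled_paths: list[list[str]], gold_answers: list[str]) -> list[dict]:
    """Turn sampled relation paths into binary preference records:
    desirable (True) if the path reaches a gold answer, undesirable (False) otherwise."""
    return [
        {
            "prompt": question,
            "completion": " -> ".join(path),
            "label": reaches_answer(kg, topic_entity, path, set(gold_answers)),
        }
        for path in sampled_paths
    ]

if __name__ == "__main__":
    kg = {"Canberra": {"capital_of": {"Australia"}},
          "Australia": {"currency": {"Australian dollar"}}}
    records = build_kto_records(
        kg,
        "What currency is used in the country whose capital is Canberra?",
        "Canberra",
        [["capital_of", "currency"], ["capital_of", "population"]],
        ["Australian dollar"],
    )
    print(json.dumps(records, indent=2))   # first path is desirable, second is not
```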

Business Value

Enhances the trustworthiness and accuracy of LLM-generated content and insights derived from knowledge graphs, making them more reliable for critical applications like financial analysis, legal research, and medical information systems.