
From Retrieval to Generation: Unifying External and Parametric Knowledge for Medical Question Answering

Abstract

Medical question answering (QA) requires extensive access to domain-specific knowledge. A promising direction is to enhance large language models (LLMs) with external knowledge retrieved from medical corpora or parametric knowledge stored in model parameters. Existing approaches typically fall into two categories: Retrieval-Augmented Generation (RAG), which grounds model reasoning on externally retrieved evidence, and Generation-Augmented Generation (GAG), which depends solely on the model's internal knowledge to generate contextual documents. However, RAG often suffers from noisy or incomplete retrieval, while GAG is vulnerable to hallucinated or inaccurate information due to unconstrained generation. Both issues can mislead reasoning and undermine answer reliability. To address these challenges, we propose MedRGAG, a unified retrieval-generation augmented framework that seamlessly integrates external and parametric knowledge for medical QA. MedRGAG comprises two key modules: Knowledge-Guided Context Completion (KGCC), which directs the generator to produce background documents that complement the missing knowledge revealed by retrieval; and Knowledge-Aware Document Selection (KADS), which adaptively selects an optimal combination of retrieved and generated documents to form concise yet comprehensive evidence for answer generation. Extensive experiments on five medical QA benchmarks demonstrate that MedRGAG achieves a 12.5% improvement over MedRAG and a 4.5% gain over MedGENIE, highlighting the effectiveness of unifying retrieval and generation for knowledge-intensive reasoning. Our code and data are publicly available at https://anonymous.4open.science/r/MedRGAG
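
The abstract outlines a two-stage pipeline: retrieve external evidence, generate complementary background documents (KGCC), select a mixed set of retrieved and generated documents (KADS), and answer on that evidence. The snippet below is a minimal conceptual sketch of that flow under assumed interfaces; `retriever.search`, `llm.generate`, and `llm.select` are hypothetical placeholders, not the authors' released API or prompts.

```python
# Conceptual sketch of the MedRGAG flow described in the abstract.
# All object interfaces here are assumptions, not the paper's implementation.
from typing import List


def medrgag_answer(question: str, llm, retriever, k: int = 5) -> str:
    # Step 1: retrieve external evidence from a medical corpus (RAG side).
    retrieved: List[str] = retriever.search(question, top_k=k)

    # Step 2 (KGCC): ask the generator for background documents that cover
    # knowledge gaps revealed by the retrieved evidence (GAG side).
    gap_prompt = (
        f"Question: {question}\n"
        f"Retrieved evidence:\n" + "\n".join(retrieved) + "\n"
        "Write a short background document with medical knowledge needed to "
        "answer the question that is missing from the evidence above."
    )
    generated: List[str] = [llm.generate(gap_prompt) for _ in range(k)]

    # Step 3 (KADS): adaptively pick a concise yet comprehensive combination
    # of retrieved and generated documents to serve as the final context.
    candidates = retrieved + generated
    selected: List[str] = llm.select(question=question, candidates=candidates)

    # Step 4: answer generation grounded on the selected evidence.
    answer_prompt = (
        "Context:\n" + "\n".join(selected) + f"\n\nQuestion: {question}\nAnswer:"
    )
    return llm.generate(answer_prompt)
```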
Authors (4)
Lei Li
Xiao Zhou
Yingying Zhang
Xian Wu
Submitted
October 21, 2025
arXiv Category
cs.CL

Key Contributions

Proposes MedRGAG, a unified retrieval-generation augmented framework that seamlessly integrates external (retrieved) and parametric (model-internal) knowledge for medical question answering. This approach aims to overcome the limitations of pure RAG (noisy or incomplete retrieval) and pure generation (hallucination) by leveraging the strengths of both, as sketched in the example above.

Business Value

Enhances the accuracy and trustworthiness of medical information systems, aiding clinicians, researchers, and patients in accessing reliable medical knowledge.