Abstract
We introduce MiRAGE, an evaluation framework for retrieval-augmented
generation (RAG) from multimodal sources. As audiovisual media becomes a
prevalent source of information online, it is essential for RAG systems to
integrate information from these sources into generation. However, existing
evaluations for RAG are text-centric, limiting their applicability to
multimodal, reasoning-intensive settings because they do not verify information
against sources. MiRAGE is a claim-centric approach to multimodal RAG
evaluation, consisting of InfoF1, which evaluates factuality and information
coverage, and CiteF1, which measures citation support and completeness. We show that
MiRAGE, when applied by humans, strongly aligns with extrinsic quality
judgments. We additionally introduce automatic variants of MiRAGE and three
prominent TextRAG metrics -- ALCE, ARGUE, and RAGAS -- demonstrating the
limitations of text-centric work and laying the groundwork for automatic
evaluation. We release open-source implementations and outline how to assess
multimodal RAG.
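The abstract does not spell out the metric definitions, so the sketch below is an illustrative reading rather than the paper's implementation. It assumes a natural claim-centric decomposition: for InfoF1, precision corresponds to factuality (response claims supported by the sources) and recall to information coverage (source claims recovered by the response); for CiteF1, precision corresponds to citation support and recall to citation completeness. The predicates `entails`, `citations_of`, and `supports` are hypothetical placeholders for whatever claim-verification component a concrete system would use.

```python
from typing import Callable, Sequence

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def info_f1(
    response_claims: Sequence[str],
    source_claims: Sequence[str],
    entails: Callable[[str, Sequence[str]], bool],
) -> float:
    """InfoF1 sketch: precision = factuality, recall = information coverage."""
    if not response_claims or not source_claims:
        return 0.0
    # Factuality: share of response claims entailed by the multimodal sources.
    p = sum(entails(c, source_claims) for c in response_claims) / len(response_claims)
    # Coverage: share of source claims recovered by the response.
    r = sum(entails(c, response_claims) for c in source_claims) / len(source_claims)
    return f1(p, r)

def cite_f1(
    claims: Sequence[str],
    citations_of: Callable[[str], Sequence[str]],
    supports: Callable[[str, str], bool],
) -> float:
    """CiteF1 sketch: precision = citation support, recall = completeness."""
    if not claims:
        return 0.0
    cited = [c for c in claims if citations_of(c)]
    supported = [c for c in cited if any(supports(s, c) for s in citations_of(c))]
    # Support: among cited claims, the share whose citations actually back them.
    p = len(supported) / len(cited) if cited else 0.0
    # Completeness: among all claims, the share backed by a supporting citation.
    r = len(supported) / len(claims)
    return f1(p, r)
```

Under this reading, a response is penalized both for unsupported claims (low precision) and for omitting information present in the sources or leaving claims uncited (low recall), which is what distinguishes a claim-centric F1 from surface-level text overlap.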
Authors (8)
Alexander Martin
William Walden
Reno Kriz
Dengjia Zhang
Kate Sanders
Eugene Yang
+2 more
Submitted
October 28, 2025
Key Contributions
MiRAGE is a claim-centric evaluation framework for retrieval-augmented generation (RAG) from multimodal sources, addressing the limitations of text-centric evaluations. It introduces two metrics: InfoF1, which evaluates factuality and information coverage, and CiteF1, which measures citation support and completeness. When applied by humans, MiRAGE aligns strongly with extrinsic quality judgments, and its automatic variants expose the shortcomings of existing text-centric RAG metrics.
Business Value
Enables more robust and reliable multimodal AI systems, which is crucial for applications that draw on diverse online content. This leads to better information synthesis, more trustworthy AI-generated content, and greater user confidence.