
CRAG-MM: Multi-modal Multi-turn Comprehensive RAG Benchmark

Abstract

Wearable devices such as smart glasses are transforming the way people interact with their surroundings, enabling users to seek information about entities in their view. Multi-Modal Retrieval-Augmented Generation (MM-RAG) plays a key role in supporting such questions, yet there is still no comprehensive benchmark for this task, especially for wearable scenarios. To fill this gap, we present CRAG-MM -- a Comprehensive RAG benchmark for Multi-modal Multi-turn conversations. CRAG-MM contains a diverse set of 6.5K (image, question, answer) triplets and 2K visually grounded multi-turn conversations across 13 domains, including 6.2K egocentric images designed to mimic captures from wearable devices. We carefully constructed the questions to reflect real-world scenarios and challenges, including five types of image-quality issues, six question types, varying entity popularity, differing information dynamism, and different conversation turns. We design three tasks: single-source augmentation, multi-source augmentation, and multi-turn conversations -- each paired with an associated retrieval corpus and APIs for both image-KG retrieval and webpage retrieval. Our evaluation shows that straightforward RAG approaches achieve only 32% and 43% truthfulness on CRAG-MM single- and multi-turn QA, respectively, whereas state-of-the-art industry solutions achieve similar quality (32%/45%), underscoring ample room for improvement. The benchmark hosted the KDD Cup 2025 challenge, attracting about 1K participants and 5K submissions, with winning solutions improving over the baseline by 28%, highlighting its early impact on advancing the field.
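To make the "straightforward RAG" baseline concrete, below is a minimal sketch of a multi-source MM-RAG pipeline in the shape the abstract describes: retrieve candidate passages for the query image and question from both an image-KG store and a web index, then condition an answer generator on the top-ranked context. The function names (`image_kg_search`, `web_search`, `generate_answer`) are placeholders standing in for the benchmark's retrieval APIs and a multimodal LLM; they are assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    source: str   # "image_kg" or "web"
    text: str
    score: float  # retrieval score; higher is better


def image_kg_search(image_path: str, k: int) -> list[Passage]:
    """Placeholder for the benchmark's image-KG retrieval API (assumed name)."""
    raise NotImplementedError


def web_search(query: str, k: int) -> list[Passage]:
    """Placeholder for the benchmark's webpage retrieval API (assumed name)."""
    raise NotImplementedError


def generate_answer(image_path: str, question: str, context: str) -> str:
    """Placeholder for a call to any multimodal LLM (assumed name)."""
    raise NotImplementedError


def straightforward_mm_rag(image_path: str, question: str, k: int = 5) -> str:
    """Multi-source augmentation baseline: pool candidates from both
    sources, keep the top-k by retrieval score, and prompt the generator
    with the concatenated context."""
    candidates = image_kg_search(image_path, k) + web_search(question, k)
    top = sorted(candidates, key=lambda p: p.score, reverse=True)[:k]
    context = "\n\n".join(p.text for p in top)
    return generate_answer(image_path, question, context)
```

A single-source variant would query only one of the two retrievers, while the multi-turn task would additionally thread conversation history into the generator prompt.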
Authors (41)
Jiaqi Wang
Xiao Yang
Kai Sun
Parth Suresh
Sanat Sharma
Adam Czyzewski
…and 35 more
Submitted: October 30, 2025
arXiv Category: cs.CV

Key Contributions

CRAG-MM introduces the first comprehensive benchmark for Multi-Modal Retrieval-Augmented Generation (MM-RAG) designed specifically for wearable-device scenarios. It pairs 6.5K single-turn QA triplets and 2K multi-turn conversations, largely over egocentric images, with associated retrieval corpora and APIs, and stresses realistic challenges such as image-quality issues, varying entity popularity, and information dynamism.
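As a rough mental model of what a single benchmark record carries, the sketch below maps the challenge dimensions listed above onto fields of one example. The field names are illustrative assumptions; the released dataset defines the actual schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CragMMExample:
    """One (image, question, answer) triplet annotated with the challenge
    dimensions CRAG-MM controls for. Field names are assumptions."""
    image_path: str                      # egocentric or standard capture
    question: str
    answer: str                          # ground-truth reference answer
    domain: str                          # one of the 13 domains
    question_type: str                   # one of the six question types
    image_quality_issue: Optional[str]   # one of five issue types, or None
    dynamism: str                        # how quickly the answer changes
    conversation_id: Optional[str]       # set for multi-turn examples
    turn_id: int                         # position within the conversation
```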

Business Value

Facilitates the development and evaluation of more capable AI assistants for wearable devices, improving users' real-time, context-aware access to information about their surroundings.