
Doc-Researcher: A Unified System for Multimodal Document Parsing and Deep Research

Abstract

Deep Research systems have revolutionized how LLMs solve complex questions through iterative reasoning and evidence gathering. However, current systems remain fundamentally constrained to textual web data, overlooking the vast knowledge embedded in multimodal documents. Processing such documents demands sophisticated parsing to preserve visual semantics (figures, tables, charts, and equations), intelligent chunking to maintain structural coherence, and adaptive retrieval across modalities, capabilities absent from existing systems. In response, we present Doc-Researcher, a unified system that bridges this gap through three integrated components: (i) deep multimodal parsing that preserves layout structure and visual semantics while creating multi-granular representations from chunk to document level, (ii) a systematic retrieval architecture supporting text-only, vision-only, and hybrid paradigms with dynamic granularity selection, and (iii) iterative multi-agent workflows that decompose complex queries, progressively accumulate evidence, and synthesize comprehensive answers across documents and modalities. To enable rigorous evaluation, we introduce M4DocBench, the first benchmark for Multi-modal, Multi-hop, Multi-document, and Multi-turn deep research. Featuring 158 expert-annotated questions with complete evidence chains across 304 documents, M4DocBench tests capabilities that existing benchmarks cannot assess. Experiments demonstrate that Doc-Researcher achieves 50.6% accuracy, 3.4× better than state-of-the-art baselines, validating that effective document research requires not just better retrieval but fundamentally deep parsing that preserves multimodal integrity and supports iterative research. Our work establishes a new paradigm for conducting deep research on multimodal document collections.
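
The iterative workflow described in point (iii) can be pictured with a short sketch. This is purely illustrative, not the paper's implementation: every name below (Evidence, ResearchState, planner, retriever, synthesizer, and their methods) is a hypothetical stand-in for components the abstract only names.

```python
# Illustrative sketch of the iterative multi-agent research loop described
# in the abstract. All names (Evidence, ResearchState, planner, retriever,
# synthesizer) are hypothetical stand-ins, not Doc-Researcher's actual API.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    source_doc: str   # document the snippet came from
    modality: str     # "text", "table", "figure", "chart", or "equation"
    granularity: str  # "chunk", "section", or "document"
    content: str      # parsed, layout-preserving representation


@dataclass
class ResearchState:
    question: str
    open_subqueries: list[str] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)


def deep_research(question, planner, retriever, synthesizer, max_rounds=5):
    """Decompose the question, retrieve across modalities, accumulate
    evidence, and synthesize an answer once no subqueries remain."""
    state = ResearchState(question, open_subqueries=planner.decompose(question))
    for _ in range(max_rounds):
        if not state.open_subqueries:
            break
        subquery = state.open_subqueries.pop(0)
        # The planner chooses the retrieval paradigm (text-only, vision-only,
        # or hybrid) and the granularity to search at for this subquery.
        paradigm, granularity = planner.select_strategy(subquery, state)
        hits = retriever.search(subquery, paradigm=paradigm,
                                granularity=granularity)
        state.evidence.extend(hits)
        # New evidence can surface follow-up subqueries (multi-hop research).
        state.open_subqueries.extend(planner.refine(subquery, hits))
    return synthesizer.answer(state.question, state.evidence)
```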
Authors (12)
Kuicai Dong
Shurui Huang
Fangda Ye
Wei Han
Zhi Zhang
Dexun Li
+6 more
Submitted: October 24, 2025
arXiv Category: cs.IR

Key Contributions

Presents Doc-Researcher, a unified system that extends deep research beyond text-only web data to multimodal documents. It combines deep multimodal parsing that preserves visual semantics and layout, a retrieval architecture supporting text-only, vision-only, and hybrid paradigms with dynamic granularity selection, and iterative multi-agent workflows; a toy sketch of the multi-granular retrieval idea appears below.
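
One way to picture "dynamic granularity selection" is an index that stores each parsed document at several levels at once, so a router can decide per query where to search. A minimal sketch under assumed structures: the parsed_docs layout, the embed callable, and the routing cues are all invented here for illustration.

```python
# Toy sketch of multi-granular indexing: each document is stored at chunk,
# section, and document level so retrieval can pick the level per query.
# The dict layout, embed() callable, and routing cues are assumptions.
from collections import defaultdict


def build_multigranular_index(parsed_docs, embed):
    """parsed_docs: {doc_id: {"chunks": [...], "sections": [...], "full": str}}.
    Returns {granularity: [(doc_id, vector, text), ...]}."""
    index = defaultdict(list)
    for doc_id, doc in parsed_docs.items():
        for chunk in doc["chunks"]:
            index["chunk"].append((doc_id, embed(chunk), chunk))
        for section in doc["sections"]:
            index["section"].append((doc_id, embed(section), section))
        index["document"].append((doc_id, embed(doc["full"]), doc["full"]))
    return index


def route_granularity(query):
    """Toy routing policy: broad or comparative questions search whole
    documents; narrow factual ones search fine-grained chunks. The paper's
    actual selection policy is not specified here."""
    broad_cues = ("compare", "overall", "summarize", "across", "trend")
    return "document" if any(c in query.lower() for c in broad_cues) else "chunk"
```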

Business Value

Significantly enhances the ability of organizations to extract and leverage knowledge from diverse document types (reports, manuals, presentations), accelerating research and decision-making.