Abstract
Multimodal person re-identification (Re-ID) aims to match pedestrian images
across different modalities. However, most existing methods focus on limited
cross-modal settings and fail to support arbitrary query-retrieval
combinations, hindering practical deployment. We propose FlexiReID, a flexible
framework that supports seven retrieval modes across four modalities: RGB,
infrared, sketches, and text. FlexiReID introduces an adaptive
mixture-of-experts (MoE) mechanism to dynamically integrate diverse modality
features, and a cross-modal query fusion module to enhance multimodal feature
extraction. To facilitate comprehensive evaluation, we construct CIRS-PEDES, a
unified dataset extending four popular Re-ID datasets to include all four
modalities. Extensive experiments demonstrate that FlexiReID achieves
state-of-the-art performance and generalizes well to complex scenarios.
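
The abstract does not detail the architecture, but the adaptive MoE idea can be illustrated with a minimal sketch. The `AdaptiveMoE` module below, including its expert layout and softmax gating, is a hypothetical illustration of per-sample expert routing over modality features, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMoE(nn.Module):
    """Illustrative adaptive mixture-of-experts over modality features.

    Hypothetical sketch: the paper's exact expert design and routing are
    not specified here; this only shows the general gating pattern.
    """

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        # One small MLP expert per slot; the real expert count/shape is unknown.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # routing network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) feature from any input modality.
        weights = F.softmax(self.gate(x), dim=-1)                       # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        # Weighted sum lets each sample adaptively mix expert outputs.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)          # (batch, dim)
```

In this reading, the gate learns which experts handle which modality statistics, so one shared backbone output can be specialized per sample rather than per fixed modality pair.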
Authors (8)
Zhen Sun
Lei Tan
Yunhang Shen
Chengmao Cai
Xing Sun
Pingyang Dai
+2 more
Submitted
October 17, 2025
Key Contributions
FlexiReID is a flexible framework for multimodal person re-identification that supports arbitrary query-retrieval combinations across RGB, infrared, sketches, and text modalities. It utilizes an adaptive MoE mechanism and a cross-modal query fusion module, achieving state-of-the-art performance and strong generalization.
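
As a companion sketch, cross-modal query fusion of this kind is commonly realized with cross-attention from the query to the other modalities' features. The `CrossModalQueryFusion` module below, and its one-token-per-modality layout, are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class CrossModalQueryFusion(nn.Module):
    """Hypothetical cross-attention fusion of a query over multimodal features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat: torch.Tensor, modal_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (batch, 1, dim); modal_feats: (batch, M, dim),
        # assumed here to be one feature token per available modality.
        fused, _ = self.attn(query_feat, modal_feats, modal_feats)
        # Residual connection plus LayerNorm, then drop the token axis.
        return self.norm(query_feat + fused).squeeze(1)  # (batch, dim)
```

Under these assumptions, the query attends to whatever modality features are present, which is one plausible way a single module could serve all seven query-retrieval combinations.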
Business Value
Enhances security and surveillance capabilities by enabling accurate identification of individuals across different sensor types and even textual descriptions, improving operational efficiency.