Abstract: Vision foundation models like DINOv2 demonstrate remarkable potential in
medical imaging despite their origin in natural image domains. However, their
design is inherently geared toward uni-modal image analysis, which limits their
effectiveness on the multi-modal imaging tasks common in many medical fields,
such as neurology and oncology. While supervised models perform well in this
setting, they fail to leverage unlabeled datasets and struggle with missing
modalities, a frequent challenge in clinical practice. To bridge these
gaps, we introduce MM-DINOv2, a novel and efficient framework that adapts the
pre-trained vision foundation model DINOv2 for multi-modal medical imaging. Our
approach incorporates multi-modal patch embeddings, enabling vision foundation
models to effectively process multi-modal imaging data. To address missing
modalities, we employ full-modality masking, which encourages the model to
learn robust cross-modality relationships. Furthermore, we leverage
semi-supervised learning to harness large unlabeled datasets, enhancing both
the accuracy and reliability of medical predictions. Applied to glioma subtype
classification from multi-sequence brain MRI, our method achieves a Matthews
Correlation Coefficient (MCC) of 0.6 on an external test set, surpassing
state-of-the-art supervised approaches by +11.1%. Our work establishes a
scalable and robust solution for multi-modal medical imaging tasks, leveraging
powerful vision foundation models pre-trained on natural images while
addressing real-world clinical challenges such as missing data and limited
annotations.
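
The abstract names three components (multi-modal patch embeddings, full-modality masking, and semi-supervised learning) but gives no implementation details. The sketch below is a minimal, hedged illustration of how the first two could be wired up in PyTorch; every design choice in it (per-sequence convolutional patch embeddings, a learned mask token replacing all tokens of a randomly dropped sequence, the class names `MultiModalPatchEmbed` and `FullModalityMask`) is an assumption for illustration, not code from MM-DINOv2.

```python
# Minimal sketch of two ideas named in the abstract: per-modality patch
# embeddings and full-modality masking. The abstract gives no implementation
# details, so everything below is assumed for illustration only:
#   * each MRI sequence (e.g. T1, T1c, T2, FLAIR) gets its own conv patch
#     embedding and the resulting tokens are concatenated,
#   * "full-modality masking" is emulated by replacing all tokens of one
#     randomly chosen sequence with a learned mask token during training.
import torch
import torch.nn as nn


class MultiModalPatchEmbed(nn.Module):
    """One patch-embedding layer per modality; returns concatenated tokens."""

    def __init__(self, num_modalities=4, img_size=224, patch_size=14, embed_dim=384):
        super().__init__()
        self.embeds = nn.ModuleList(
            nn.Conv2d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
            for _ in range(num_modalities)
        )

    def forward(self, x):
        # x: (B, M, H, W), one channel per MRI sequence
        tokens = []
        for m, embed in enumerate(self.embeds):
            t = embed(x[:, m : m + 1])                   # (B, D, H/p, W/p)
            tokens.append(t.flatten(2).transpose(1, 2))  # (B, N, D)
        return torch.cat(tokens, dim=1)                  # (B, M*N, D)


class FullModalityMask(nn.Module):
    """Replace every token of one random modality with a learned mask token."""

    def __init__(self, embed_dim=384, mask_prob=0.5):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))
        self.mask_prob = mask_prob

    def forward(self, tokens, num_modalities):
        if not self.training:
            return tokens
        B, MN, D = tokens.shape
        n = MN // num_modalities                         # tokens per modality
        tokens = tokens.clone()
        for b in range(B):
            if torch.rand(1).item() < self.mask_prob:
                m = int(torch.randint(num_modalities, (1,)))
                tokens[b, m * n : (m + 1) * n] = self.mask_token
        return tokens


if __name__ == "__main__":
    x = torch.randn(2, 4, 224, 224)            # a batch of 4-sequence MRI slices
    embed = MultiModalPatchEmbed()
    mask = FullModalityMask()
    tokens = mask(embed(x), num_modalities=4)  # feed into a DINOv2-style ViT
    print(tokens.shape)                        # torch.Size([2, 1024, 384])
```

In this sketch the (possibly masked) token sequence would be passed to a standard DINOv2-style ViT encoder, so the network must produce useful representations even when an entire MRI sequence is hidden, mirroring the robustness to missing modalities that full-modality masking is meant to encourage.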