Abstract: Multimodal pathological images are commonly used in clinical diagnosis, but computer vision-based multimodal image-assisted diagnosis faces challenges in modality fusion, especially in the absence of expert-annotated data. To achieve modality fusion for multimodal images under label scarcity, we propose a novel "pretraining + fine-tuning" framework for multimodal semi-supervised medical image classification. Specifically, we propose a synergistic pretraining framework that combines consistency, reconstruction, and alignment learning. By treating one modality as an augmented sample of the other, we implement self-supervised pretraining that enhances the baseline model's feature representation capability. We then design a fine-tuning method for multimodal fusion: during the fine-tuning stage, separate encoders extract features from each original modality, and a multimodal fusion encoder produces the fused representation. In addition, we propose a distribution shift method for the multimodal fusion features, which alleviates the prediction uncertainty and overfitting risks caused by the lack of labeled samples. We conduct extensive experiments on the publicly available gastroscopy image datasets Kvasir and Kvasirv2. Quantitative and qualitative results demonstrate that the proposed method outperforms current state-of-the-art classification methods. The code will be released at: https://github.com/LQH89757/MICS.
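To make the core pretraining idea concrete, the sketch below illustrates the cross-modal consistency objective described in the abstract: two paired images of the same case, one per modality, are treated as augmented views of each other and pulled together in feature space with a contrastive (InfoNCE-style) loss. This is not the authors' released code; the encoder architecture, feature dimension, temperature, and loss form are illustrative assumptions, and the reconstruction and alignment terms of the full framework are omitted.

```python
# Hedged sketch of cross-modal consistency pretraining: each modality is used
# as an "augmented view" of its paired modality. All architecture and
# hyperparameter choices here are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Toy CNN encoder standing in for a per-modality backbone."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def cross_modal_consistency_loss(z_a, z_b, temperature: float = 0.1):
    """InfoNCE-style loss: for each sample, the paired feature from the other
    modality is the positive; other samples in the batch are negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrize over the two modality directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    enc_a, enc_b = ModalityEncoder(), ModalityEncoder()
    # Paired images of the same cases from two modalities (dummy tensors here).
    x_a, x_b = torch.randn(8, 3, 96, 96), torch.randn(8, 3, 96, 96)
    loss = cross_modal_consistency_loss(enc_a(x_a), enc_b(x_b))
    loss.backward()
    print(f"consistency loss: {loss.item():.4f}")
```

In the full method this self-supervised objective would be combined with reconstruction and alignment terms during pretraining, after which the encoders are fine-tuned together with a multimodal fusion encoder on the small labeled set.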