Abstract
Contrastive Language-Image Pre-training (CLIP) models have demonstrated
superior performance across various visual tasks including medical image
classification. However, fairness concerns, including demographic biases, have
received limited attention for CLIP models. This oversight leads to critical
issues, particularly those related to race and gender, resulting in disparities
in diagnostic outcomes and reduced reliability for underrepresented groups. To
address these challenges, we introduce AdFair-CLIP, a novel framework employing
adversarial feature intervention to suppress sensitive attributes, thereby
mitigating spurious correlations and improving prediction fairness. We conduct
comprehensive experiments on chest X-ray (CXR) datasets, and show that
AdFair-CLIP significantly enhances both fairness and diagnostic accuracy, while
maintaining robust generalization in zero-shot and few-shot scenarios. These
results establish new benchmarks for fairness-aware learning in CLIP-based
medical diagnostic models, particularly for CXR analysis.
Authors (8)
Chenlang Yi
Zizhan Xiong
Qi Qi
Xiyuan Wei
Girish Bathla
Ching-Long Lin
+2 more
Key Contributions
AdFair-CLIP introduces a novel framework using adversarial feature intervention to mitigate demographic biases (race, gender) in CLIP models applied to medical imaging. It significantly enhances fairness and diagnostic accuracy while maintaining robust generalization in zero-shot and few-shot scenarios, setting new benchmarks for fairness-aware learning in CLIP.
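The paper does not spell out its training procedure on this page, but adversarial feature intervention is commonly realized with a gradient-reversal adversary that tries to recover the sensitive attribute (e.g., race or gender) from the shared features while the encoder is penalized for making that possible. The sketch below illustrates that general pattern only; the encoder stand-in, head sizes, loss weighting, and hyperparameters are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of adversarial feature intervention via gradient reversal.
# NOT the AdFair-CLIP implementation: all modules and hyperparameters are assumed.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class AdversarialDebiasedClassifier(nn.Module):
    """Shared feature extractor with a diagnostic head and an adversary head
    that tries to predict the sensitive attribute from the same features."""
    def __init__(self, in_dim=1024, feat_dim=512, num_classes=14, num_sensitive=2):
        super().__init__()
        # Stand-in for a CLIP-style image encoder producing shared features (assumption).
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, num_classes)    # disease prediction
        self.adv_head = nn.Linear(feat_dim, num_sensitive)   # sensitive attribute

    def forward(self, x, lambd=1.0):
        feats = self.encoder(x)
        task_logits = self.task_head(feats)
        # Gradient reversal: the adversary learns to predict the attribute,
        # while the encoder receives the negated gradient and learns to hide it.
        adv_logits = self.adv_head(grad_reverse(feats, lambd))
        return task_logits, adv_logits

# One training step on random placeholder data.
model = AdversarialDebiasedClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024)                    # placeholder image embeddings
y_task = torch.randint(0, 14, (8,))         # placeholder disease labels
y_attr = torch.randint(0, 2, (8,))          # placeholder sensitive attribute

task_logits, adv_logits = model(x, lambd=0.5)
loss = nn.functional.cross_entropy(task_logits, y_task) \
     + nn.functional.cross_entropy(adv_logits, y_attr)
loss.backward()
opt.step()
```

In this kind of setup, the scaling factor passed to the reversal layer (here `lambd`) trades off diagnostic accuracy against attribute suppression; how AdFair-CLIP balances these objectives is described in the full paper.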
Business Value
Improves the trustworthiness and equity of AI diagnostic tools in healthcare, leading to more reliable and equitable patient care across diverse demographic groups.