📄 Abstract
Domain shift in histopathology, often caused by differences in acquisition processes or data sources, poses a major challenge to the generalization ability of deep learning models. Existing methods primarily rely on modeling statistical correlations by aligning feature distributions or introducing statistical variation, yet they often overlook causal relationships. In this work, we propose a novel causal-inference-based framework that leverages semantic features while mitigating the impact of confounders. Our method implements the front-door principle by designing transformation strategies that explicitly incorporate mediators and observed tissue slides. We validate our method on the CAMELYON17 dataset and a private histopathology dataset, demonstrating consistent performance gains across unseen domains. As a result, our approach achieves improvements of up to 7% on both the CAMELYON17 dataset and the private histopathology dataset, outperforming existing baselines. These results highlight the potential of causal inference as a powerful tool for addressing domain shift in histopathology image analysis.
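For context, the front-door principle referenced in the abstract is Pearl's front-door adjustment, which identifies the causal effect of an input on a label through a mediator even when an unobserved confounder (here, plausibly the acquisition domain) influences both. The sketch below is the textbook identity only; mapping X to the observed slide, M to the mediator (semantic features), and Y to the tumor label is an assumption about how the paper instantiates it, not the authors' exact estimator.

```latex
% Standard front-door adjustment (Pearl). Assumed mapping (not from the paper's
% text): X = observed tissue slide, M = mediator / semantic features,
% Y = tumor label; the acquisition domain acts as an unobserved confounder of X and Y.
\[
P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  = \sum_{m} P(M = m \mid X = x)
    \sum_{x'} P(Y \mid X = x', M = m)\, P(X = x')
\]
```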
Authors (3)
Kieu-Anh Truong Thi
Huy-Hieu Pham
Duc-Trong Le
Submitted
October 16, 2025
Key Contributions
This paper introduces CLEAR, a novel causal-inference-based framework for robust histopathology tumor detection under out-of-distribution shifts. By leveraging semantic features and explicitly incorporating mediators via the front-door principle, it mitigates the impact of confounders, leading to improved generalization compared to methods relying solely on statistical correlations.
Business Value
Enhances the reliability and generalizability of AI-powered diagnostic tools in pathology, leading to more accurate and consistent cancer detection across different hospitals and equipment, potentially improving patient outcomes.