
ProtoMask: Segmentation-Guided Prototype Learning

Abstract

Explainable AI (XAI) has gained considerable importance in recent years. Methods based on prototypical case-based reasoning have shown promising improvements in explainability. However, these methods typically rely on additional post-hoc saliency techniques to explain the semantics of learned prototypes, and multiple critiques have been raised about the reliability and quality of such techniques. For this reason, we study the use of prominent image segmentation foundation models to improve the truthfulness of the mapping between embedding and input space. We restrict the computation area of the saliency map to a predefined semantic image patch to reduce the uncertainty of such visualizations. To capture the information of an entire image, we use the bounding box of each generated segmentation mask to crop the image; each mask then yields an individual input to our novel model architecture, named ProtoMask. We conduct experiments on three popular fine-grained classification datasets with a wide set of metrics, providing a detailed overview of explainability characteristics. A comparison with other popular models demonstrates competitive performance and the unique explainability features of our model. Code: https://github.com/uos-sis/quanproto
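The mask-to-crop preprocessing the abstract describes can be sketched in a few lines. This is an illustrative approximation, not code from the authors' repository: it assumes Segment Anything (SAM) as the segmentation foundation model (the paper only says "prominent image segmentation foundation models"), and the checkpoint path and the `masks_to_crops` helper are hypothetical.

```python
# Illustrative sketch: one crop per generated segmentation mask,
# each crop becoming a separate model input, as the abstract describes.
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Hypothetical checkpoint path; SAM is an assumed choice of foundation model.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def masks_to_crops(image: np.ndarray) -> list[np.ndarray]:
    """Crop the image to the bounding box of every generated mask."""
    crops = []
    for mask in mask_generator.generate(image):   # image: HxWx3 uint8 RGB
        x, y, w, h = mask["bbox"]                 # XYWH pixel coordinates
        crops.append(image[y:y + h, x:x + w])     # one input per mask
    return crops
```

Each crop is then processed as an independent input by the prototype network, so prototype similarity is computed against semantically coherent image patches rather than arbitrary regions.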

Key Contributions

This paper introduces ProtoMask, a novel model architecture that leverages image segmentation foundation models to improve the reliability and truthfulness of explanations in prototypical case-based reasoning. By restricting the computation area of each saliency map to a predefined semantic image patch derived from a segmentation mask, ProtoMask reduces the uncertainty of these visualizations and tightens the mapping between embedding and input space.
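One plausible reading of the restricted computation area, sketched below: a post-hoc saliency map (e.g., from Grad-CAM) is gated by the segmentation mask before visualization, so attribution outside the semantic patch is discarded. This is a minimal sketch under that assumption; `restrict_saliency` is a hypothetical helper, not the paper's API.

```python
import numpy as np

def restrict_saliency(saliency: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep saliency only inside the semantic patch; zero it elsewhere.

    saliency: HxW float map from any post-hoc attribution technique.
    mask:     HxW boolean segmentation mask for the same image.
    """
    restricted = np.where(mask, saliency, 0.0)
    total = restricted.sum()
    # Optional renormalization so the retained attribution sums to 1.
    return restricted / total if total > 0 else restricted
```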

Business Value

Enhances trust and transparency in AI systems, particularly in high-stakes domains like healthcare or finance, by providing more reliable explanations for model predictions. This can facilitate adoption and debugging.