
Community Detection on Model Explanation Graphs for Explainable AI

Abstract

Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to bias, redundancy, and causality patterns. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols to benchmark module discovery in XAI.
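
The pipeline described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, assumed reconstruction, not the paper's reference implementation: it builds a feature-feature explanation graph whose edge weights are absolute correlations between attribution columns (one plausible weighting), then extracts feature modules with modularity-based community detection from networkx. The 0.3 sparsification threshold and the function names are illustrative choices.

```python
# Minimal sketch of the MoI pipeline from the abstract (assumed details):
# 1) build a feature-feature graph from per-instance attributions,
# 2) run community detection to obtain feature modules.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def explanation_graph(attributions, feature_names, threshold=0.3):
    """Build a feature-feature graph from per-instance attributions.

    attributions:  (n_instances, n_features) array, e.g. SHAP or LIME values.
    feature_names: list of n_features column names.
    Edge weights are absolute Pearson correlations between attribution
    columns; the weighting and threshold are assumptions, not the paper's.
    """
    corr = np.corrcoef(attributions, rowvar=False)
    g = nx.Graph()
    g.add_nodes_from(feature_names)
    n = len(feature_names)
    for i in range(n):
        for j in range(i + 1, n):
            w = abs(corr[i, j])
            if w > threshold:  # sparsify: keep only strongly co-attributed pairs
                g.add_edge(feature_names[i], feature_names[j], weight=w)
    return g


def feature_modules(g):
    """Community detection over the explanation graph -> feature modules."""
    return [set(c) for c in greedy_modularity_communities(g, weight="weight")]
```

Given per-instance SHAP values `shap_values` and column names `cols`, `feature_modules(explanation_graph(shap_values, cols))` would return candidate modules whose stability and synergy could then be scored with the metrics the paper says it releases.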
Authors: Ehsan Moradi
Submitted: October 31, 2025
arXiv Category: cs.SI

Key Contributions

Proposes Modules of Influence (MoI), a framework that constructs model explanation graphs from per-instance attributions, applies community detection to find feature modules that jointly affect predictions, and quantifies how these modules relate to bias, redundancy, and causality. This enables model debugging via module-level ablations and localization of bias to specific modules.
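
One way the module-level ablation could look in code is sketched below; the baseline choice (column means) and the effect metric (mean absolute change in the model's output) are illustrative assumptions rather than the paper's exact protocol.

```python
# Hedged sketch of a module-level ablation: estimate a discovered module's
# influence as the mean absolute change in predictions when all of its
# features are replaced by baseline values (assumed protocol).
import numpy as np


def module_ablation_effect(model_predict, X, module_idx, baseline=None):
    """Mean absolute prediction change when one feature module is ablated.

    model_predict: callable mapping an (n, d) array to an (n,) prediction array.
    X:             (n, d) evaluation data.
    module_idx:    column indices of the features in the module.
    baseline:      per-feature replacement values; defaults to column means.
    """
    if baseline is None:
        baseline = X.mean(axis=0)
    X_ablated = X.copy()
    X_ablated[:, module_idx] = baseline[module_idx]
    return float(np.mean(np.abs(model_predict(X) - model_predict(X_ablated))))
```

Ranking modules by this effect, or comparing effects across demographic subgroups, is one plausible route to the module-level debugging and bias localization described above.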

Business Value

Enhances the trustworthiness and reliability of AI models by exposing which groups of features jointly drive their decisions, which supports debugging, fairness assessment, and regulatory compliance.