Abstract
Extensive research has highlighted the vulnerability of graph neural networks
(GNNs) to adversarial attacks, including manipulation, node injection, and the
recently emerging threat of backdoor attacks. However, existing defenses
typically focus on a single type of attack, lacking a unified approach to
simultaneously defend against multiple threats. In this work, we leverage the
flexibility of the Mixture of Experts (MoE) architecture to design a scalable
and unified framework for defending against backdoor, edge manipulation, and
node injection attacks. Specifically, we propose a mutual information (MI)-based logic diversity
loss to encourage individual experts to focus on distinct neighborhood
structures in their decision processes, thus ensuring a sufficient subset of
experts remains unaffected under perturbations in local structures. Moreover,
we introduce a robustness-aware router that identifies perturbation patterns
and adaptively routes perturbed nodes to corresponding robust experts.
Extensive experiments conducted under various adversarial settings demonstrate
that our method consistently achieves superior robustness against multiple
graph adversarial attacks.
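To make the logic diversity objective concrete, here is a minimal sketch of how an MI-style diversity penalty over per-expert neighbor-attention distributions might look. Everything in it (the `diversity_loss` name, the attention-tensor layout, the symmetric-KL formulation) is an illustrative assumption, not the paper's actual loss.

```python
# Hypothetical sketch of an MI-style "logic diversity" penalty.
# The exact formulation is an assumption; the paper's loss may differ.
import torch


def diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (E, N, K) per-expert attention over each node's K neighbors.

    Rewards pairwise divergence between experts' attention distributions,
    pushing experts to focus on distinct neighborhood structures.
    """
    E = attn.size(0)
    logp = torch.log(attn.clamp_min(1e-12))
    total = attn.new_zeros(())
    pairs = 0
    for i in range(E):
        for j in range(i + 1, E):
            # Symmetric KL between expert i's and expert j's distributions.
            kl_ij = (attn[i] * (logp[i] - logp[j])).sum(-1)
            kl_ji = (attn[j] * (logp[j] - logp[i])).sum(-1)
            total = total + (kl_ij + kl_ji).mean()
            pairs += 1
    # Negate: minimizing the loss maximizes pairwise divergence.
    return -total / pairs
```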
Authors (3)
Yuyuan Feng
Bin Ma
Enyan Dai
Submitted
October 17, 2025
Key Contributions
This paper proposes a unified framework using a Mixture of Experts (MoE) architecture to defend graph neural networks (GNNs) against multiple adversarial attacks (backdoor, edge manipulation, and node injection). It introduces an MI-based logic diversity loss that encourages experts to specialize in distinct neighborhood structures, and a robustness-aware router that adaptively routes perturbed nodes to robust experts, ensuring a sufficient subset of experts remains unaffected under local perturbations. A rough sketch of the routing idea follows below.
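As a rough illustration of the routing idea, the sketch below gates a mixture of simple GNN-style experts using each node's own and aggregated features, standing in for perturbation-pattern detection. The layer design, expert form, and all names (`RobustMoEGNNLayer`, `router`) are hypothetical assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a robustness-aware MoE layer for GNNs.
# Expert and router designs here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RobustMoEGNNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_experts: int = 4):
        super().__init__()
        # Each expert is a simple mean-aggregation GNN layer.
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_experts)
        )
        # Router scores each node from [self features || neighborhood
        # summary], a stand-in for detecting perturbation patterns.
        self.router = nn.Linear(2 * in_dim, num_experts)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) dense adjacency.
        deg = adj.sum(-1, keepdim=True).clamp_min(1.0)
        agg = adj @ x / deg  # degree-normalized neighbor mean
        gate = F.softmax(self.router(torch.cat([x, agg], dim=-1)), dim=-1)
        out = torch.stack([F.relu(e(agg)) for e in self.experts], dim=1)
        # Weighted mixture: perturbed nodes can lean on robust experts.
        return (gate.unsqueeze(-1) * out).sum(dim=1)


# Usage sketch:
# layer = RobustMoEGNNLayer(in_dim=16, out_dim=32)
# x, adj = torch.randn(100, 16), (torch.rand(100, 100) > 0.9).float()
# h = layer(x, adj)  # (100, 32) routed node embeddings
```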
Business Value
Enhances the security and reliability of GNNs used in critical applications like fraud detection, network security, and recommendation systems, reducing risks associated with adversarial manipulation.