
Graph Unlearning Meets Influence-aware Negative Preference Optimization

📄 Abstract

Recent advances in graph unlearning models improve model utility by keeping node representations essentially invariant while applying gradient ascent on the forget set to achieve unlearning. However, gradient ascent diverges rapidly, which causes a drastic degradation in model utility during the unlearning process. In this paper, we introduce INPO, an Influence-aware Negative Preference Optimization framework that focuses on slowing the divergence speed and improving the robustness of model utility to the unlearning process. Specifically, we first analyze that NPO diverges more slowly than gradient ascent and theoretically propose that unlearning high-influence edges can reduce the impact of unlearning. We design an influence-aware message function to amplify the influence of unlearned edges and mitigate the tight topological coupling between the forget set and the retain set; the influence of each edge is quickly estimated by a removal-based method. Additionally, we propose a topological entropy loss that avoids excessive information loss in the local structure during unlearning. Extensive experiments on five real-world datasets demonstrate that the INPO-based model achieves state-of-the-art performance on all forget-quality metrics while maintaining model utility. Code is available at https://github.com/sh-qiangchen/INPO.
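
To make the contrast in the abstract concrete, the sketch below compares a plain gradient-ascent forget loss with an NPO-style forget loss for node classification. This is a minimal illustration under assumed interfaces, not the authors' implementation: the `model(x, edge_index)` call, the frozen reference copy `ref_model`, and the `beta` value are assumptions, and the paper's influence-aware message function and topological entropy loss are omitted.

```python
# Minimal sketch (not the paper's code): gradient-ascent unlearning vs. an
# NPO-style forget loss for a node-classification GNN.
# Assumes `logits = model(x, edge_index)` gives per-node class logits and
# `ref_logits` comes from a frozen copy of the pre-unlearning model.
import torch
import torch.nn.functional as F

def gradient_ascent_forget_loss(logits, labels, forget_mask):
    """Gradient ascent: maximize cross-entropy on the forget set
    (negated so a standard minimizer can be used). This objective is
    unbounded, which is the rapid-divergence problem the paper targets."""
    return -F.cross_entropy(logits[forget_mask], labels[forget_mask])

def npo_forget_loss(logits, ref_logits, labels, forget_mask, beta=0.1):
    """NPO-style forget loss: (2/beta) * log(1 + (p_theta(y)/p_ref(y))^beta).
    It still pushes probability away from the forget labels, but the loss
    saturates instead of diverging, slowing utility degradation."""
    logp = F.log_softmax(logits[forget_mask], dim=-1)
    logp_ref = F.log_softmax(ref_logits[forget_mask], dim=-1)
    y = labels[forget_mask].unsqueeze(-1)
    log_ratio = logp.gather(-1, y) - logp_ref.gather(-1, y)  # log p_theta(y) - log p_ref(y)
    return (2.0 / beta) * F.softplus(beta * log_ratio).mean()
```

In practice the forget loss would be combined with a retain-set utility term; the key point of the sketch is the bounded, reference-anchored shape of the NPO objective compared with unbounded gradient ascent.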
Authors (9)
Qiang Chen
Zhongze Wu
Ang He
Xi Lin
Shuo Jiang
Shan You
+3 more
Submitted
October 22, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

Introduces INPO (Influence-aware Negative Preference Optimization), a graph unlearning framework that addresses utility degradation. It slows the divergence of the unlearning objective and improves robustness by unlearning high-influence edges and using an influence-aware message function to mitigate the tight topological coupling between the forget set and the retain set.
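
The abstract says edge influence is estimated quickly with a removal-based method. As a rough illustration only, the brute-force leave-one-edge-out sketch below scores each forget-set edge by how much its removal shifts predictions on retained nodes; the paper's estimator is faster, and the `model(x, edge_index)` interface and KL-based score are assumptions made for this example.

```python
# Naive leave-one-edge-out influence sketch (assumption: illustrative only;
# the paper uses a faster removal-based estimator).
import torch
import torch.nn.functional as F

@torch.no_grad()
def edge_influence_scores(model, x, edge_index, forget_edge_ids, retain_mask):
    """Score each candidate edge by the KL divergence between the model's
    predictions on retained nodes with and without that edge."""
    base = F.softmax(model(x, edge_index), dim=-1)[retain_mask]
    scores = {}
    for e in forget_edge_ids:
        keep = torch.ones(edge_index.size(1), dtype=torch.bool)
        keep[e] = False                       # drop edge e from the graph
        out = F.softmax(model(x, edge_index[:, keep]), dim=-1)[retain_mask]
        scores[e] = F.kl_div(out.log(), base, reduction="batchmean").item()
    return scores
```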

Business Value

Enables organizations to comply with data deletion requests (e.g., GDPR's 'right to be forgotten') while maintaining the performance of their graph-based AI models, crucial for privacy-sensitive applications.