Introduces INPO (Influence-aware Negative Preference Optimization), a graph unlearning framework that addresses the utility degradation common to unlearning methods. It slows the unlearned model's divergence from the original and improves robustness by prioritizing high-influence edges for unlearning, and it mitigates topological coupling between the forget and retain sets with an influence-aware message function.
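The summary above does not specify how edge influence is computed, but the selection step can be illustrated with a minimal leave-one-out sketch: score each forget-set edge by how much a simple structural statistic (here, a common-neighbour count) changes when the edge is removed, then unlearn the top-scoring edges first. All function names and the influence proxy are hypothetical illustrations, not INPO's actual method.

```python
from collections import defaultdict

def build_adj(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def edge_influence(edges, edge):
    """Leave-one-out proxy for influence: how much the graph's total
    common-neighbour score drops when `edge` is removed."""
    def total_cn(es):
        adj = build_adj(es)
        return sum(len(adj[u] & adj[v]) for u, v in es)
    remaining = [e for e in edges if e != edge]
    return total_cn(edges) - total_cn(remaining)

def select_high_influence(edges, forget_set, k):
    """Pick the k forget-set edges whose removal perturbs the graph most."""
    scored = sorted(forget_set,
                    key=lambda e: edge_influence(edges, e),
                    reverse=True)
    return scored[:k]

# Tiny example: a triangle 0-1-2 plus a pendant edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
# The triangle edge (0, 1) is more structurally coupled than (2, 3).
print(select_high_influence(edges, [(0, 1), (2, 3)], 1))
```

In a real pipeline the proxy score would be replaced by a model-based influence estimate, and the selected edges would feed the negative-preference unlearning objective.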
This enables organizations to comply with data deletion requests (e.g., GDPR's "right to be forgotten") while preserving the performance of their graph-based AI models, which is crucial for privacy-sensitive applications.