📄 Abstract
It is often desirable to remove (a.k.a. unlearn) a specific part of the
training data from a trained neural network model. A typical application
scenario is to protect the data holder's right to be forgotten, which has been
promoted by many recent regulations. Existing unlearning methods involve
training alternative models with the remaining data, which may be costly and
challenging to verify from the data holder's or a third-party auditor's
perspective. In this work, we provide a new angle and propose a novel
unlearning approach that imposes a carefully crafted "patch" on the original
neural network to achieve targeted "forgetting" of the data requested for
deletion. Specifically, inspired by the research line of neural network repair,
we propose to strategically seek a lightweight, minimal "patch" for unlearning
a given data point with a certifiable guarantee. Furthermore, to unlearn a
considerable number of data points (or an entire class), we propose to
iteratively select a small subset of representative data points to unlearn,
which achieves the effect of unlearning the whole set. Extensive experiments on
multiple categorical datasets demonstrate our approach's effectiveness,
achieving measurable unlearning while preserving the model's performance and
remaining competitive in efficiency and memory consumption with various
baseline methods.
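One plausible reading of the "lightweight minimum patch" idea is a small constrained optimization: find a low-magnitude additive perturbation to the network's parameters that makes the model stop predicting the forgotten point's original label, while keeping outputs on retained data close to the original model's. The PyTorch sketch below illustrates that reading only; the function name `find_unlearning_patch`, the loss weights, and the L1 size penalty are illustrative assumptions, not the paper's actual PRUNE procedure or its certification mechanism.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def find_unlearning_patch(model, x_forget, y_forget, x_ref,
                          lam=1.0, steps=200, lr=1e-2):
    """Hypothetical sketch: optimize a small additive patch `delta` on the
    parameters so `x_forget` is no longer classified as `y_forget`, while
    outputs on the reference batch `x_ref` stay close to the originals.
    Not the paper's certified algorithm."""
    base = {k: v.detach() for k, v in model.named_parameters()}
    deltas = {k: torch.zeros_like(v, requires_grad=True)
              for k, v in base.items()}
    opt = torch.optim.Adam(list(deltas.values()), lr=lr)

    with torch.no_grad():
        ref_logits = functional_call(model, base, (x_ref,))

    for _ in range(steps):
        patched = {k: base[k] + deltas[k] for k in base}
        logits_f = functional_call(model, patched, (x_forget.unsqueeze(0),))
        logits_r = functional_call(model, patched, (x_ref,))
        # Push the forgotten point away from its original label...
        forget_loss = -F.cross_entropy(logits_f, y_forget.unsqueeze(0))
        # ...while keeping behaviour on retained data unchanged...
        drift_loss = F.mse_loss(logits_r, ref_logits)
        # ...and keeping the patch itself lightweight (L1 penalty, assumed).
        size_loss = sum(d.abs().sum() for d in deltas.values())
        loss = forget_loss + lam * drift_loss + 1e-4 * size_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return {k: d.detach() for k, d in deltas.items()}
```

Evaluating the model through `torch.func.functional_call` with the patched parameter dictionary keeps the original weights untouched while letting gradients flow into the patch, which matches the spirit of patching rather than retraining.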
Authors (4)
Xuran Li
Jingyi Wang
Xiaohan Yuan
Peixin Zhang
Key Contributions
Proposes PRUNE, a novel patching-based framework for certifiable unlearning of neural networks. Instead of retraining, it applies lightweight 'patches' to the original model to achieve targeted forgetting with verifiable guarantees. This approach is more efficient and easier to verify than retraining from scratch.
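The "iteratively select a small subset of representative data points" step could, under generous assumptions, look like a greedy loop: patch one not-yet-forgotten point, check which other requested points the patch already covers, and repeat on the remainder. In the sketch below, `apply_patch` and the predicate `is_forgotten` are hypothetical helpers, `find_unlearning_patch` refers to the sketch above, and "take the first remaining point" stands in for whatever representativeness criterion the paper actually uses.

```python
import torch

def apply_patch(model, patch):
    """Hypothetical helper: add the optimized deltas onto the live model."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.add_(patch[name])

def unlearn_set(model, forget_points, x_ref, is_forgotten):
    """Greedy sketch of iterative representative selection. Assumes each
    patch succeeds on its own target, so `remaining` shrinks every pass."""
    patches, remaining = [], list(forget_points)
    while remaining:
        x, y = remaining[0]  # placeholder for a representativeness criterion
        patch = find_unlearning_patch(model, x, y, x_ref)  # sketched earlier
        apply_patch(model, patch)
        patches.append(patch)
        # Drop every requested point the patch also happens to cover.
        remaining = [(xi, yi) for (xi, yi) in remaining
                     if not is_forgotten(model, xi, yi)]
    return patches
```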
Business Value
Enables organizations to comply with data privacy regulations (like GDPR's 'right to be forgotten') efficiently and with strong guarantees, reducing legal and reputational risks associated with data handling.