
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning

📄 Abstract

Federated Learning (FL) is a distributed training paradigm in which participants collaborate to build a global model while preserving the privacy of the data involved, which never leaves participant devices. However, the very mechanisms proposed to ensure this privacy also make it harder to defend against attackers seeking to compromise the training outcome. In this context, we present Fast, Private, and Protected (FPP), a novel approach that safeguards federated training while still enabling secure aggregation to preserve data privacy. FPP achieves this by evaluating each round through participants' assessments and by supporting training recovery after an attack. It also employs a reputation-based mechanism to limit the participation of attackers. We built a dockerized environment to compare FPP's performance against other approaches from the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP achieves a rapid convergence rate and can converge even in the presence of malicious participants performing model poisoning attacks.
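
To make the comparison concrete, here is a minimal sketch of the aggregation baselines the abstract names: plain FedAvg, coordinate-wise Trimmed Mean, and coordinate-wise Median. The function names, the flat-vector model representation, and the toy attack setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedavg(updates: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted average of client updates (one row per client),
    e.g. weighted by local dataset size."""
    weights = weights / weights.sum()
    return weights @ updates

def trimmed_mean(updates: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean: per coordinate, drop the k largest
    and k smallest values before averaging, bounding the influence any
    small group of poisoned updates can exert."""
    n = updates.shape[0]
    k = int(trim_ratio * n)
    sorted_updates = np.sort(updates, axis=0)
    kept = sorted_updates[k:n - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)

def coordinate_median(updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median, a classic Byzantine-robust aggregator."""
    return np.median(updates, axis=0)

# Toy round: 8 honest clients plus 2 attackers submitting inflated
# updates, mimicking a simple model poisoning attempt.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))
poisoned = rng.normal(10.0, 0.1, size=(2, 5))
updates = np.vstack([honest, poisoned])

print("FedAvg      :", fedavg(updates, np.ones(10)))
print("Trimmed mean:", trimmed_mean(updates, trim_ratio=0.2))
print("Median      :", coordinate_median(updates))
```

Running this shows the intuition behind the baselines: FedAvg is dragged far off by the two poisoned rows, while the trimmed mean and median stay close to the honest clients' updates.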

Key Contributions

Introduces Fast, Private, and Protected (FPP), a novel approach for federated learning that safeguards training against model poisoning attacks while preserving data privacy through secure aggregation. FPP also incorporates training recovery and a reputation mechanism to mitigate malicious participants.
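
The paper does not spell out the reputation update rule in this summary, but a hypothetical sketch in the spirit of FPP might look like the following: participants' assessments mark each round as accepted or rejected, reputations are updated accordingly, selection probability is proportional to reputation, and a rejected round triggers recovery by rolling back to the last accepted global model. All names, thresholds, and update rules here are assumptions for illustration.

```python
import numpy as np

class ReputationLedger:
    def __init__(self, n_clients: int, decay: float = 0.9):
        self.scores = np.ones(n_clients)  # start everyone at full trust
        self.decay = decay

    def update(self, round_participants, round_accepted: bool):
        """Reward participants of accepted rounds, penalize rejected ones."""
        delta = 0.1 if round_accepted else -0.3
        for c in round_participants:
            self.scores[c] = np.clip(
                self.decay * self.scores[c] + delta, 0.0, 1.0
            )

    def sample(self, k: int, rng) -> np.ndarray:
        """Pick k participants with probability proportional to reputation,
        so low-reputation (likely malicious) clients are rarely chosen."""
        p = self.scores / self.scores.sum()
        return rng.choice(len(self.scores), size=k, replace=False, p=p)

def run_round(ledger, global_model, aggregate, assess, rng, k=5):
    """One training round with recovery: `aggregate` and `assess` are
    hypothetical callables standing in for secure aggregation and the
    participants' round assessment. A rejected round is simply discarded,
    recovering the last accepted model."""
    participants = ledger.sample(k, rng)
    candidate = aggregate(global_model, participants)
    accepted = assess(candidate)
    ledger.update(participants, accepted)
    return candidate if accepted else global_model
```

The key design point this sketch tries to capture is that reputation and recovery compose: selection pressure steadily excludes attackers across rounds, while rollback limits the damage any single poisoned round can do.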

Business Value

Enables organizations to leverage sensitive distributed data for ML model training without compromising user privacy or model integrity, a capability that is crucial for regulated industries such as healthcare and finance.