FedClean: a defense mechanism against parameter poisoning attacks in federated learning

Abstract

In federated learning (FL) systems, a centralized entity (the server) does not have access to the training data; instead, it receives model parameter updates that each participant computes independently, based solely on its own samples. Unfortunately, FL is susceptible to model poisoning attacks, in which malicious or malfunctioning entities share polluted updates that can compromise the model's accuracy. In this study, we propose FedClean, an FL mechanism that is robust to model poisoning attacks. The accuracy of models trained with the assistance of FedClean is close to that achieved when no malicious entities participate.
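The abstract does not describe FedClean's actual mechanism, but the threat model it sketches (a server aggregating client updates, some of which may be poisoned) can be illustrated with a minimal toy example. The coordinate-wise median below is a generic robust aggregator chosen purely for illustration; it is an assumption for this sketch, not the defense proposed in the paper. All names and values here are hypothetical.

```python
import random
from statistics import median

# Toy setting from the abstract: clients independently compute parameter
# updates from their own samples and send them to the server. One client
# is malicious and submits a polluted update.
# NOTE: the aggregation below is a generic coordinate-wise median, used
# only to illustrate robust aggregation; it is NOT FedClean's algorithm.

random.seed(0)

# Nine honest clients send 2-d updates near the true value (1.0, -2.0).
honest = [(1.0 + random.gauss(0, 0.1), -2.0 + random.gauss(0, 0.1))
          for _ in range(9)]
# One malicious client submits an extreme poisoned update.
updates = honest + [(100.0, 100.0)]

def mean_agg(us):
    """Naive FedAvg-style mean: a single attacker can skew it badly."""
    n = len(us)
    return tuple(sum(u[i] for u in us) / n for i in range(2))

def median_agg(us):
    """Coordinate-wise median: stays near the honest consensus."""
    return tuple(median(u[i] for u in us) for i in range(2))

print("mean  :", mean_agg(updates))    # pulled far toward (100, 100)
print("median:", median_agg(updates))  # close to (1.0, -2.0)
```

Running the sketch shows why a defense is needed: the naive mean lands nowhere near the honest consensus, while the robust aggregate recovers accuracy close to the attack-free case, which is the property the abstract claims for FedClean.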
