When the Curious Abandon Honesty: Federated Learning Is Not Private

Date:

Abstract: In federated learning (FL), data does not leave personal devices when they are jointly training a machine learning model. Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training. Because data never “leaves” personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, as even a passive attacker observing gradients can reconstruct data of individual users contributing to the protocol. In this paper, we argue that prior work still largely underestimates the vulnerability of FL. This is because prior efforts exclusively consider passive attackers that are honest-but-curious. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model’s weights before users compute model gradients. We call the modified weights trap weights. Our active attacker is able to recover user data perfectly. Recovery comes with near zero cost: the attack requires no complex optimization objectives. Instead, our attacker exploits inherent data leakage from model gradients and simply amplifies this effect by maliciously altering the weights of the shared model through the trap weights. Find the video here and the original paper here.
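
The sketch below is not the paper's trap-weights attack; it only illustrates, under assumed shapes and a toy loss, the inherent leakage the abstract refers to: for a fully connected layer with a bias, a single user's input can be read directly off the gradients that the user would share, since the weight gradient of each output unit is the bias gradient of that unit scaled by the input.

```python
# Minimal sketch of gradient leakage from one fully connected layer
# (illustrative only; layer sizes, loss, and variable names are assumptions).
import torch

torch.manual_seed(0)

x = torch.rand(1, 8)                 # one user's private input (hypothetical data)
layer = torch.nn.Linear(8, 4)        # first layer of the shared model
loss = layer(x).pow(2).sum()         # any loss that backpropagates through the layer
loss.backward()

# These gradients are what the user would send to the central party in FL.
dW, db = layer.weight.grad, layer.bias.grad

# For a single sample, dW[i] = db[i] * x, so dividing recovers the input.
i = db.abs().argmax()                # pick a unit with a non-zero bias gradient
x_reconstructed = dW[i] / db[i]

print(torch.allclose(x_reconstructed, x.squeeze(), atol=1e-5))  # True: input recovered
```

With averaged gradients over a batch this ratio mixes many inputs together; the paper's contribution, per the abstract, is that a dishonest central party can choose trap weights that amplify this per-sample leakage rather than letting it wash out.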