Journal of Xidian University ›› 2023, Vol. 50 ›› Issue (4): 111-120.doi: 10.19665/j.issn1001-2400.2023.04.011

• Special Issue on Cyberspace Security •

Differentially private federated learning framework with adaptive clipping

WANG Fangwei, XIE Meiyun, LI Qingru, WANG Changguang

  1. Hebei Province Key Laboratory of Network and Information Security, College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China
  • Received: 2023-01-08 Online: 2023-08-20 Published: 2023-10-17
  • Contact: Changguang WANG E-mail:fw_wang@hebtu.edu.cn;xmy-123@stu.hebtu.edu.cn;qingruli@hebtu.edu.cn;wangcg@hebtu.edu.cn

Abstract:

Federated learning allows the parties involved in training to build a model collaboratively without sharing their own data. Its data isolation strategy safeguards the privacy and security of user data to a certain extent and effectively alleviates the problem of data silos. However, the training process of federated learning involves a large number of parameter exchanges between the participants and the server, so a risk of privacy disclosure remains. To address the privacy protection problem during data transmission, a differentially private federated learning framework with adaptive clipping, ADP_FL, is proposed. In this framework, each participant trains the model on its own data by performing multiple local iterations. In each iteration the gradient is clipped with an adaptively selected clipping threshold, which limits the gradient to a reasonable range. Dynamic Gaussian noise is added only to the uploaded model parameters to mask the contribution of each participant, and the server aggregates the received noisy parameters to update the global model. The adaptive gradient clipping strategy not only calibrates the gradient reasonably, but also controls the noise scale by dynamically changing the sensitivity, since the clipping threshold enters the sensitivity as a parameter. The results of theoretical analysis and experiments show that the proposed framework can still achieve high model accuracy under strong privacy constraints.
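
The abstract describes the per-round procedure only at a high level; the sketch below illustrates one plausible reading of it, not the paper's actual implementation. It assumes a quantile-based rule for the adaptive clipping threshold, uses a least-squares loss as a stand-in for each participant's local objective, and the names local_update, server_round, noise_multiplier and quantile are illustrative rather than taken from the paper.

# Minimal sketch of one ADP_FL-style round (assumptions noted above).
import numpy as np

def clip_by_norm(grad, threshold):
    """Scale the gradient so its L2 norm does not exceed the threshold."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, threshold / (norm + 1e-12))

def local_update(weights, data, labels, lr=0.1, local_iters=5,
                 quantile=0.5, noise_multiplier=0.1, rng=None):
    """One participant: several local iterations with adaptive clipping,
    then Gaussian noise scaled by the current threshold on the update."""
    if rng is None:
        rng = np.random.default_rng()
    w = weights.copy()
    norms = []
    for _ in range(local_iters):
        # Least-squares gradient as a stand-in for the real local loss.
        grad = 2.0 * data.T @ (data @ w - labels) / len(labels)
        norms.append(np.linalg.norm(grad))
        # Adaptive threshold: a quantile of the gradient norms seen so far
        # (an assumed rule; the paper's exact adaptation rule is not given here).
        threshold = np.quantile(norms, quantile)
        w -= lr * clip_by_norm(grad, threshold)
    update = w - weights
    # Dynamic sensitivity: the noise scale follows the final clipping threshold,
    # and noise is added only to the parameters that will be uploaded.
    noise = rng.normal(0.0, noise_multiplier * threshold, size=update.shape)
    return update + noise

def server_round(weights, clients, rng=None):
    """Server aggregates the noisy updates by simple averaging."""
    updates = [local_update(weights, X, y, rng=rng) for X, y in clients]
    return weights + np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    clients = []
    for _ in range(4):
        X = rng.normal(size=(50, 3))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        clients.append((X, y))
    w = np.zeros(3)
    for _ in range(20):
        w = server_round(w, clients, rng=rng)
    print("estimated weights:", w)

In this toy setting the noise multiplier trades privacy for accuracy: a larger value masks each participant's contribution more strongly but slows the convergence of the averaged global model.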

Key words: federated learning, differential privacy, privacy disclosure, adaptive clipping

CLC Number: TP391