TY - GEN
T1 - Differential Privacy Approach to Solve Gradient Leakage Attack in a Federated Machine Learning Environment
AU - Yadav, Krishna
AU - Gupta, B. B.
AU - Chui, Kwok Tai
AU - Psannis, Konstantinos
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - The recent growth of federated machine learning has dramatically leveraged traditional machine learning techniques for intrusion detection. By keeping training datasets at decentralized nodes, federated machine learning keeps people’s data private; however, the federated learning mechanism still suffers from gradient leakage attacks. Adversaries can exploit the shared gradients to reconstruct people’s private data with high accuracy, and later use this private network data to launch more devastating attacks against users. It is therefore essential to develop a solution that prevents these attacks. This paper introduces differential privacy, using Gaussian and Laplace mechanisms, to secure the updated gradients during communication. Our results show that clients can achieve a significant level of accuracy with differentially private gradients.
AB - The recent growth of federated machine learning has dramatically leveraged traditional machine learning techniques for intrusion detection. By keeping training datasets at decentralized nodes, federated machine learning keeps people’s data private; however, the federated learning mechanism still suffers from gradient leakage attacks. Adversaries can exploit the shared gradients to reconstruct people’s private data with high accuracy, and later use this private network data to launch more devastating attacks against users. It is therefore essential to develop a solution that prevents these attacks. This paper introduces differential privacy, using Gaussian and Laplace mechanisms, to secure the updated gradients during communication. Our results show that clients can achieve a significant level of accuracy with differentially private gradients.
KW - Differential privacy
KW - Federated learning
KW - Gradient leakage
KW - Intrusion detection
KW - Machine learning
UR - http://www.scopus.com/inward/record.url?scp=85101336910&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-66046-8_31
DO - 10.1007/978-3-030-66046-8_31
M3 - Conference contribution
AN - SCOPUS:85101336910
SN - 9783030660451
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 378
EP - 385
BT - Computational Data and Social Networks - 9th International Conference, CSoNet 2020, Proceedings
A2 - Chellappan, Sriram
A2 - Choo, Kim-Kwang Raymond
A2 - Phan, NhatHai
T2 - 9th International Conference on Computational Data and Social Networks, CSoNet 2020
Y2 - 11 December 2020 through 13 December 2020
ER -