Federated Learning's Dynamic Defense Against Byzantine Attacks: Integrating SIFT-Wavelet and Differential Privacy for Byzantine Grade Levels Detection

Authors

  • Sahithi Godavarthi. 1a Research Scholar, Dept. of CSE, GITAM School of Technology, GITAM (Deemed to be University), Visakhapatnam, AP, India. 1b Assistant Professor, Department of Emerging Technologies, CVR College of Engineering, Hyderabad, Telangana, India.
  • Dr. Venkateswara Rao G. 2 Professor, Dept. of CSE, GITAM School of Technology, GITAM (Deemed to be University), Visakhapatnam, AP, India. https://orcid.org/0000-0001-6090-339X

DOI:

https://doi.org/10.22399/ijcesen.538

Keywords:

Federated Learning, Byzantine attacks, Distributed Learning, Neural Network, Robust and Dynamic Aggregation

Abstract

Federated learning (FL) enables decentralized training across multiple devices while keeping raw data local, reducing the need for centralized data storage and transmission and thereby mitigating the privacy risks of traditional data aggregation. However, FL introduces new challenges, notably susceptibility to Byzantine poisoning attacks, in which rogue participants tamper with model updates and threaten the consistency and security of the aggregated model. Our approach addresses this vulnerability through robust aggregation methods, sophisticated pre-processing techniques, and a novel Byzantine grade-level detection mechanism. We introduce a federated aggregation operator designed to mitigate the impact of malicious clients. Pre-processing includes data loading and transformation, data augmentation, and feature extraction using SIFT and wavelet transforms. Additionally, we employ differential privacy and model compression to improve the robustness and performance of the federated learning framework. The approach is evaluated with a tailored neural network on the MNIST dataset, achieving 97% accuracy in detecting Byzantine attacks. The results demonstrate that robust aggregation significantly improves the resilience and performance of the global model. This comprehensive approach preserves the integrity of the federated learning process, effectively filtering out adversarial influences and sustaining high accuracy even in the presence of adversarial Byzantine clients.
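To make the two server-side ingredients named above concrete, the sketch below shows a generic Byzantine-robust aggregation operator (coordinate-wise trimmed mean, a standard choice; the paper's own operator may differ) followed by a Gaussian-mechanism differential-privacy step (clip, then add noise). The function names, trim ratio, clipping norm, and noise scale are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def robust_aggregate(client_updates, trim_ratio=0.2):
    """Coordinate-wise trimmed-mean aggregation.

    For each parameter coordinate, the most extreme client values are
    discarded before averaging, bounding the influence any single
    Byzantine client can exert on the aggregate.
    """
    updates = np.stack(client_updates)          # shape: (n_clients, n_params)
    n = updates.shape[0]
    k = int(n * trim_ratio)                     # clients trimmed per side
    sorted_updates = np.sort(updates, axis=0)   # sort each coordinate separately
    trimmed = sorted_updates[k:n - k]           # drop k smallest and k largest
    return trimmed.mean(axis=0)

def add_dp_noise(aggregate, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Gaussian-mechanism differential privacy: clip the update, add noise."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(aggregate)
    clipped = aggregate * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_scale * clip_norm, aggregate.shape)

# Nine honest clients near the true update, plus one Byzantine outlier.
honest = [np.array([1.0, -0.5]) + 0.01 * i for i in range(9)]
byzantine = [np.array([100.0, 100.0])]
agg = robust_aggregate(honest + byzantine, trim_ratio=0.2)   # ~[1.045, -0.455]
private_agg = add_dp_noise(agg)
```

With a 20% trim, the outlier at (100, 100) is discarded entirely, so the aggregate stays close to the honest clients' mean; a plain average would have been pulled to roughly (11, 9.6).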

References

M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. IEEE Symp. Security Privacy (SP), pp. 19–35.

P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings et al., (2019). Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977.

L. Chen, H. Wang, Z. Charles, and D. Papailiopoulos, (2018). DRACO: Byzantine-resilient distributed training via redundant gradients. Proc. Int. Conf. Machine Learning, pp. 903–912.

S. Prathiba, G. Raja, S. Anbalagan, S. Gurumoorthy, N. Kumar, and M. Guizani, (2022). Cybertwin-driven federated learning based personalized service provision for 6G-V2X. IEEE Trans. Veh. Technol., 71(5);4632–4641.

J. So, B. Guler, and A. S. Avestimehr, (2021). Turbo-aggregate: Breaking the quadratic aggregation barrier in secure federated learning. IEEE J. Sel. Areas Inf. Theory, 2(1);479–489. https://doi.org/10.1109/JSAIT.2021.3054610

J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, (2020). Inverting gradients – How easy is it to break privacy in federated learning?. arXiv preprint arXiv:2003.14053.

K. Pillutla, S. M. Kakade, and Z. Harchaoui, (2022). Robust aggregation for federated learning. IEEE Trans. Signal Process., 70;1142–1154.

Z. Wang, M. Song, Z. Zhang, Y. Song, Q. Wang, and H. Qi, (2019). Beyond inferring class representatives: user-level privacy leakage from federated learning. Proc. IEEE Conf. Comput. Commun., pp. 2512–2520.

Y. Chen, L. Su, and J. Xu, (2017). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proc. ACM Meas. Anal. Comput. Syst., 1(2);1–25. https://doi.org/10.48550/arXiv.1705.05491

P. Kalpana, P. Srilatha, G. S. Krishna, A. Alkhayyat, and D. Mazumder, (2024). Denial of service (DoS) attack detection using feed forward neural network in cloud environment. Proc. Int. Conf. Data Science and Network Security (ICDSNS), pp. 1–4. https://doi.org/10.1109/ICDSNS62112.2024.10691181

S. A. Nabi, P. Kalpana, N. S. Chandra, L. Smitha, K. Naresh, A. E. Ezugwu, and L. Abualigah, (2024). Distributed private preserving learning based chaotic encryption framework for cognitive healthcare IoT systems. Informatics in Medicine Unlocked, 49;101547. https://doi.org/10.1016/j.imu.2024.101547

L. Zhao, J. Jiang, B. Feng, Q. Wang, C. Shen, and Q. Li, (2022). SEAR: Secure and efficient aggregation for Byzantine-robust federated learning. IEEE Trans. Dependable Secure Comput., 19(5);3329–3342. https://doi.org/10.1109/TDSC.2021.3093711

P. Kalpana and R. Anandan, (2023). A capsule attention network for plant disease classification. Traitement du Signal, 40(5);2051–2062. https://doi.org/10.18280/ts.400523

S. Li, Y. Cheng, W. Wang, Y. Liu, and T. Chen, (2020). Learning to detect malicious clients for robust federated learning. arXiv preprint arXiv:2002.00211.

E. M. El Mhamdi, R. Guerraoui, and S. Rouault, (2018). The hidden vulnerability of distributed learning in Byzantium. arXiv preprint arXiv:1802.07927.

C. Fung, C. J. M. Yoon, and I. Beschastnikh, (2018). Mitigating sybils in federated learning poisoning. arXiv preprint arXiv:1808.04866.

D. Cao, S. Chang, Z. Lin, G. Liu, and D. Sun, (2019). Understanding distributed poisoning attack in federated learning. Proc. IEEE ICPADS, pp. 233–239.

W. Wan, S. Hu, J. Lu, L. Y. Zhang, H. Jin, and Y. He, (2022). Shielding federated learning: Robust aggregation with adaptive client selection. Proc. IJCAI, pp. 753–760.

J. So, B. Güler, and A. S. Avestimehr, (2021). Byzantine-resilient secure federated learning. IEEE J. Sel. Areas Commun., 39(7);2168–2181.

X. Cao, M. Fang, J. Liu, and N. Z. Gong, (2020). FLTrust: Byzantine-robust federated learning via trust bootstrapping. arXiv preprint arXiv:2012.13995.

Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan, (2019). Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963.

P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. Proc. NeurIPS, pp. 119–129.

A. Holzinger, (2016). Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Informatics, 3(2);119–131.

T. Gu, B. Dolan-Gavitt, and S. Garg, (2017). BadNets: Identifying vulnerabilities in the machine learning model supply chain. Machine Learning and Computer Security Workshop.

M. S. O. Eshan, M. N. H. Nafi, N. Sakib, M. H. Emon, T. Reza, M. Z. Parvez, P. D. Barua, and S. Chakraborty, (2023). Byzantine-resilient federated learning leveraging confidence score to identify retinal disease. IEEE.

N. P. Jouppi, C. Young, N. Patil et al., (2017). In-datacenter performance analysis of a tensor processing unit. Proc. 44th ACM/IEEE Annual Int. Symp. Computer Architecture (ISCA), pp. 1–12.

Q. Xia, Z. Tao, Z. Hao, and Q. Li, (2019). FABA: An algorithm for fast aggregation against Byzantine attacks in distributed neural networks. Proc. 28th Int. Joint Conf. Artificial Intelligence (IJCAI-19).

C. Wang, G. Liu, H. Huang, W. Feng, K. Peng, and L. Wang, (2019). MiaSec: Enabling data indistinguishability against membership inference attacks in MLaaS. IEEE Trans. Sustainable Computing, 5(3);365–376.

P. Kalpana, R. Anandan, A. G. Hussien et al., (2024). Plant disease recognition using residual convolutional enlightened Swin transformer networks. Sci. Rep., 14;8660. https://doi.org/10.1038/s41598-024-56393-8

A. Bittau, U. Erlingsson, P. Maniatis, I. Mironov, A. Raghunathan, D. Lie, M. Rudominer, U. Kode, J. Tinnes, and B. Seefeld, (2017). Prochlo: Strong privacy for analytics in the crowd. Proc. 26th Symp. Operating Systems Principles, pp. 441–459.

E. Aruna and A. Sahayadhas, (2024). Blockchain-inspired lightweight dynamic encryption schemes for a secure health care information exchange system. Engineering, Technology & Applied Science Research, 14(4);15050–15055. https://doi.org/10.48084/etasr.7390

F. Sattler, K.-R. Müller, T. Wiegand, and W. Samek, (2020). On the Byzantine robustness of clustered federated learning. Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 8861–8865.

Published

2024-10-30

How to Cite

Godavarthi, S., & G., D. V. R. (2024). Federated Learning’s Dynamic Defense Against Byzantine Attacks: Integrating SIFT-Wavelet and Differential Privacy for Byzantine Grade Levels Detection. International Journal of Computational and Experimental Science and Engineering, 10(4). https://doi.org/10.22399/ijcesen.538

Section

Research Article