Security and Privacy Challenges in Deep Learning Models Hosted on Cloud Platforms
DOI: https://doi.org/10.22399/ijcesen.3235

Keywords: Deep Learning, Cloud Security, Privacy Risks, Adversarial Attacks, Model Inversion, Secure AI

Abstract
The rapid integration of deep learning into cloud computing services enables businesses to train AI models at scale and perform real-time analysis across diverse sectors. However, combining deep learning with cloud platforms introduces significant security vulnerabilities, including adversarial threats, data breaches, model inversion, and unauthorized system intrusion. The risks of data exposure, degraded model reliability, and regulatory non-compliance demand more robust security controls for cloud-hosted AI systems. This study analyzes the security issues facing deep learning models in the cloud by examining attacks that manipulate model inputs, poison training data, and exploit APIs, undermining security across multi-tenant cloud environments. It compares the encryption protocols, federated learning capabilities, access control systems, and differential privacy features of AWS, Google Cloud, Microsoft Azure, and IBM Cloud, and evaluates the regulatory compliance requirements of GDPR, HIPAA, and CCPA to identify gaps in AI security governance. The findings show that Amazon Web Services and Google Cloud deliver strong encryption and anomaly detection, while Microsoft Azure stands out for its advanced federated learning support; IBM Cloud offers comparatively limited AI-focused security features, illustrating how security implementation diverges across platforms. Although homomorphic encryption and differential privacy have matured, their practical use remains restricted by high operational costs, regulatory uncertainty, and adversarial attacks. Federated learning, a distributed training method, mitigates poisoning attacks but still requires stronger protection mechanisms to remain secure.
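To make the input-manipulation attacks discussed above concrete, the sketch below shows the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. The weights, input, and epsilon value are illustrative assumptions chosen for demonstration, not parameters from the study; real attacks target deep networks via automatic differentiation, but the principle (perturb each feature by a small step in the loss-increasing direction) is the same.

```python
import math

# Hypothetical toy classifier: logistic regression with fixed weights,
# used only to show how a gradient-sign perturbation flips a prediction.
W = [2.0, -1.0]
B = 0.5

def predict(x):
    """Return P(class 1) for a two-feature input."""
    z = W[0] * x[0] + W[1] * x[1] + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: move each feature by eps in the
    direction that increases the cross-entropy loss for true label y."""
    p = predict(x)
    # For cross-entropy, d(loss)/dz = (p - y); chain rule gives
    # d(loss)/dx_i = (p - y) * W[i].
    grad = [(p - y) * w for w in W]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.2]                      # confidently classified as class 1
x_adv = fgsm(x, y=1.0, eps=0.9)     # small per-feature perturbation
# predict(x) > 0.5 while predict(x_adv) < 0.5: the label flips even
# though the model and its weights were never modified.
```

Defenses surveyed in this space (adversarial training, input sanitization, gradient masking detection) all aim to make such loss-gradient directions less exploitable.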
The proposed framework for safe, privacy-compliant AI deployment combines advanced cryptographic methods with adversarial attack prevention mechanisms and privacy-preserving model training. Future research should improve encryption performance, strengthen federated learning's resistance to attacks, and develop AI-based compliance systems to counter emerging cybersecurity threats against cloud-based AI platforms.
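A minimal sketch of the kind of privacy-preserving training the framework envisions, combining federated averaging with differential-privacy-style noise on shared updates. The linear model, client data, learning rate, and noise scale are all illustrative assumptions (this is not the paper's implementation or any cloud provider's SDK); a production system would additionally use secure aggregation or homomorphic encryption so the server never sees individual updates in the clear.

```python
import math
import random

random.seed(0)

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse transform sampling; added to
    shared parameters as a differential-privacy-style safeguard."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def local_update(weights, data, lr=0.1):
    """One local epoch of SGD on a linear model y = w0 + w1*x.
    Raw training data never leaves the client."""
    w0, w1 = weights
    for x, y in data:
        err = (w0 + w1 * x) - y
        w0 -= lr * err
        w1 -= lr * err * x
    return [w0, w1]

def federated_average(models, noise_scale=0.01):
    """Server-side unweighted FedAvg over noised client parameters."""
    n = len(models)
    return [sum(m[i] + laplace_noise(noise_scale) for m in models) / n
            for i in range(len(models[0]))]

# Two clients hold private samples from the same ground truth y = 2x + 1.
clients = [[(0.0, 1.0), (1.0, 3.0)], [(2.0, 5.0), (3.0, 7.0)]]
w = [0.0, 0.0]
for _ in range(200):
    w = federated_average([local_update(list(w), d) for d in clients])
# w converges near [1.0, 2.0] even though the server only ever receives
# noised model parameters, never the clients' raw data.
```

The trade-off the abstract highlights is visible even here: larger `noise_scale` gives stronger privacy but degrades the converged model, which is why tuning privacy budgets remains an open cost/utility problem.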
License
Copyright (c) 2025 International Journal of Computational and Experimental Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.