A Comprehensive Review of Large Language Models in Cyber Security

DOI: https://doi.org/10.22399/ijcesen.469

Keywords: Artificial intelligence, Machine learning, Large language models, Cyber security, Malware analysis

Abstract
In response to the escalating complexity of cyber threats and the rapid expansion of digital environments, traditional detection models are proving increasingly inadequate. The advent of Large Language Models (LLMs) powered by Natural Language Processing (NLP) represents a transformative advancement in cyber security. This review explores the burgeoning landscape of LLM applications in cyber security, highlighting their significant potential across various threat detection domains. Recent advancements have demonstrated LLMs' efficacy in tasks such as cyber threat intelligence, phishing detection, and anomaly detection through log analysis, among others. By synthesizing recent literature, this paper provides a comprehensive overview of how LLMs are reshaping cyber security frameworks. It also discusses current challenges and future directions, aiming to guide researchers and practitioners in leveraging LLMs effectively to fortify digital defences and mitigate evolving cyber threats.
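To make the log-analysis application concrete: LLM-based anomaly detectors typically extend the classic log-template idea, in which variable fields are masked so structurally identical lines collapse into one template and rare templates are flagged. The sketch below is a minimal, LLM-free baseline of that idea; the regex patterns, function names, and sample logs are illustrative assumptions, not drawn from any of the surveyed systems.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask variable fields (IPs, hex values, numbers) so lines with
    the same structure map to the same template string."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<IP>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def rare_templates(logs, max_count=1):
    """Flag log lines whose template occurs at most `max_count` times
    in the batch -- a crude frequency-based anomaly score."""
    counts = Counter(template(l) for l in logs)
    return [l for l in logs if counts[template(l)] <= max_count]

# Hypothetical log batch: three routine connections and one outlier.
logs = [
    "conn from 10.0.0.1 port 443 ok",
    "conn from 10.0.0.2 port 443 ok",
    "conn from 10.0.0.3 port 443 ok",
    "kernel panic at 0xdeadbeef",
]
print(rare_templates(logs))  # only the kernel-panic line stands out
```

LLM-based approaches replace the frequency counter with a language model scoring each line (e.g. by perplexity or classification), which lets them generalize to templates never seen during training.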
License
Copyright (c) 2024 International Journal of Computational and Experimental Science and Engineering
This work is licensed under a Creative Commons Attribution 4.0 International License.