Malware analysis education meets LLMs: understanding student use of LLMs in malware analysis education

Çetin, Orçun and Bıyıklı, Nazlı (2026) Malware analysis education meets LLMs: understanding student use of LLMs in malware analysis education. International Journal of Information and Education Technology, 16 (3). pp. 779-788. ISSN 2010-3689

PDF (Open Access, © 2026 by the authors): Malware.pdf (3MB), available under a Creative Commons Attribution license.

Abstract

Large Language Models (LLMs) are increasingly used in cybersecurity, both as practical tools in real-world tasks like penetration testing and reverse engineering, and as educational aids for students learning complex analysis techniques. While recent research highlights their potential to automate code analysis, deobfuscation, and threat detection, less is known about how students actually use these models during malware analysis courses. To address this gap, we conducted a survey of 37 students enrolled in university-level malware analysis courses. Our findings show that all participants reported using LLMs, primarily for assignments and labs (70.2%) and to better understand course content (59.4%). Students primarily analyzed outputs from Interactive Disassembler Pro (IDA Pro) (83.7%), followed by OllyDbg and Wireshark (43.2%). They mainly used LLMs for advanced static analysis, especially for disassembled code interpretation (37.8%). While 59.4% of students reported no major issues when using LLMs, 27% encountered refusals to respond, primarily due to ethical safeguards built into the models, and others noted inaccurate or overly generic responses and token-size limits. In terms of satisfaction, 67.5% of students reported positive experiences with LLMs, and 81% indicated they were likely to continue using them for cybersecurity-related tasks in the future. These findings suggest the need for responsible integration of LLMs into cybersecurity education through lecturer guidance, ethical transparency, and effective assessment design. Overall, they highlight both the strengths and limitations of LLMs in supporting advanced technical learning.
Item Type: Article
Additional Information: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
Uncontrolled Keywords: Artificial Intelligence (AI) in cyber security; education; generative AI; large language model; malware analysis
Divisions: Faculty of Engineering and Natural Sciences
Depositing User: Orçun Çetin
Date Deposited: 30 Apr 2026 14:43
Last Modified: 30 Apr 2026 14:43
URI: https://research.sabanciuniv.edu/id/eprint/53948
