Explainable deep learning based adaptive malware detection framework to improve interpretability
DOI: https://doi.org/10.22399/ijcesen.4215

Keywords: Explainable AI, Deep learning, Malware detection, Adaptive framework, Interpretability, Cybersecurity

Abstract
Modern malware increasingly employs advanced evasion strategies that render conventional signature-based detection ineffective. This paper proposes an explainable deep learning-based adaptive malware detection framework designed to improve detection accuracy while simultaneously providing interpretable insight into its decision-making process. Our method uses a hybrid neural architecture that combines convolutional and recurrent layers to extract both static and behavioural features from potentially malicious executables. The model adapts to evolving threats through continual learning mechanisms that update detection parameters as new malware variants emerge. The key novelty is an integrated explainability layer that applies Local Interpretable Model-agnostic Explanations (LIME) and attention visualization techniques to reveal the specific features and patterns that drive classification decisions. Experimental results show that our approach achieves a 97.3% detection rate on zero-day samples while maintaining a false positive rate below 0.5%. The generated explanations help security analysts understand detection rationales, validate results, and design more effective countermeasures. This interpretability addresses the "black-box" problem often associated with deep learning solutions in cybersecurity and fosters greater trust and adoption in enterprise security settings.
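The abstract names a hybrid convolutional-recurrent architecture for extracting static and behavioural features. The following is a minimal Keras sketch of such a detector, assuming a raw byte-sequence input representation and illustrative layer sizes; the paper's exact topology, input encoding, and hyperparameters are not reproduced here.

```python
# Minimal sketch of a hybrid CNN + recurrent malware detector (Keras).
# Sequence length, vocabulary size, and layer widths are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_detector(seq_len=4096, vocab_size=257):
    """CNN layers capture local byte/opcode patterns (static features);
    an LSTM layer models their ordering (behavioural sequence context)."""
    inputs = layers.Input(shape=(seq_len,), dtype="int32")
    x = layers.Embedding(vocab_size, 8)(inputs)         # raw bytes -> dense vectors
    x = layers.Conv1D(64, 7, activation="relu")(x)      # local n-gram-style patterns
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(128, 5, activation="relu")(x)
    x = layers.MaxPooling1D(4)(x)
    x = layers.LSTM(64)(x)                              # sequential context
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(malicious)
    return models.Model(inputs, outputs)

model = build_hybrid_detector()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```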
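The explainability layer applies LIME to individual detections. Below is a hedged, self-contained sketch of that step using the `lime` package, with a scikit-learn classifier and synthetic tabular features standing in for the trained deep model; the feature names are hypothetical, since the paper's actual feature set is not given here.

```python
# Sketch of per-sample LIME explanation. The random-forest model and the
# feature names are illustrative stand-ins, not the paper's detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["num_imports", "entry_entropy", "writable_exec_sections",
                 "api_CreateRemoteThread_calls", "packed_section_ratio"]

rng = np.random.default_rng(0)
X_train = rng.random((500, len(feature_names)))
y_train = (X_train[:, 3] > 0.5).astype(int)            # toy labelling rule

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,                                            # background data for perturbations
    feature_names=feature_names,
    class_names=["benign", "malicious"],
    mode="classification",
)
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=3)
print(exp.as_list())   # e.g. [("api_CreateRemoteThread_calls > 0.51", 0.31), ...]
```

The analyst-facing output is the weighted feature list, which surfaces the traits that pushed a sample toward the malicious class.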
License
Copyright (c) 2025 International Journal of Computational and Experimental Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.