Journal of Scientific Innovation and Advanced Research (JSIAR)

Peer-reviewed | Open Access | Multidisciplinary

Published: November 2025 | Volume: 1, Issue: 8 | Pages: 446-455

Interpretable Deep Learning Framework for Anomaly Detection in High-Dimensional Network Traffic Data

Original Research Article
Vishesh Sharma1*, Arihant Rai1, Yash Dixit1, Yash Tomar1, Rishabh Rai1, Tanishq Sharma1
1Department of Computer Science and Engineering, Noida International University, Greater Noida, India
*Author for correspondence: Vishesh Sharma
Department of Computer Science and Engineering, Noida International University, Greater Noida, India
E-mail ID: visheshsharma976029@gmail.com

ABSTRACT

In modern digital infrastructures, the rapid escalation of network complexity has made the detection of anomalous traffic patterns increasingly challenging. High-dimensional data, generated by large-scale networks, often obscures critical indicators of intrusion or misuse when analyzed through conventional machine learning techniques. While deep learning models have demonstrated remarkable capability in identifying such anomalies, their opaque decision-making processes hinder trust, accountability, and operational transparency in security-sensitive environments. This paper proposes an interpretable deep learning framework designed to detect anomalies in high-dimensional network traffic data with enhanced clarity and precision. The framework integrates feature reduction techniques with explainable components that reveal the reasoning behind each prediction, allowing analysts to visualize and interpret network behaviors that deviate from normal patterns. Experimental evaluations conducted on benchmark network intrusion datasets demonstrate that the proposed model achieves superior detection accuracy and robustness compared to traditional classifiers while maintaining a high degree of interpretability. The results underscore that explainability not only strengthens model reliability but also bridges the gap between automated decision-making and human expertise. This research contributes to the development of trustworthy artificial intelligence systems capable of safeguarding complex network environments while ensuring interpretability remains central to the detection process.
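The abstract describes, at a high level, a pipeline that pairs feature reduction with explainable components that attribute each anomaly decision to specific network features. The paper's actual architecture is not reproduced here; as a minimal illustrative sketch only, the same idea can be approximated with a linear reduction step (PCA on normal traffic) and per-feature reconstruction-error attribution, where the features driving a flagged flow's score serve as the explanation. All data, dimensions, and thresholds below are synthetic assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional network traffic features:
# 500 "normal" flows plus 5 anomalous flows with one shifted feature.
normal = rng.normal(0.0, 1.0, size=(500, 20))
anomalies = rng.normal(0.0, 1.0, size=(5, 20))
anomalies[:, 3] += 10.0          # inject a large deviation in feature 3

# Feature reduction: fit PCA on normal traffic only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:5]              # keep the top 5 principal directions

def reconstruction_errors(x):
    """Per-feature squared reconstruction error after projecting
    onto the learned low-dimensional subspace."""
    z = (x - mean) @ components.T
    x_hat = z @ components + mean
    return (x - x_hat) ** 2      # shape: (n_samples, n_features)

# Anomaly score = total reconstruction error; threshold taken from
# the 99th percentile of scores on normal traffic.
train_scores = reconstruction_errors(normal).sum(axis=1)
threshold = np.percentile(train_scores, 99)

test_errors = reconstruction_errors(anomalies)
flagged = test_errors.sum(axis=1) > threshold

# Interpretability: for each flagged flow, report which feature
# contributed most to its anomaly score.
top_feature = test_errors.argmax(axis=1)
print(flagged)       # whether each anomalous flow exceeds the threshold
print(top_feature)   # the dominant feature in each explanation
```

The per-feature error decomposition plays the role of the "explainable component": an analyst sees not just that a flow is anomalous, but which traffic features deviate from the learned normal subspace. A deep variant would replace PCA with an autoencoder and the error decomposition with an attribution method such as SHAP.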

Keywords: Explainable Artificial Intelligence (XAI), Deep Learning, Network Anomaly Detection, High-Dimensional Data, Model Interpretability, Cybersecurity, Transparent Machine Learning