Peer-reviewed | Open Access | Multidisciplinary
The rapid expansion of digital infrastructures and cloud-connected services has led to a substantial increase in network traffic volume and structural complexity, thereby intensifying the frequency and sophistication of cyberattacks across modern computing environments. Conventional intrusion detection systems (IDS), particularly those dependent on static signatures or manually engineered rules, often exhibit limited adaptability to emerging threat patterns and provide minimal insight into the reasoning behind generated alerts. Although deep learning-based detection models have demonstrated strong capability in identifying complex and previously unseen attack behaviors, their lack of interpretability continues to present operational challenges for cybersecurity professionals responsible for making timely and accountable decisions. In mission-critical domains such as finance, healthcare, and cloud infrastructure management, the absence of transparent decision logic can delay response actions, complicate incident investigation, and reduce confidence in automated defense mechanisms. To address these limitations, this study introduces an explainable artificial intelligence (XAI)-driven intrusion detection framework designed to combine high predictive accuracy with interpretable decision support in real time. The proposed architecture integrates complementary deep learning techniques: Long Short-Term Memory (LSTM) networks for sequential traffic analysis, Convolutional Neural Networks (CNNs) for hierarchical feature extraction, Gated Recurrent Units (GRUs) for efficient temporal learning, and autoencoder-based anomaly detection for identifying deviations from established behavioral patterns.
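The autoencoder component named above can be illustrated with a minimal sketch: train a small network to reconstruct normal traffic only, then flag flows whose reconstruction error exceeds a threshold. The code below is a simplified stand-in using a single-hidden-layer numpy autoencoder, not the paper's actual architecture; the feature dimensions, training data, and threshold percentile are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized "normal" traffic features (e.g. flow duration,
# packet size); a real pipeline would derive these from NSL-KDD/CICIDS2017.
normal = rng.normal(0.0, 1.0, size=(500, 8))

# Single-hidden-layer autoencoder (8 -> 3 -> 8) trained by plain gradient
# descent to reconstruct normal traffic only.
n_in, n_hid, lr = 8, 3, 0.01
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

for _ in range(300):
    h = np.tanh(normal @ W1 + b1)        # encoder
    out = h @ W2 + b2                    # linear decoder
    err = out - normal                   # reconstruction residual
    # Backpropagation for the mean-squared reconstruction loss.
    gW2 = h.T @ err / len(normal); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = normal.T @ dh / len(normal); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def recon_error(x):
    """Per-sample mean squared reconstruction error."""
    h = np.tanh(x @ W1 + b1)
    return np.mean((h @ W2 + b2 - x) ** 2, axis=1)

# Threshold anomaly scores at the 99th percentile of errors on normal traffic.
threshold = np.percentile(recon_error(normal), 99)

# A flow far outside the learned behavioral pattern scores above threshold.
attack = np.full((1, 8), 6.0)
print(recon_error(attack)[0] > threshold)
```

The design point is that the detector never sees attacks during training; anything the model cannot reconstruct well is, by definition, a deviation from established behavior.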
The system was rigorously evaluated on the cybersecurity benchmark datasets NSL-KDD, CICIDS2017, and UNSW-NB15, ensuring exposure to diverse attack categories including distributed denial-of-service events, brute-force intrusions, reconnaissance probes, and unauthorized access attempts. Comprehensive preprocessing procedures—comprising feature normalization, categorical transformation, and class balancing—were implemented to enhance model stability and ensure reliable performance across heterogeneous traffic conditions. Empirical evaluation revealed that the proposed explainable detection framework achieved consistently strong classification outcomes across all tested datasets, indicating a high level of predictive reliability and operational robustness. The model produced an overall detection accuracy approaching 97%, accompanied by balanced precision and recall values exceeding 0.96, indicating its ability to correctly identify malicious activities while minimizing missed detections. Notably, the system maintained a low false positive rate of approximately 0.03, reflecting improved discrimination between legitimate and anomalous network behavior. Beyond numerical performance gains, the integration of explainability mechanisms—specifically SHAP and LIME—enabled transparent identification of influential network attributes such as flow duration, packet size, protocol distribution, and connection frequency. These interpretable insights supported more informed security decision-making and reduced the cognitive burden associated with excessive alert investigation. Collectively, the results demonstrate that combining interpretable deep learning with explainable artificial intelligence provides a practical and trustworthy solution for real-time intrusion detection in modern cybersecurity infrastructures.
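The kind of per-alert feature attribution that SHAP and LIME provide can be sketched in miniature with a simple occlusion scheme: replace one feature at a time with a background average and record how much the model's score drops. This is not the SHAP or LIME library, only a toy illustration of the idea; the scoring function, feature names, and synthetic flow below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative feature names matching the attributes discussed in the text.
FEATURES = ["flow_duration", "packet_size", "protocol", "conn_frequency"]

def model_score(X):
    """Stand-in for the trained IDS model: a fixed logistic scorer that
    flags flows with long duration and high connection frequency."""
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 1.5 * X[:, 3] - 1.0)))

# Reference sample of background traffic used for the masking baseline.
background = rng.normal(0.0, 1.0, (200, 4))

def occlusion_attribution(x):
    """Crude local attribution: mask each feature with its background mean
    and record the resulting drop in the anomaly score. This mimics the
    spirit of SHAP/LIME explanations without either library."""
    base = model_score(x[None, :])[0]
    mu = background.mean(axis=0)
    attrib = {}
    for i, name in enumerate(FEATURES):
        x_masked = x.copy()
        x_masked[i] = mu[i]
        attrib[name] = base - model_score(x_masked[None, :])[0]
    return attrib

# A synthetic suspicious flow: long duration, frequent connections.
suspicious = np.array([3.0, 0.1, 0.0, 2.5])
scores = occlusion_attribution(suspicious)
top = max(scores, key=scores.get)
print(top)  # flow_duration carries the largest attribution for this flow
```

In an operational setting, surfacing the top-ranked attributes next to each alert is what reduces the analyst's investigation burden: the explanation points directly at which traffic properties triggered the detection.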
Keywords: Explainable Artificial Intelligence (XAI), Intrusion Detection System (IDS), Deep Learning, Real-Time Cybersecurity, Network Anomaly Detection, Model Interpretability, Cyber Threat Detection