As artificial intelligence (AI) becomes deeply embedded in cybersecurity systems, it simultaneously introduces new vulnerabilities, particularly through adversarial machine learning (AML). These vulnerabilities allow malicious actors to subtly manipulate inputs, causing erroneous outputs in otherwise reliable AI models. This paper investigates the evolving landscape of AI-powered cyberattacks and focuses on the development and experimental evaluation of defense mechanisms against such adversarial threats. While adversarial attacks have been extensively studied in image recognition, their implications in security-sensitive domains such as intrusion detection, malware classification, and network anomaly detection remain less explored. This research presents a systematic examination of multiple adversarial attack strategies, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and DeepFool, applied to cybersecurity datasets. The study further evaluates the robustness of several defense approaches, including adversarial training, defensive distillation, feature squeezing, and input reconstruction using autoencoders. Experiments were conducted on benchmark datasets such as NSL-KDD and CIC-IDS2017 to measure accuracy, detection rate, and resilience under attack. The findings indicate significant differences in defense effectiveness across models and attack types, revealing that no single technique provides universal protection. The study emphasizes the importance of context-aware, layered defense strategies and highlights the need for adaptable models capable of withstanding evolving adversarial tactics. By combining empirical results with analytical insights, this work contributes to strengthening the defensive posture of AI systems in cybersecurity and encourages further research into resilient AI architectures.
Keywords: Adversarial Machine Learning, Cybersecurity, AI-Powered Attacks, Defense Mechanisms, Intrusion Detection Systems, Robustness Evaluation
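For orientation, the sketch below illustrates the FGSM perturbation referenced above, which adds a small step in the direction of the loss gradient's sign. It is a minimal PyTorch sketch, not the implementation evaluated in this study; the model, loss function, epsilon value, and the assumption of features normalized to [0, 1] are illustrative choices.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.05):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss).

    Illustrative sketch only; epsilon and the [0, 1] feature range are assumptions.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, then clamp
    # back to the assumed normalized feature range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In an adversarial-training defense of the kind evaluated here, such perturbed samples would typically be mixed with clean samples during training to improve robustness.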