Journal of Scientific Innovation and Advanced Research

Peer-reviewed | Open Access | Multidisciplinary

Journal of Scientific Innovation and Advanced Research (JSIAR) | Published: April 2025 | Volume 1, Issue 1 | Pages 17-22

Adversarial Machine Learning for Security: Experimental Techniques for Defending Against AI-Powered Cyberattacks

Original Research Article
Priyanshu Sharma1*, Simran1, Abhiraj Govind1, Sunny Raj1, Jyoti Mahur1
1Department of Computer Science and Engineering, Noida International University, Greater Noida, India
*Author for correspondence: Priyanshu Sharma
Department of Computer Science and Engineering, Noida International University, Greater Noida, India
E-mail ID: priyanshu.sharmaedu@gmail.com

ABSTRACT

As artificial intelligence (AI) becomes deeply embedded in cybersecurity systems, it simultaneously introduces new vulnerabilities, particularly through adversarial machine learning (AML). These vulnerabilities allow malicious actors to subtly manipulate inputs, leading to erroneous outcomes in otherwise reliable AI models. This paper investigates the evolving landscape of AI-powered cyberattacks and focuses on the development and experimental evaluation of defense mechanisms against such adversarial threats. While adversarial attacks have been studied extensively in image recognition, their implications in security-sensitive domains such as intrusion detection, malware classification, and network anomaly detection remain less explored. This research presents a systematic examination of multiple adversarial attack strategies, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and DeepFool, applied to cybersecurity datasets. The study further evaluates the robustness of several defense approaches, including adversarial training, defensive distillation, feature squeezing, and input reconstruction using autoencoders. Experimental trials were conducted on benchmark datasets such as NSL-KDD and CIC-IDS2017 to measure accuracy, detection rate, and resilience under attack. Findings indicate significant differences in defense effectiveness across models and attack types, revealing that no single technique provides universal protection. The study emphasizes the importance of context-aware, layered defense strategies and highlights the need for adaptable models capable of withstanding evolving adversarial tactics. By combining empirical results with analytical insights, this work contributes to strengthening the defensive posture of AI systems in cybersecurity and encourages further research into resilient AI architectures.

Keywords: Adversarial Machine Learning, Cybersecurity, AI-Powered Attacks, Defense Mechanisms, Intrusion Detection Systems, Robustness Evaluation
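
To make the attack family named in the abstract concrete, the following is a minimal FGSM sketch in PyTorch. It is illustrative only: the classifier architecture, the 41-feature input (loosely modeled on NSL-KDD-style tabular features), and the epsilon value are stand-in assumptions, not the experimental configuration used in this study.

```python
# Minimal FGSM sketch (PyTorch). Model, feature dimension, and epsilon
# are hypothetical placeholders, not the paper's actual setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a tabular intrusion-detection classifier; the 41-dim
# input is an assumption inspired by NSL-KDD-style feature vectors.
model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, y, epsilon=0.05):
    """Return x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step; clamp keeps features in a valid
    # normalized range, as is typical for tabular security data.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy batch: 8 normalized feature vectors with binary labels.
x = torch.rand(8, 41)
y = torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, x, y)

# Fraction of predictions left unchanged by the perturbation.
print((model(x).argmax(1) == model(x_adv).argmax(1)).float().mean())
```

Adversarial training, one of the defenses evaluated in the paper, amounts to generating such perturbed batches during training and mixing them back into the loss, so the model learns to classify both clean and perturbed inputs correctly.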