Journal of Scientific Innovation and Advanced Research (JSIAR)

Peer-reviewed | Open Access | Multidisciplinary

Published: December 2025 | Volume 1, Issue 9 | Pages 506-514

Adversarial Honeypots: AI-Generated Deceptive Environments to Trap Evolving AI-Powered Threat Actors

Original Research Article
Khushi Kumari1*, Kanishka Vats1, Janhavi Singh1, Madhurima Chatterjee1, Jahnavee Jaiswal1, Ikshit Jaiswal1, Manorath Singh1

1Department of Information Technology, Noida Institute of Engineering and Technology, Greater Noida, India
*Author for correspondence: Khushi Kumari
Department of Information Technology, Noida Institute of Engineering and Technology, Greater Noida, India
E-mail ID: khushisingh6790@gmail.com

ABSTRACT

The increasing adoption of artificial intelligence by threat actors has introduced a new class of cyberattacks that are dynamic, adaptive, and capable of evading conventional security defenses. Traditional honeypots, while effective against basic intrusion techniques, lack the sophistication required to engage and analyze AI-powered adversaries. This paper presents a novel approach to cybersecurity defense through the design and deployment of Adversarial Honeypots: intelligent, AI-generated deceptive environments capable of misleading and capturing evolving AI-driven threats. The proposed system employs generative models to construct convincing system behaviors and user interactions, and integrates adversarial machine learning techniques that deliberately introduce deceptive elements to disrupt or confuse attacker AI agents. Our methodology combines the simulation of realistic network services with behavioral mimicry and adversarial input generation to create an environment that appears both authentic and vulnerable. Through a series of controlled experiments and threat-engagement simulations, we demonstrate the system's effectiveness in identifying and deceiving autonomous attack agents. Experimental evaluation shows that the proposed framework achieves an Attack Detection Rate (ADR) of 94.2%, an average attacker Engagement Time (ET) of 257.6 seconds, and a Deception Success Rate (DSR) of 87.5%, while maintaining efficient resource usage, with CPU utilization limited to 37.9%. Compared with traditional static honeypots, the results indicate significant improvements in attacker engagement duration, detection accuracy, and the richness of captured threat intelligence. This research underscores the potential of leveraging AI not only for defensive automation but also for active deception, offering a robust mechanism for staying ahead of intelligent cyber threats.

Keywords: Adversarial Honeypots, AI Security, Cyber Deception, Threat Intelligence, Generative AI, Intrusion Detection
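To make the abstract's evaluation measures concrete, the short Python sketch below shows one plausible way to compute ADR, ET, and DSR from per-session engagement logs. It is a minimal illustration under assumed conventions, not the authors' implementation: the Session record, its fields (detected, deceived, duration_s), and the sample values are hypothetical.

```python
# Illustrative computation of the paper's three headline metrics from
# per-session engagement records. The schema and sample data below are
# assumptions for demonstration, not the authors' actual pipeline.
from dataclasses import dataclass

@dataclass
class Session:
    detected: bool     # honeypot flagged the session as an attack
    deceived: bool     # attacker agent accepted the decoy as a real target
    duration_s: float  # seconds the agent remained engaged with the decoy

def evaluate(sessions: list[Session]) -> dict[str, float]:
    n = len(sessions)
    return {
        # Attack Detection Rate: share of attack sessions that were detected
        "ADR_pct": 100.0 * sum(s.detected for s in sessions) / n,
        # Deception Success Rate: share of sessions where the deception held
        "DSR_pct": 100.0 * sum(s.deceived for s in sessions) / n,
        # Engagement Time: mean interaction duration in seconds
        "ET_mean_s": sum(s.duration_s for s in sessions) / n,
    }

if __name__ == "__main__":
    # Toy log of three sessions; a real evaluation would aggregate many runs.
    log = [
        Session(detected=True, deceived=True, duration_s=301.4),
        Session(detected=True, deceived=False, duration_s=122.9),
        Session(detected=False, deceived=True, duration_s=348.5),
    ]
    print(evaluate(log))
```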