The increasing adoption of artificial intelligence by threat actors has introduced a new class of cyberattacks that are dynamic, adaptive, and capable of evading conventional security defenses. Traditional honeypots, while effective against basic intrusion techniques, lack the sophistication required to engage and analyze AI-powered adversaries. This paper presents a novel approach to cybersecurity defense through the design and deployment of Adversarial Honeypots: intelligent, AI-generated deceptive environments capable of misleading and capturing evolving AI-driven threats. The proposed system employs generative models to construct convincing system behaviors and user interactions, and integrates adversarial machine learning techniques that deliberately introduce deceptive elements to disrupt or confuse attacker AI agents. Our methodology combines the simulation of realistic network services with behavioral mimicry and adversarial input generation to create an environment that appears both authentic and vulnerable. Through a series of controlled experiments and threat engagement simulations, we demonstrate the system's effectiveness in identifying and deceiving autonomous attack agents. Experimental evaluation shows that the proposed framework achieves an Attack Detection Rate (ADR) of 94.2%, an average attacker Engagement Time (ET) of 257.6 seconds, and a Deception Success Rate (DSR) of 87.5%, while maintaining efficient resource usage with CPU utilization limited to 37.9%. The results indicate substantial improvements in attacker engagement duration, detection accuracy, and the richness of captured threat intelligence compared to traditional static honeypots. This research underscores the potential of leveraging AI not only for defensive automation but also for active deception, offering a robust mechanism for staying ahead of increasingly intelligent cyber threats.
Keywords: Adversarial Honeypots, AI Security, Cyber Deception, Threat Intelligence, Generative AI, Intrusion Detection
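
The abstract reports ADR, ET, and DSR without defining them. A plausible reading, assuming the standard honeypot-evaluation formulations (the symbols N_det, N_dec, N_total, and t_i are introduced here for illustration and do not appear in the original), is:

\[
\mathrm{ADR} = \frac{N_{\mathrm{det}}}{N_{\mathrm{total}}}, \qquad
\mathrm{ET} = \frac{1}{N_{\mathrm{total}}} \sum_{i=1}^{N_{\mathrm{total}}} t_i, \qquad
\mathrm{DSR} = \frac{N_{\mathrm{dec}}}{N_{\mathrm{total}}}
\]

where N_total is the number of attack sessions launched against the honeypot, N_det the number of sessions flagged as attacks, t_i the duration of session i, and N_dec the number of sessions in which the attacker acted on deceptive content.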