In the relatively short time since ChatGPT was publicly released, artificial intelligence (AI), especially generative AI, has become a weapon of choice for cyberattackers targeting US companies, local governments, and federal agencies. In a survey by security vendor Darktrace of cybersecurity and IT professionals, 74% of participants agreed that AI-powered threats now pose a significant challenge for their organization.
Bad actors’ use of AI for cyberattacks is already common, but it will soon be ubiquitous and increasingly effective as the AI landscape changes. High-performing generative AI models can now be trained at a fraction of past costs, a trend that will broaden both benevolent and malicious use. In the race to develop AI, organizations in multiple countries are building highly capable models that may not be safe. For example, one highly publicized model released at the beginning of 2025 was both far cheaper to build than earlier models and more willing to comply with requests for harmful information, such as help coding malware.
Below, we review key factors driving a dramatic increase in cyberattacks that leverage AI and outline actions enterprises can take to defend against them.
Why generative AI is a cyberattack superpower for criminals
With so much knowledge and capability baked into them, generative AI models enable novice hackers to generate sophisticated code and technical exploits — previously time-intensive work restricted to experts — by simply asking an AI system for help. Guardrails on mainstream generative AI tools to protect against malicious uses have proven imperfect, and cybercriminals sell altered models to help with cyberattacks for well under $100.
In addition to making cyberattacks more accessible, generative AI is increasing their speed and scale. This is in part because AI can help automate cyberattacks. In just six months, that capability has contributed to a sevenfold increase in cyberattack attempts on one tech giant, to 750 million a day.
The speed that generative AI grants attackers also means more of the attempted attacks will be successful, both because of the huge volume and because attacks can now rapidly be recalibrated. By leveraging generative AI to quickly analyze the outcomes of failed attacks and to process data gathered on targets (such as legacy system code), attackers can reengineer their strategies and tailor their attacks with unprecedented efficiency.
The cyber threat landscape is evolving rapidly with generative AI
Empowered by the boost in skills, speed, and scale that generative AI provides, cyberattackers can efficiently carry out attacks on organizations using a range of methods.
Phishing and social engineering: AI can closely mimic human text, video, and audio to trick employees into taking unauthorized action or providing sensitive information. With generative AI, bad actors can now write tailored, natural-sounding phishing lures at scale in almost any language. In an experiment, researchers from the Government Technology Agency of Singapore demonstrated that generative AI technology from a major provider helped write phishing emails that were opened by targets significantly more frequently than ones written by humans.
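Many defensive email gateways layer simple indicator checks under their ML models. The sketch below is a purely illustrative heuristic scorer; the keyword list, weights, and domain-matching rule are invented for this example, and production filters rely on far richer signals.

```python
import re

# Illustrative urgency cues; real filters use much larger, learned feature sets.
URGENCY_TERMS = {"urgent", "immediately", "verify", "suspended", "expires"}

def phishing_score(subject: str, body: str, sender_domain: str,
                   expected_domain: str) -> int:
    """Return a crude risk score; higher means more phishing indicators."""
    score = 0
    text = f"{subject} {body}".lower()
    # Urgency language is a classic social-engineering cue.
    score += sum(1 for term in URGENCY_TERMS if term in text)
    # Sender domain that merely resembles the expected one.
    if sender_domain != expected_domain and expected_domain.split(".")[0] in sender_domain:
        score += 3
    # Raw links that use IP addresses instead of hostnames.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3
    return score
```

Because generative AI now produces fluent, personalized lures, keyword heuristics like this catch less than they used to, which is why the defensive section below emphasizes behavioral and anomaly-based detection.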
Deepfake scams: Generative AI can be used to create ultra-realistic audio and video as well as context-aware text that security systems and humans alike find hard to differentiate from authentic content. While company employees have been trained to recognize phishing, far fewer are trained to recognize deepfake attempts. In one incident, a staff member of a multinational engineering group was persuaded by deepfakes of the company’s chief financial officer and other employees to transfer about $25 million in company funds to foreign accounts.
Mutating malware and ransomware: Attackers can leverage AI to quickly write mutating programs that evade traditional detection methods and corrupt data or lock systems down until ransom payments are made. As part of its protective DNS research, security vendor HYAS has demonstrated an autonomous malware proof of concept leveraging generative AI that reads its target environment, determines attack vectors, and generates and tests malware until successful.
Software vulnerability exploitation: AI can comb through and piece together extensive public data on known vulnerabilities, find these vulnerabilities in companies’ systems, and generate the code to exploit them. Multiple nation-state hacking groups have been observed using generative AI to conduct vulnerability research, including efforts to better understand publicly reported vulnerabilities.
SQL injections: Attackers can use generative AI to rapidly write and revise code targeting SQL databases (that is, traditional structured databases of tables), allowing them to alter, delete, or steal sensitive information.
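A minimal sketch makes the mechanics concrete. Using Python's standard sqlite3 module with a throwaway in-memory table, the first query concatenates attacker-controlled input into the SQL string, while the second passes it as a bound parameter so the driver treats it strictly as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Vulnerable: input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the value is bound, never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# The unsafe query matches every row; the parameterized one matches none.
```

The classic `' OR '1'='1` payload turns the unsafe query's WHERE clause into a tautology, dumping the whole table; parameterized queries close that door regardless of how quickly AI can generate payload variants.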
Denial of service: By generating high-volume yet seemingly realistic traffic patterns, for instance, AI can increase the effectiveness of networks of infected computers — often called botnets — at overloading the capacity of business-critical systems, preventing them from providing service.
The imperative to revamp or reinforce cyber defenses
In the face of these advanced AI-driven threats, traditional cyber defenses such as signature-based antivirus software and rule-based intrusion detection systems are insufficient. It is imperative to determine and deploy a more adaptive and intelligent cybersecurity framework with pillars including:
AI-powered threat detection: Organizations must invest in advanced threat detection systems that use AI, including machine learning and deep learning, to identify and mitigate evolving cyber threats. These systems can analyze vast amounts of data from many different sources in real time, detecting patterns and anomalies that may indicate potential attacks.
Advanced behavioral analysis and anomaly detection: Leveraging AI tools, advanced behavioral analysis and anomaly detection techniques focus on learning the normal behavior of systems and users and promptly identifying deviations that may indicate a cyber threat. This dynamic, proactive approach tailored to an organization’s environment enhances the chances of early threat detection and containment.
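The core idea can be illustrated with a toy baseline model. This z-score check on a single metric (here, invented daily login counts) is a stand-in for the ML models described above, which learn far richer behavioral profiles across many signals:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float,
                 threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Example: daily login counts learned for one account.
baseline = [12, 9, 11, 10, 13, 12, 11]
is_anomalous(baseline, 11)   # typical day -> False
is_anomalous(baseline, 240)  # sudden burst of logins -> True
```

The payoff of learning "normal" per organization, rather than matching known attack signatures, is that even a novel, AI-generated attack still has to produce abnormal behavior to do damage.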
Continuous monitoring of cyber threats and incident response: Because rapid detection and response to cyber threats are critical in minimizing damage, cybersecurity strategies should move toward continuous monitoring and real-time incident response capabilities with AI-driven systems to automate these processes. The reduced response time — paired with an incident response playbook ensuring teams understand their roles during an incident — can improve the overall resilience of an organization’s cybersecurity posture.
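At its simplest, the automated half of such a pipeline is a mapping from alert attributes to playbook actions. The sketch below is hypothetical; the action names and severity thresholds are illustrative, not a real product's API:

```python
def respond(alert: dict) -> list[str]:
    """Map an alert to automated containment steps per an incident playbook."""
    actions = ["open_ticket"]  # every alert is logged for human review
    if alert["severity"] >= 8:
        # High severity: contain first, then wake a human.
        actions += ["isolate_host", "page_oncall"]
    elif alert.get("type") == "credential_misuse":
        # Likely account takeover: cut off the stolen credential.
        actions += ["force_password_reset", "revoke_sessions"]
    return actions
```

Encoding the playbook this way means the first containment steps execute in milliseconds, while the ticket keeps humans in the loop for judgment calls the automation should not make alone.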
Adaptive access controls in cybersecurity measures: Traditional access controls based on static rules may not be sufficient to counter the dynamic nature of AI-driven threats. Firms should implement AI systems that dynamically alter user permissions based on user identity, user purpose, and contextual information that informs the level of security risk (such as authentication strength or user physical location). Such systems are more likely to limit the access and power of malicious actors who do succeed in infiltrating.
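The pattern reduces to scoring contextual risk signals and granting the matching permission tier. The signals, weights, and tiers below are illustrative assumptions, not a specific product's policy:

```python
def risk_score(mfa_passed: bool, known_device: bool,
               unusual_location: bool, off_hours: bool) -> int:
    """Combine context signals into a risk score (illustrative weights)."""
    score = 0
    if not mfa_passed:
        score += 3  # weak authentication strength
    if not known_device:
        score += 2
    if unusual_location:
        score += 2
    if off_hours:
        score += 1
    return score

def permitted_actions(score: int) -> set[str]:
    """Grant less power as risk rises, down to read-only access."""
    if score <= 1:
        return {"read", "write", "admin"}
    if score <= 3:
        return {"read", "write"}
    return {"read"}  # high risk: fall back to least privilege
```

Even if an attacker steals valid credentials, logging in from an unknown device in an unusual location at 3 a.m. leaves them confined to the lowest tier, shrinking the blast radius of a successful infiltration.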
Cybersecurity awareness training: According to Verizon’s 2025 Data Breach Investigations Report, around 60% of data breaches involve a human element — that is, mistakes made within an organization that training and awareness could help prevent. Providing comprehensive security awareness training to employees about the risks associated with AI-generated content, such as convincing phishing emails or deepfake voice and video imitations, is critical for enabling them to make informed decisions and avoid falling victim to cyberattacks.
AI-driven attacks are already prevalent, and the cyber threat landscape is poised to keep evolving rapidly. AI agents that can autonomously execute tasks could automate even more of the cyberattack lifecycle. Soon, practical quantum computing and quantum emulation could be wielded to break defenses like encryption, rendering your organization and its customers an open book. By adopting a more intelligent, adaptive cybersecurity strategy today, you won’t be starting the race against a new era of advanced cyberthreats several steps behind.