February 18, 2026

When are attacks truly driven by AI?

As reporting around the Fortinet FortiGate device breaches continues to circulate, the attacks are frequently described as automated, highly automated, or outright AI-enabled.

Based on public reporting and our own incident response investigations, there’s clear evidence of automation, but no evidence that these campaigns are AI-driven.


Yet the language often collapses these categories into one, obscuring what is actually happening and, more important, what defenders should learn from it.

Automation has been around for years

The critical distinction between the automated attacks of the past decade and emerging AI-enabled threats comes down to decision-making and adaptability, not just speed or scale.

Traditional automated attacks are essentially scripted workflows. They scan, check for a condition, exploit it if met, collect data, and move on. Everything is predefined: the attacker decides the logic in advance, and the system simply executes it at scale. There’s no adaptability after deployment.
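The fixed logic described above can be illustrated with a toy sketch. This is not real attack code: the host names and the "vulnerability" check are placeholders, and the point is only that every decision is predefined before the loop runs.

```python
def is_vulnerable(host: str) -> bool:
    # Placeholder condition check; a real campaign would probe for a
    # specific exposed service or unpatched software version.
    return host.endswith(".example")

def run_campaign(hosts: list) -> list:
    results = []
    for host in hosts:              # 1. scan
        if is_vulnerable(host):     # 2. check a fixed, predefined condition
            results.append(host)    # 3. "exploit" and collect
        # 4. move on -- no feedback loop, no change in tactics mid-run
    return results

print(run_campaign(["a.example", "b.internal", "c.example"]))
# -> ['a.example', 'c.example']
```

Run it twice and the logic is identical both times; nothing the defender does changes what the script attempts next, which is exactly the limitation the paragraph above describes.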

This is not a new pattern. We have seen it repeatedly, from large-scale wormable attacks like WannaCry to IoT-driven malware campaigns such as Mirai to recent mass scanning for exposed VPNs, firewalls, and remote management interfaces. These campaigns were devastating because they were ruthlessly efficient and operated at a scale that overwhelmed traditional defenses, but speed and reach alone do not make an attack intelligent.

Contrast this with the Anthropic incident, where reporting suggested a meaningful shift. Much of the activity that would normally require human involvement, such as crafting messages, adjusting language, or iterating on interaction strategies, was handled by an AI system.

In this case, automation went beyond speed or scale to replacing human judgment at critical decision points. As a result, the offensive behavior evolved during the intrusion rather than executing a fixed sequence defenders could anticipate.

Precision in terminology is more than semantic: it shapes how organizations respond.

Precision allows for better defense

Mislabeling every automated campaign as AI-enabled distracts from the operational reality that automated attacks behave differently from interactive intrusions.

Automated attacks move quickly and disappear before traditional alerting and escalation workflows can respond. Treating them as “intelligent” threats can lead teams to overinvest in behavioral analysis while underinvesting in basics like exposure management and patch velocity.

Clear classification lets defenders align controls with reality. Automated attacks require speed, visibility, and hygiene. Adaptive attacks require resilience, deception, and human-in-the-loop oversight.

When we describe attacks as AI-enabled, we should reserve that label for systems in which AI meaningfully participates in deciding what to do next and adapting based on outcomes. Without precision, organizations risk fighting yesterday’s battles, or tomorrow’s, at the wrong time.

Why defenders still struggle

This confusion persists not because defenders lack understanding, but because media incentives, vendor positioning, and board-level anxiety reward labeling familiar failures as novel, AI-driven threats rather than confronting long-standing gaps in exposure and patching discipline.

For better understanding, we can frame today's threat landscape across three layers:

  • Automated attacks: These are fully rule-based and deterministic. Attacks like WannaCry or Mirai are emblematic of this stage: impactful, but not intelligent.
  • Highly automated attacks: While still rule-driven, these reflect a more mature operational approach. Multiple steps are chained together, error handling is improved, and attackers aim to minimize dwell time. The recent Fortinet activity fits squarely here. While attackers may use AI tools to assist in building scripts, the attack itself still executes fixed workflows. It’s traditional automation, not true AI.
  • AI-enabled attacks: This represents an emerging capability in which AI systems take on tasks that previously required human judgment, such as adapting tactics in real-time or learning from defender responses. Most observed activity still falls into the prior category, where AI assists in construction rather than execution.
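The boundary between the second and third layers can be sketched in toy form. The contrast below is purely illustrative: the tactic names are invented, the "defenses" set stands in for defender responses, and the adaptive loop is a trivial stand-in for AI decision-making, not a model of any real system.

```python
def fixed_workflow(defenses: set) -> list:
    # Highly automated: a chained but fixed sequence. It executes
    # the same steps regardless of what the defender does.
    return ["scan", "exploit", "collect"]

def adaptive_workflow(defenses: set) -> list:
    # Toy stand-in for AI-enabled behavior: each step's outcome
    # determines what happens next.
    steps = []
    for tactic in ["exploit_vpn", "exploit_firewall", "phish_admin"]:
        steps.append(tactic)
        if tactic not in defenses:  # attempt succeeded
            break                   # stop; no need to escalate
        # tactic was blocked -> adapt by trying the next approach
    return steps

print(fixed_workflow({"exploit_vpn"}))     # ['scan', 'exploit', 'collect']
print(adaptive_workflow({"exploit_vpn"}))  # ['exploit_vpn', 'exploit_firewall']
```

The fixed workflow's output never changes; the adaptive one's path depends on defender responses. That dependence on outcomes, not scale or speed, is what would justify the AI-enabled label.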

Our industry must prepare for autonomous, AI-powered attacks. However, it should not lose focus on the threats already here: fast, scalable, automated exploitation of known weaknesses. Attackers do not need AI to succeed. They only need defenders to fall behind. Defenders gain little by debating terminology if the operational response does not change.

The window between disclosure and exploitation continues to shrink. Organizations should assume their perimeter devices are under continuous automated scanning and probing for exposed or unpatched services. Defensive priorities must shift toward rapid patching, reducing exposure, and treating firewalls and VPNs as high-value assets rather than overlooked utilities.

Labeling every automated campaign as AI-enabled may sound forward-looking, but it often distracts from the fact that attackers are succeeding by exploiting the same structural weaknesses the industry has struggled with for the past 20 years.