With an outpouring of innovation in AI technology over the last decade, including the rise of GenAI, AI has subtly woven itself into the fabric of our daily lives. Cybersecurity is no exception.
Attackers and defenders alike are now leveraging AI to amplify the speed and sophistication of their plays, driving a relentless escalation where defenders must stay ahead of the curve. Security executives are clearly concerned, as they recently ranked AI-powered malicious attacks and misinformation campaigns as their top two emerging enterprise risks in a Gartner survey.
On one side of the field, attackers are using AI as their personal coach to learn the game and suggest plays. By lowering the barrier to entry, AI lets threat actors with minimal expertise pull off sophisticated, large-scale attacks. They're even able to generate polymorphic malware capable of changing its own code to avoid detection, capitalizing on the defender's blind side.
On the other side, defenders are deploying machine learning (ML) models to identify suspicious patterns in real time and strategically anticipate the attacker's next moves. To stay in the game, organizations need defenses that can quickly adapt to the pace of these shifting threats.
AI’s potential for harm is no longer theoretical. It’s here—and, as the Gartner survey bears out, enterprises are already feeling the impact of this new frontier with sprawling disinformation, job displacements and a clear need for governance to ensure responsible AI use. As AI tools become more accessible in 2025, organizations without robust defenses may also find themselves struggling to keep up with the rapidly growing volume of threats.
Even inexperienced attackers can now orchestrate complex breaches with devastating results thanks to large language models (LLMs) lowering the barrier to entry for cybercriminals. This has made campaigns faster, smarter and harder to defend against.
Here’s how attackers are weaponizing AI:
AI allows attackers to scale their phishing campaigns with a speed and precision that were unimaginable not long ago. Personalized messages make the deceit harder to detect, increasing the risk of damage across the board.
Social engineering attacks are now more convincing than ever thanks to AI tools that mimic trusted sources with uncanny detail. When you’re constantly scrutinizing the authenticity of every communication, nothing feels safe anymore.
Just as you've spotted an attack, polymorphic malware rewrites its own code and slips from your grasp. With AI enabling malware to continuously evolve into a moving target, mitigation becomes a mind-splitting headache.
AI-driven systems can scan vast networks for vulnerabilities in record time. This reduces the time and effort attackers need to identify their target’s weak points—and the last thing we want is more free time for those guys.
The potential of dealing with any of these vexing attacks may seem alarming, but the good news is that AI doesn’t play favorites. Defenders are leveraging it to their advantage too.
To level the playing field, defenders are combining AI's speed and precision with human supervision. And 2025 is our year. Google's 2025 Cybersecurity Forecast predicts a new era in which systems handle security workflows themselves, allowing human teams to accomplish more. As Sunil Potti, VP/GM of Google Cloud Security, notes, "2025 is the year where we'll genuinely see the second phase of AI in action with security."
With AI and ML as their sweepers (shout out to soccer nerds!), defenders are effectively shifting the battlefield. By leveraging AI for autonomous, real-time threat detection and enhanced predictive analytics, some organizations are enabling their systems to shut down active threats before they reach their goal. This minimizes downtime and saves companies potentially millions.
Although full autonomy in cybersecurity is still a way off, AI is becoming more ubiquitous as a way to make existing security tools more useful. Today, organizations can begin integrating GenAI to drive small yet impactful changes to their workflows. Unlike public AI platforms such as OpenAI's, which can be susceptible to data security risks, privately deployed GenAI equips defenders to streamline operations and run more efficient investigations.
Able to focus on the most pressing threats, security teams can make faster, more informed decisions and remediate the risks that matter most. An advantage like AI could help defenders stay ahead in a game where the goalposts are always shifting.
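To make the triage idea concrete, here is a minimal, hypothetical sketch of how a team might rank alerts so the most pressing ones surface first. The field names (`severity`, `asset_criticality`, `anomaly_score`) and the weighting formula are illustrative assumptions, not taken from any product described above.

```python
# Hypothetical alert records; the fields and values are illustrative only.
alerts = [
    {"id": "A1", "severity": 3, "asset_criticality": 2, "anomaly_score": 0.40},
    {"id": "A2", "severity": 5, "asset_criticality": 5, "anomaly_score": 0.95},
    {"id": "A3", "severity": 2, "asset_criticality": 4, "anomaly_score": 0.10},
]

def triage_score(alert):
    # Weight the model's anomaly confidence by alert severity and by
    # how critical the affected asset is to the business.
    return alert["severity"] * alert["asset_criticality"] * alert["anomaly_score"]

# Work the queue from most to least pressing.
queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # ['A2', 'A1', 'A3']
```

In practice the anomaly score would come from an ML model rather than a hard-coded value, but the principle is the same: combine model output with business context so analysts spend their time where it matters most.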
GenAI can be enormously useful, but behavior analytics algorithms, which are common in AI models, can also actively and automatically foil attacks before they're able to do damage. For instance, a growing number of defenders are using AI-driven algorithms to stop an emerging cyber scourge: living off the land (LOTL) attacks, in which threat actors use operating system features or legitimate tools to launch attacks. From 2021 to 2023, nearly half of all ransomware attacks used LOTL tools, and legitimate software accounted for six of the 10 tools most commonly used across all ransomware attacks.
One way to stop LOTL attacks is to prevent anomalous use of "the land," that is, the legitimate software leveraged by attackers. Enter Adaptive Protection, available only from Broadcom. This powerful protection monitors an organization's typical day-to-day usage and then blocks behaviors that fall outside of that typical use profile. Adaptive Protection automatically learns and suggests exceptions to cover the usage it does observe, and blocks items outside normal usage without impacting recognized workflows and behaviors. The result is a block-allow policy unique to that organization or workgroup. By identifying anomalous behaviors that may indicate an LOTL attack, Adaptive Protection shrinks the attack surface.
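The learn-then-block idea behind this kind of behavior-based protection can be sketched in a few lines. This is a toy illustration of the general technique, not Broadcom's implementation: it baselines which parent-child process launches are normal during a learning window, then denies anything outside that profile. The process names are hypothetical examples.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy behavior profile: learns which (parent, child) process launches
    are normal, then blocks launches that fall outside that profile."""

    def __init__(self):
        self.observed = defaultdict(set)  # parent process -> child processes seen
        self.learning = True

    def record(self, parent, child):
        # During the learning window, every observed pair becomes an allowed exception.
        if self.learning:
            self.observed[parent].add(child)

    def finalize(self):
        # Lock the profile: launches not seen during learning are now denied.
        self.learning = False

    def allowed(self, parent, child):
        # While learning, everything passes; afterwards, only baseline behavior does.
        return self.learning or child in self.observed[parent]

baseline = BehaviorBaseline()
baseline.record("winword.exe", "splwow64.exe")  # e.g., a normal printing helper
baseline.finalize()

print(baseline.allowed("winword.exe", "splwow64.exe"))    # True: within profile
print(baseline.allowed("winword.exe", "powershell.exe"))  # False: possible LOTL abuse
```

The key property is that the policy is derived from each organization's own observed behavior, so a tool that is routine in one environment can still be blocked in another where it never appears.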
When it comes to “drafting” your stack, go for private AI systems with capabilities that will improve your security workflows and empower your teams to do more on the field. Not only are they generally more secure than public AI platforms, they’re often less expensive too.
This year, look for exciting enhancements to SymantecAI, our in-house AI platform that aims to make your security team's life even easier. We're just getting started, too. To stay at the forefront of innovation, more AI enhancements are on the roster, so even when attackers move the goalposts, we're still ahead.
At Technovera Co., we officially partner with well-known vendors in the IT industry to provide solutions tailored to our customers' needs. Technovera handles procurement and warranty coverage for these vendors' products, as well as the installation and configuration of the specified hardware and software.
We believe in providing technical IT solutions based on experience.