Agentic AI and the Future of Cybersecurity


When talking to customers about AI these days, we usually get two very different reactions.

Some lean forward, excited about the promise of fewer alerts, faster response, maybe even lights-out SOC operations someday. Others lean back, uneasy about the risks: does this mean attackers will be able to run thousands of hacks at once, automatically?

The truth is, both reactions are justified.

We are at the start of a shift toward agentic AI. This is not the same as today’s AI copilots that wait for prompts. Agentic AI acts like an independent operator. It can plan, adapt, and execute a sequence of steps on its own, adjusting if it runs into resistance.

Phase 1 – Today (2025) – Smarter Tools, Human in the Loop 

Right now, AI is being used to accelerate specific tasks. Attackers use it to write convincing phishing emails, clone voices for social engineering scams, or scan for network vulnerabilities. These are powerful accelerators, but there is always a human at the wheel. Organisations are using AI in much the same way: copilots that summarize alerts, parse logs, or draft reports. Endpoint, network, cloud, and identity protections each carry their weight, but the coordination is still managed by people. It is still a human-led fight. 

Phase 2 – The Next 1–2 Years – Autonomous Campaigns

The real shift will begin when agentic AI can run entire attack campaigns. Instead of simply creating phishing emails, it will manage the whole lifecycle. It will deliver the phish, capture credentials, test them across cloud apps, pivot through the network, escalate privileges, and establish persistence. If blocked in one direction, it will instantly adjust and try another. It will not get tired or forget a step. 

For organisations, this will make traditional, manual SOC operations obsolete. We cannot out-staff an adversary that scales infinitely. AI-driven defence will become the norm. Systems will need to isolate devices, reset accounts, block malicious flows, and coordinate responses in seconds, often before a human even sees the alert. Analysts will become supervisors of AI, providing oversight, strategy, and context rather than firefighting every incident.
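The machine-speed response loop described above can be sketched roughly as follows. Everything here is an illustrative assumption, not any product's API: the alert fields, action functions, and thresholds are hypothetical, and a real playbook would act through a security platform's own interfaces.

```python
from dataclasses import dataclass

# Hypothetical alert shape; real SOC platforms expose far richer telemetry.
@dataclass
class Alert:
    device_id: str
    account: str
    severity: int      # 0-10, higher is worse
    confidence: float  # 0.0-1.0, detection confidence

# Placeholder actions standing in for real containment APIs.
def isolate_device(device_id: str) -> str:
    return f"isolated {device_id}"

def reset_account(account: str) -> str:
    return f"reset {account}"

def escalate_to_analyst(alert: Alert) -> str:
    return f"queued {alert.device_id} for human review"

def respond(alert: Alert) -> list[str]:
    """Auto-contain high-confidence, high-severity alerts in seconds;
    route ambiguous ones to a human supervisor instead."""
    actions = []
    if alert.severity >= 8 and alert.confidence >= 0.9:
        actions.append(isolate_device(alert.device_id))
        actions.append(reset_account(alert.account))
    else:
        actions.append(escalate_to_analyst(alert))
    return actions

print(respond(Alert("laptop-42", "j.doe", severity=9, confidence=0.95)))
```

The point of the sketch is the division of labour: the machine handles containment where evidence is strong, and the analyst supervises the grey area rather than triaging every alert by hand.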

Phase 3 – The Next 3–5 Years – Adaptive Adversaries

The next evolution will be adaptability. Agentic AI will learn from failure and self-correct. If a stolen credential does not work, it will pivot to probing a cloud misconfiguration. If an endpoint is patched, it will look for lateral movement opportunities. If one vector is shut down, it will generate another. Imagine a tireless adversary checking every door and window at once, never discouraged, never slowing down. 

This is where defenders will need seamless integration across endpoint, network, cloud, and identity. Signals will have to flow instantly between layers. Defensive AI will need the full context to anticipate the attacker’s next move and cut it off before it happens. In this future, no part of security can afford to live in a silo. 
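One way to picture that cross-layer integration is a correlator that fuses weak signals from each layer by the entity they share. The signal names and the two-layer threshold below are illustrative assumptions, not a real product interface; the idea is simply that no single layer looks decisive on its own.

```python
from collections import defaultdict

# Hypothetical signals: (layer, entity, description).
signals = [
    ("identity", "j.doe", "impossible-travel login"),
    ("cloud", "j.doe", "mass file download from SaaS app"),
    ("network", "laptop-42", "beaconing to rare domain"),
    ("endpoint", "j.doe", "credential-dumping tool executed"),
]

def correlate(signals, min_layers=2):
    """Group signals by entity and flag any entity seen across
    multiple security layers within the current window."""
    by_entity = defaultdict(dict)
    for layer, entity, detail in signals:
        by_entity[entity][layer] = detail
    return {
        entity: layers
        for entity, layers in by_entity.items()
        if len(layers) >= min_layers
    }

incidents = correlate(signals)
print(sorted(incidents))  # entities with multi-layer evidence
```

Here "j.doe" surfaces because identity, cloud, and endpoint signals converge on the same user, while a lone network anomaly on "laptop-42" stays below the threshold; that convergence is exactly what gets lost when each layer lives in a silo.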

What Still Needs to Happen for Agentic AI to Arrive 

We are not there yet, and that is important to remember. To reach this level, AI development still has hurdles. First, models need stronger memory and persistence to manage long-term goals without drifting. Second, they need better reasoning to chain together actions in the right sequence across different systems. Third, they need to interact with software, APIs, and environments reliably and autonomously, without constant guardrails. Finally, they require the ability to safely self-improve, learning from failed attempts without veering into unpredictable behaviour.

These steps are being worked on today in research labs, and progress is accelerating. We already see demonstrations of AI agents that can code, debug, and iterate toward a goal. Translating that into full, autonomous cyber campaigns is not far-fetched, but it still requires a few key breakthroughs. 

What Will Tip the Scales 

Whether this favours attackers or defenders will depend on who integrates and deploys agentic AI more effectively. If adversaries get there first, we may see autonomous ransomware gangs, AI-driven phishing factories, and adaptive malware campaigns running at global scale. If organisations seize the opportunity, SOCs can become AI-first operations where human expertise scales infinitely through automation. 

For attackers, the advantage will come from using agentic AI to lower the barrier to entry. Complex campaigns that once required skilled teams could be run by a single bad actor with access to the right tools. For organisations, the advantage will come from integration. Endpoint, network, cloud, and identity need to share signals in real time. That is where security platforms like WatchGuard already play an essential role in bringing these layers together into one coordinated defence fabric. 

Why Deepfakes Aren’t the Whole Story 

It is easy to get distracted by deepfakes and AI-powered scams. Those are real issues, but they are not the main event. The bigger shift is when agentic AI starts stitching everything together: stealing identities, exploiting cloud misconfigurations, moving laterally across networks, persisting on endpoints, all at once, automatically, and at speed. Focusing only on spotting fake videos or voices is like guarding the front door while the attacker slips in through the side window.

Where This Leaves Us 

We are not yet at the point of self-replicating AI hackers, but the trajectory is clear. Phase 1 is here already. Phase 2 is emerging in proof-of-concept attacks, and Phase 3 will not be far behind. The lesson is that organisations cannot wait. Automation must be built into detection and response now, and every layer of security must speak the same language.

The future fight will not be human against human. It will be AI against AI. The organisations that integrate their defences and adopt agentic tools responsibly will be the ones still standing when the rules of engagement change.