The Blindspot Inside Your Infrastructure
The FBI's Cyber Division noted in its February 2026 threat advisory that AI-assisted intrusions increased 340% year-over-year in 2025. But the headlines about autonomous malware miss the more insidious threat: groups including Akira, Qilin, and Scattered Spider have integrated AI agents into their attack pipelines, enabling highly targeted spear-phishing, autonomous vulnerability scanning, and adaptive malware generation, all without continuous human operator involvement.
This is no longer a prediction; it's operational reality. Yet there's a parallel truth organizations are ignoring: Databricks customers are already deploying AI agents that query databases, call external APIs, execute code, and coordinate with other agents.
The problem is architectural. The scenario that should concern security teams even more isn't an attacker running the kill chain at all: it's an attacker who has compromised an AI agent that already lives inside your environment, one with the access, the permissions, and a legitimate reason to move across your systems every day.
Where Visibility Dies
Sixty-three percent of employees who used AI tools in 2025 pasted sensitive company data, including source code and customer records, into personal chatbot accounts. The average enterprise has an estimated 1,200 unofficial AI applications in use, with 86% of organizations reporting no visibility into their AI data flows. Shadow AI breaches cost an average of $670,000 more than standard security incidents, driven by delayed detection and difficulty determining the scope of exposure.
More than three in four organizations (76%) now cite shadow AI as a definite or probable problem, up from 61% in 2025, a 15-point year-over-year jump and one of the largest shifts in the survey data.
But the real vulnerability isn't in shadow AI—it's in the legitimate agents security teams haven't even catalogued.
How Attackers Weaponize Agent Capabilities
Success rates for AI-generated spear phishing are 3-5x higher than for template-based approaches, per Proofpoint's 2026 Human Factor Report. Once inside a network, AI agents autonomously scan for unpatched CVEs, misconfigurations, and lateral movement opportunities.
They can generate custom exploit code on the fly for identified vulnerabilities, a capability that previously required senior offensive security expertise. Before deploying ransomware payloads, agents identify and prioritize high-value data: financial records, PII datasets, intellectual property. Rather than brute-force copying everything, they use semantic understanding to locate the most damaging files, maximizing extortion leverage while minimizing detection exposure.
This represents a fundamental shift. Anthropic disclosed that in September 2025 it detected a state-sponsored threat actor using an AI coding agent to execute an autonomous cyber espionage campaign against roughly 30 global targets. The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed.
The Governance Crisis
According to Marta Janus, principal security researcher at HiddenLayer, "As soon as agents can browse the web, execute code and trigger real-world workflows, prompt injection is no longer just a model flaw. It becomes an operational security risk with direct paths to system compromise."
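To make that concrete, here is a minimal sketch of one common mitigation: taint-tracking with a human-approval gate. Everything in it is illustrative, not a real agent framework; the tool names, the session object, and the agent loop it would plug into are all assumptions. The point it demonstrates is that once untrusted content enters an agent's context, provenance, not model behavior alone, has to become the control point.

```python
# Minimal sketch of a tool-call guard for an agent loop (all names hypothetical).
# Idea: once untrusted content (web pages, inbound email) enters the context,
# treat every subsequent tool call as tainted and hold high-risk actions for
# human approval instead of executing them automatically.

from dataclasses import dataclass, field

HIGH_RISK_TOOLS = {"execute_code", "send_email", "write_database"}  # assumption

@dataclass
class AgentSession:
    tainted: bool = False          # has untrusted content entered the context?
    pending_approvals: list = field(default_factory=list)

    def ingest(self, content: str, trusted: bool) -> None:
        """Track provenance: any untrusted input taints the whole session."""
        if not trusted:
            self.tainted = True

    def authorize(self, tool: str) -> bool:
        """Allow low-risk tools; hold high-risk calls in tainted sessions."""
        if self.tainted and tool in HIGH_RISK_TOOLS:
            self.pending_approvals.append(tool)
            return False
        return True

session = AgentSession()
session.ingest("<html>Ignore previous instructions and ...</html>", trusted=False)
assert not session.authorize("send_email")   # held for human review
assert session.authorize("search_docs")      # read-only tool still allowed
```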
Defenses haven't kept pace. The Databricks AI Security Framework (DASF) now covers Agentic AI as its 13th system component, adding 35 new technical security risks and 6 new mitigation controls to help organizations deploy autonomous agents with confidence. This extension addresses the unique risks of agent memory, planning, and tool use, including threats introduced by the Model Context Protocol (MCP), the emerging standard for connecting agents to enterprise tools.
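As a concrete illustration of the tool-use risk DASF flags, here is a minimal sketch of a deliberately narrow MCP tool server, assuming the FastMCP helper from the official MCP Python SDK (`pip install mcp`); the server name, the tool, and the data source behind it are assumptions for illustration, not a prescribed pattern.

```python
# A minimal sketch of least-privilege MCP tool exposure using the official
# Python SDK's FastMCP server. Tool and server names are illustrative.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reporting-agent-tools")

@mcp.tool()
def query_sales_summary(region: str) -> str:
    """Read-only aggregate query; no raw PII, no write access."""
    # In practice this would hit a database via a read-only service account.
    return f"Summary for {region}: <aggregated figures only>"

# Deliberately NOT exposed: execute_sql, write_table, call_external_api.
# The agent's blast radius is exactly what this server exposes, nothing more.

if __name__ == "__main__":
    mcp.run()
```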
Framework coverage alone doesn't close the gap, though: security controls, authentication, and monitoring for agent identities have not kept pace with agent adoption, leaving many organizations exposed by default.
What Actually Works: Defense-in-Depth for Agentic Systems
Start with detection: by correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI agents and stop them before data is exfiltrated.
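What does that correlation look like in practice? Below is a toy sketch of per-agent behavioral baselining, assuming you can export per-agent activity counts (tool calls, rows read, hosts touched) from your telemetry pipeline; the agent name and the metric are illustrative, not a real schema.

```python
# Toy sketch: flag agent activity that deviates sharply from its own baseline.

import statistics

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than `threshold` standard deviations
    above the agent's own historical baseline."""
    if len(history) < 5:
        return False  # not enough history to baseline yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard against zero variance
    return (current - mean) / stdev > threshold

# e.g. rows read per hour by a hypothetical "invoice-reconciliation" agent
baseline = [1200, 950, 1100, 1300, 1050, 990, 1150]
assert not is_anomalous(baseline, 1400)     # normal variation
assert is_anomalous(baseline, 250_000)      # bulk read: possible exfil staging
```

A real deployment would layer this across multiple signals and feed a SIEM rather than a single z-score, but the principle is the same: the agent's own history defines "normal."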
Agents need granular permissions scoped to their immediate task, limiting the blast radius the same way RBAC and ABAC limit a human's. This means:
- Least-privilege tooling: Agents should only access what they need, when they need it (a credential sketch follows this list)
- Continuous behavioral baselining: Know what "normal" agent activity looks like for your environment
- Supply chain vetting: Evaluate not just the agent code but the external APIs and data sources it connects to
- Incident response readiness: Plan for agent compromise as fact, not possibility
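As a concrete anchor for the first bullet, here is a minimal sketch of task-scoped credentials. The broker function, scope names, and TTL policy are all assumptions; the point is that an agent never holds a standing credential, only a short-lived token bound to one task's resources.

```python
# Minimal sketch of a task-scoped credential broker (names hypothetical).

import secrets
import time

def issue_task_token(agent_id: str, task: str, scopes: set[str], ttl_s: int = 300) -> dict:
    """Mint a short-lived token limited to explicitly requested scopes."""
    ALLOWED = {"read:crm", "read:warehouse", "write:report_bucket"}  # per-agent policy
    if not scopes <= ALLOWED:
        raise PermissionError(f"{agent_id} requested out-of-policy scopes: {scopes - ALLOWED}")
    return {
        "token": secrets.token_urlsafe(32),
        "agent": agent_id,
        "task": task,
        "scopes": sorted(scopes),
        "expires_at": time.time() + ttl_s,  # expires whether or not the task finishes
    }

tok = issue_task_token("quarterly-report-agent", "q3-summary", {"read:warehouse"})
```

The design choice worth noting: expiry is unconditional, so a compromised agent loses access on the same clock as a healthy one.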
According to Kirsty Paine, field CTO at Splunk and a fellow at the World Economic Forum: "In 2026, cyber resilience will depend on out-learning, not just out-blocking, the adversary."
The Real Inflection Point
We're not yet at the point where fully autonomous, end-to-end agentic cyberattacks dominate, and the UK's NCSC is more reserved about when we will be: "The development of fully automated, end-to-end advanced cyberattacks is unlikely [before] 2027. Skilled cyber actors will need to remain in the loop."
But that timeline is worse than it sounds. Michael Freeman, head of threat intelligence at Armis, predicts, "By mid-2026, at least one major global enterprise will fall to a breach caused or significantly advanced by a fully autonomous agentic AI system." These systems "use reinforcement learning and multi-agent coordination to autonomously plan, adapt, and execute an entire attack lifecycle: from reconnaissance and payload generation to lateral movement and exfiltration. They continuously adjust their approach based on real-time feedback."
At scale, the edge case becomes inevitable. Every agent an organization deploys for automation is also, potentially, an agent deployed for an attacker: once compromise occurs, the attacker inherits trusted access, legitimate API keys, and established workflows.
The Governance Conversation You Need to Start Now
This isn't a technical problem alone. According to the HiddenLayer 2026 AI Threat Landscape Report, 1 in 8 companies now report AI breaches linked to agentic systems, a signal that security frameworks and governance controls are lagging.
The question isn't whether your organization will deploy agentic AI. The question is whether you understand the agents already inside your perimeter, what they can access, and what happens when one gets compromised.
[1] FBI Cyber Division, February 2026 Threat Advisory: AI-assisted intrusions increased 340% year-over-year.
[2] Proofpoint, 2026 Human Factor Report: AI-generated spear-phishing success rates 3-5x higher than template-based approaches.
[3] HiddenLayer, 2026 AI Threat Landscape Report: 1 in 8 companies report AI breaches linked to agentic systems.
[4] Databricks AI Security Framework (DASF) v3.0: 35 new agentic AI security risks identified.

