The Gravitee State of AI Agent Security 2026 Report, based on a survey of 900 executives and technical practitioners across the United States and United Kingdom, documents that 88% of organizations experienced confirmed or suspected AI agent security or data privacy incidents within the last year. In healthcare specifically, where AI agents are embedded in clinical workflows, EHR systems, diagnostic platforms, billing infrastructure, and supply chains, that figure reaches 92.7%, the highest of any sector surveyed. The report indicates that large firms in these countries have deployed 3 million AI agents combined, with half of them, 1.5 million, running without any active monitoring or security controls and therefore exposed to unauthorized actions at machine speed.
The findings reveal a fundamental identity crisis underlying these incidents. According to the report, 45.6% of teams rely on shared API keys for agent-to-agent authentication—a foundational credential security failure that MITRE ATT&CK classifies under T1552 (Unsecured Credentials). Only 21.9% of technical teams treat AI agents as independent, identity-bearing entities with their own credential scope and behavioral baseline. Furthermore, 82% of executives believe existing policies protect them from unauthorized agent actions, while only 21% have actual visibility into what their agents can access, which tools they call, or what data they touch.
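One way to read the 45.6% figure is as shared secrets standing in for identity: any agent holding the key can do anything the key allows. The alternative the report's 21.9% minority practices, per-agent, short-lived, scope-bound credentials, might look like the following minimal sketch. The token format, signing scheme, scope names, and function names here are illustrative assumptions, not anything prescribed by the report.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # hypothetical; a real deployment would use a secrets manager


def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Mint a short-lived credential bound to one agent and an explicit scope list."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"


def verify_scope(token: str, required_scope: str) -> dict:
    """Reject expired, tampered, or out-of-scope tokens before any action runs."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    if required_scope not in claims["scopes"]:
        raise PermissionError(f"agent {claims['sub']} lacks scope {required_scope}")
    return claims


# A billing agent can read claims data but is denied EHR access by construction.
token = issue_agent_token("billing-agent-07", scopes=["claims:read"])
verify_scope(token, "claims:read")            # passes
# verify_scope(token, "ehr:write")            # raises PermissionError
```

The point of the sketch is that revocation, blast radius, and behavioral baselining all become tractable once each agent carries its own credential; with a shared API key, none of them are.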
Healthcare organizations face particularly severe consequences from these security gaps. Healthcare breach costs average $9.77 million, the highest of any industry for the 14th consecutive year, with shadow AI incidents adding an average of $670,000 per incident. The IBM 2026 X-Force Threat Intelligence Index documented a 44% increase in attacks that begin with the exploitation of public-facing applications, largely driven by missing authentication controls. At HIMSS 2026, healthcare's largest technology conference, experts raised concerns that AI agents from Epic, Google, Microsoft, and others are being deployed without sufficient clinical testing or governance validation, as reported by STAT News.
The Gravitee report documents that current security frameworks, designed for deterministic software, are structurally incapable of governing autonomous systems that reason, adapt, and act dynamically. Frameworks such as NIST AI RMF and ISO 42001 provide organizational governance structures but do not address the specific technical controls agentic deployments require: tool call parameter validation, real-time scope enforcement, pre-execution identity trust scoring, or kill-chain contextual fusion. Runtime monitoring can observe an agent doing something it should not, but only after the fact; it cannot block the action before it executes.
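To make that distinction concrete, a pre-execution gate sits between the agent's decision and the tool invocation, validating parameters and scope before anything runs, rather than logging afterward. The sketch below is a hypothetical illustration of that pattern; the policy table, tool names, and `gate_tool_call` function are our assumptions, not VectorCertain's pipeline or any framework-mandated design.

```python
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    """Allow-listed parameters and required scope for a single tool."""
    required_scope: str
    allowed_params: set[str] = field(default_factory=set)


# Deny-by-default policy table: tools absent from it cannot be called at all.
POLICIES = {
    "ehr.read_record": ToolPolicy("ehr:read", {"patient_id"}),
    "billing.submit_claim": ToolPolicy("claims:write", {"claim_id", "amount"}),
}


def gate_tool_call(agent_scopes: set[str], tool: str, params: dict) -> None:
    """Pre-execution check: raise before the tool runs instead of alerting after."""
    policy = POLICIES.get(tool)
    if policy is None:
        raise PermissionError(f"tool {tool!r} is not governed; call denied")
    if policy.required_scope not in agent_scopes:
        raise PermissionError(f"missing scope {policy.required_scope!r}")
    unexpected = set(params) - policy.allowed_params
    if unexpected:
        raise PermissionError(f"unvalidated parameters: {sorted(unexpected)}")


# An agent scoped only to ehr:read is stopped before it can touch billing.
gate_tool_call({"ehr:read"}, "ehr.read_record", {"patient_id": "12345"})   # passes
# gate_tool_call({"ehr:read"}, "billing.submit_claim", {"claim_id": "9"})  # raises
```

The design choice is deny-by-default: an unknown tool or an unvalidated parameter is a hard failure at decision time, which is precisely what runtime observation alone cannot provide.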
VectorCertain LLC claims its SecureAgent platform would have blocked every documented failure class before it reached patient records, databases, or clinical systems. The company states that its four-gate pre-execution governance pipeline has been validated across four frameworks: the CRI Profile v2.1's 278 cybersecurity diagnostic statements (including HIPAA-mapped PROTECT and DETECT controls), the U.S. Treasury FS AI RMF's 230 control objectives, MITRE ATT&CK ER7++ sprint results (11,268 tests, 0 failures), and MITRE ATT&CK ER8 self-evaluation (14,208 trials, TES 98.2%). According to VectorCertain, SecureAgent achieves a false positive rate of 1 in 160,000, which the company says is 53,333 times lower than the EDR industry average.
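As a quick sanity check on that ratio (our arithmetic, not a figure from the report or the vendor): a rate 53,333 times lower than 1 in 160,000 implies an assumed EDR baseline of roughly one false positive in every three alerts.

```python
# Back-of-the-envelope check of the claimed ratio (our arithmetic, not the report's).
secureagent_fp_rate = 1 / 160_000   # claimed: 1 false positive per 160,000 events
claimed_improvement = 53_333        # claimed: times lower than the EDR average

implied_edr_fp_rate = secureagent_fp_rate * claimed_improvement
print(f"Implied EDR baseline: {implied_edr_fp_rate:.3f}")  # ~0.333, i.e. about 1 in 3
```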
The report's implications extend beyond immediate security concerns to fundamental questions about AI governance in critical infrastructure. With AI agents now embedded in core components of distributed systems and behaving as autonomous infrastructure that inherits the same security expectations as any production service, the primary risk is no longer that an agent might be wrong but that it is efficient at carrying out actions it was never intended to perform. The full Gravitee report is available at https://www.gravitee.io/state-of-ai-agent-security.


