January 2025 – March 2026 · 15-Month Intelligence Window
142 AI security incidents scored across 15 months. Q1 2026 alone produced more incidents than all of 2025 — a clear inflection point in the AI threat landscape.
EXECUTIVE SUMMARY
The 15-month window from January 2025 through March 2026 reveals a threat landscape undergoing structural transformation — not incremental growth.
AI fraud grew from 15% of incidents in 2025 to 50% of all Q1 2026 incidents — 45 of 90 events. The category that was background noise became the dominant threat vector in a single quarter.
11 exploits targeting the Model Context Protocol appeared in a category that didn't exist before Q1 2026. From privilege escalation to DNS rebinding, attackers are probing this nascent protocol faster than the ecosystem can harden it. Average TSS: 70.6.
Mean TSS jumped from 59.9 in 2025 to 68.4 in Q1 2026 — a 14% severity escalation. Five incidents reached CRITICAL tier; zero did in all of 2025. The entire distribution shifted rightward in a single quarter.
15 agent-compromise incidents in Q1 2026 vs 5 in all of 2025 — a 3x single-quarter surge. Average TSS of 73.4 — the third-highest severity across all categories. From deleted databases to hijacked workflows.
19 supply-chain incidents in Q1 2026 (21% of the quarter) vs 6 in all of 2025 — a 217% single-quarter surge. Compromised AI frameworks and poisoned dependencies are becoming primary attack vectors as the ecosystem scales.
The EU AI Act was triggered by 69% of all Q1 incidents. GDPR appeared in 47%. As enforcement begins in August 2026, organizations face a narrowing window to align AI deployments with regulatory expectations.
This report covers 142 scored AI security incidents spanning January 2025 through March 2026 — a 15-month intelligence window. The acceleration is stark: 52 incidents across all of 2025, then 90 in Q1 2026 alone. The average composite Threat Severity Score across Q1 2026 reached 68.4 out of 100, up from 59.9 in 2025. Five incidents reached "Critical" status (TSS ≥ 85) in Q1 2026; zero did across all of 2025. The $54.5 billion in quantified composite loss exposure — driven by three macro-scale events in March 2026 — represents the most significant financial concentration in AI security history. Our analysts assess that the true aggregate cost across all 142 incidents is substantially higher, as 84% of incidents had undisclosed or unquantifiable financial consequences.
THREAT SEVERITY DISTRIBUTION
2025 produced zero Critical-tier incidents across 52 events. Q1 2026 introduced five in a single quarter — a qualitative threshold crossing. Every incident is scored using our proprietary Threat Severity Score (TSS).
INCIDENT CLASSIFICATION
The category distribution of AI security threats completely flipped between 2025 and Q1 2026. What was noise became signal; what was signal became dominant.
Deepfakes, synthetic media, voice cloning, and AI-generated scam content. Grew from 8 incidents in all of 2025 to 45 in Q1 2026 alone — a 463% single-quarter surge. Average TSS: 63.8.
Misconfigured AI deployments, inadequate guardrails, and access control failures. Doubled from 10 in 2025 to 20 in Q1 2026. Average TSS: 73.0 — the fifth-highest severity category.
Compromised AI frameworks, poisoned dependencies, and third-party model risks. More than tripled, from 6 in 2025 to 19 in Q1 2026 — a 217% surge. Average TSS: 75.3 — highest severity among categories with 10+ incidents.
Autonomous AI agents executing unauthorized actions, from database deletions to workflow hijacking. Tripled from 5 in 2025 to 15 in Q1 2026. Average TSS: 73.4. A rapidly emerging category as agentic AI enters production.
YEAR-OVER-YEAR SHIFT
The shift from 2025 to Q1 2026 is not incremental evolution — it is structural transformation. Mean severity jumped 14%. The dominant threat category flipped entirely. And for the first time in our database, incidents reached CRITICAL tier.
SPOTLIGHT ANALYSIS
The Model Context Protocol (MCP) — an open standard for connecting AI models to external tools and data sources — became a new attack surface in Q1 2026. Eleven incidents represent a category that simply didn't exist before this quarter.
CRITICAL INCIDENTS
These incidents scored 85 or above on our Threat Severity Score — representing the highest-impact AI security events of Q1 2026. Zero incidents reached this tier across all of 2025.
The U.S. government conducted military air strikes using Anthropic AI systems within hours of the President announcing a federal ban on Anthropic AI tools — highlighting critical governance gaps between AI policy announcements and operational enforcement. The incident exposed systemic failures in AI accountability at the highest levels of government.
Three AI laboratories — DeepSeek, Moonshot, and MiniMax — conducted systematic distillation attacks generating over 16 million exchanges with Claude to extract and replicate its capabilities. Hundreds of millions in R&D value compromised in one of the largest intellectual property extraction campaigns ever recorded against an AI system.
A large-scale disinformation campaign using AI-generated fake videos and images depicting false war scenes flooded social media during the initial weeks of the Iran conflict. The synthetic media was sophisticated enough to evade platform detection and drove real-world policy confusion at the international level.
Three security vulnerabilities discovered in LangChain and LangGraph — two of the most widely used open-source frameworks for building LLM applications. Successful exploitation could allow attackers to access server files, environment secrets, and database contents across the vast enterprise ecosystem built on these frameworks.
McKinsey & Company's internal AI platform Lilli, serving 43,000+ employees, was compromised through RAG knowledge base poisoning and prompt injection. Attackers exploited the retrieval pipeline to inject malicious content into the company's trusted AI advisor — potentially exposing client data across hundreds of active engagements.
FINANCIAL IMPACT
142 incidents across 15 months, scored against six Breach Economic Loss (BEL) dimensions. Q1 2026 alone accounts for $54.1B of total exposure — driven by three macro-scale events. Market & Valuation Impact is the dominant loss dimension at $30.5B.
Six-dimension Breach Economic Loss model · Q1 2026 totals across all 90 incidents
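The roll-up behind these totals can be sketched as a per-dimension sum over incidents. This is an illustrative reconstruction, not the report's implementation: only "Market & Valuation Impact" is named in the report, so the other five dimension labels below are hypothetical placeholders.

```python
from collections import defaultdict

# Six-dimension Breach Economic Loss (BEL) roll-up, sketched.
# Only "Market & Valuation Impact" is named in the report; the other
# five dimension names here are hypothetical placeholders.
BEL_DIMENSIONS = frozenset({
    "Market & Valuation Impact",
    "Direct Financial Loss",         # hypothetical
    "Incident Response & Recovery",  # hypothetical
    "Regulatory & Legal Exposure",   # hypothetical
    "Operational Disruption",        # hypothetical
    "Reputational Harm",             # hypothetical
})

def aggregate_bel(incidents: list[dict[str, float]]) -> dict[str, float]:
    """Sum quantified USD losses per BEL dimension across scored incidents.

    An incident with undisclosed losses is an empty mapping and
    contributes nothing to any dimension total.
    """
    totals: dict[str, float] = defaultdict(float)
    for incident in incidents:
        for dimension, loss_usd in incident.items():
            if dimension not in BEL_DIMENSIONS:
                raise KeyError(f"unknown BEL dimension: {dimension}")
            totals[dimension] += loss_usd
    return dict(totals)
```

Because incidents with no quantifiable loss (84% of the dataset, per the report) appear as empty records, composite totals built this way necessarily understate the true aggregate cost.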
SECTOR IMPACT ANALYSIS
AI security incidents in the 15-month window touched every sector. Media and entertainment bore the brunt — driven by the deepfake epidemic — while technology platforms and government faced sophisticated, high-severity attacks.
Percentages exceed 100% because incidents frequently impact multiple sectors. The concentration in media and entertainment reflects the deepfake epidemic — 70 of 142 incidents across 15 months involved synthetic media targeting public figures, politicians, or consumer fraud victims. Technology and AI platforms dominate the high-severity end, with 4 of 5 Critical incidents directly involving AI platform vulnerabilities or misuse. Government and defense incidents, while fewer, carry outsized geopolitical consequences — from AI-powered wartime disinformation to military AI governance failures that exposed the gap between AI policy and AI reality.
YEAR-OVER-YEAR GROWTH
The trajectory from sporadic incidents in 2016 to 90 in a single quarter charts a threat landscape in exponential growth. Q1 2026 alone exceeded the entire 2025 annual total by 73%.
At Q1 2026's pace, full-year 2026 is on track for 360+ AI security incidents — nearly 7x the 2025 annual total of 52. If the severity trajectory holds, that implies more than 20 Critical-tier events in a single calendar year. The AI threat landscape is not maturing toward stability — it is accelerating toward a new baseline.
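The run-rate arithmetic behind this projection is straightforward. A minimal sketch, assuming Q1 2026's incident volume and Critical-tier count simply repeat each quarter:

```python
# Source figures from the report.
incidents_2025 = 52
q1_2026_incidents = 90
q1_2026_critical = 5

# Naive run-rate projection: Q1 2026 pace held for four quarters.
projected_2026_incidents = q1_2026_incidents * 4   # 360 incidents
projected_2026_critical = q1_2026_critical * 4     # 20 Critical-tier events
growth_multiple = projected_2026_incidents / incidents_2025

print(projected_2026_incidents, projected_2026_critical, round(growth_multiple, 1))
# prints: 360 20 6.9
```

A flat run rate is the most conservative reading; if quarter-over-quarter growth continues, both totals land higher.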
REGULATORY TRIGGER ANALYSIS
Our analysts map each incident to the regulatory frameworks it would trigger. The EU AI Act — reaching full applicability in August 2026 — was implicated in nearly 7 out of every 10 incidents. Regulatory violations now account for 18% of all 142 scored incidents — up 4x from 2025.
METHODOLOGY & DOWNLOAD
Every incident in this report is evaluated using our proprietary Threat Severity Score (TSS) — a four-vector composite score designed to capture the full dimensions of AI security risk. 142 incidents scored across 15 months.
Evaluates attack complexity, exploit novelty, and the technical skill required. Zero-day exploits and novel attack chains score highest. Scored 0–25.
Measures how easily the incident could be replicated, scaled, or weaponized. Publicly available tools and open-source exploits increase this score. Scored 0–25.
Assesses direct organizational damage — data loss, financial impact, operational disruption, and reputational harm. Scores reflect both immediate and downstream consequences. Scored 0–25.
Captures broader ecosystem effects — regulatory catalysts, market sentiment shifts, insurance implications, and cross-industry ripple effects. Scored 0–25.
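The four-vector composite above can be sketched in a few lines. This is an illustrative reconstruction, not the published rubric: the vector names (`sophistication`, `reproducibility`, `impact`, `systemic`) are assumptions; only the 0–25 per-vector range, the 0–100 composite, and the Critical threshold (TSS ≥ 85) come from the report.

```python
from dataclasses import dataclass, astuple

@dataclass
class TSSVectors:
    """Four-vector Threat Severity Score; each vector is scored 0-25.

    Vector names are illustrative placeholders, not the report's rubric.
    """
    sophistication: float   # attack complexity, exploit novelty, skill required
    reproducibility: float  # ease of replication, scaling, weaponization
    impact: float           # direct organizational damage
    systemic: float         # ecosystem, regulatory, and market ripple effects

    def composite(self) -> float:
        parts = astuple(self)
        if any(not 0.0 <= p <= 25.0 for p in parts):
            raise ValueError("each vector must be scored 0-25")
        return sum(parts)  # composite TSS on a 0-100 scale

def is_critical(tss: float) -> bool:
    # The report's only published tier threshold: Critical at TSS >= 85.
    return tss >= 85.0

# A hypothetical Critical-tier incident:
tss = TSSVectors(sophistication=23, reproducibility=20,
                 impact=24, systemic=19).composite()
# tss == 86.0; is_critical(tss) is True
```

Equal 0–25 weighting makes the four vectors additive, so no single vector can push an incident into the Critical tier on its own — at least 85 of 100 points must accumulate across all four.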
Each incident is mapped to the AIRS risk domains it would trigger for enterprise insurance underwriting. Model Risk (58 triggers) and Data & AI Assets (55) are the most frequently exposed domains — reflecting a threat landscape dominated by model-level attacks and data integrity failures across 142 scored incidents.
The Verizon DBIR maps its findings to CIS Critical Security Controls — a prescriptive, prioritized set of defensive actions. No equivalent exists for AI security. The frameworks that do exist serve important but different purposes:
| Framework | What It Does | What It Doesn't Do |
|---|---|---|
| NIST AI RMF | Governance & risk management process | No prescriptive controls |
| EU AI Act | Risk-based legal compliance | No technical implementation guidance |
| ISO 42001 | AI management system certification | High-level; not security-specific |
| MITRE ATLAS | Adversarial attack taxonomy (14 tactics, 66 techniques) | Attack mapping, not defensive controls |
| OWASP LLM Top 10 | LLM vulnerability classification | Narrow scope; no prioritized actions |
No prescriptive, prioritized, implementation-ready AI security controls framework exists today. Our incident intelligence — 142 scored events across 15 months, 17-category classification taxonomy, and 9-domain AIRS model — is laying the empirical groundwork for one. The data in this report represents the kind of real-world evidence base from which such controls will eventually be derived.
Get the complete AI Security Breach Report as a PDF — free, forever. No email required.
Download PDF