Threat Intelligence · Q1 2026 — 14-Month View

AI Security
Breach Report

January 2025 – March 2026 · 14-Month Intelligence Window

142 AI security incidents scored across 14 months. Q1 2026 alone produced more incidents than all of 2025 — a clear inflection point in the AI threat landscape.

142 Incidents Scored
$54.5B Financial Exposure
5 Critical Events
11 MCP Exploits
173% YoY Acceleration
Download Full Report

EXECUTIVE SUMMARY

Six Signals Defining AI Threat Evolution

The 14-month window from January 2025 through March 2026 reveals a threat landscape undergoing structural transformation — not incremental growth.

AI Fraud Dominates the Landscape

AI fraud grew from 15% of incidents in 2025 to 50% of all Q1 2026 incidents — 45 of 90 events. The category that was background noise became the dominant threat vector in a single quarter.


MCP: A Brand-New Attack Surface

11 exploits targeting the Model Context Protocol appeared in a category that didn't exist before Q1 2026. From privilege escalation to DNS rebinding, attackers are probing this nascent protocol faster than the ecosystem can harden it. Average TSS: 70.6.


Severity Escalation

Mean TSS jumped from 59.9 in 2025 to 68.4 in Q1 2026 — a 14% severity escalation. Five incidents reached CRITICAL tier; zero did in all of 2025. The entire distribution shifted rightward in a single quarter.


Agent Autonomy Backfires

15 agent-compromise incidents in Q1 2026 vs 5 in all of 2025 — a 3x single-quarter surge. Average TSS of 73.4 — the third-highest severity across all categories. From deleted databases to hijacked workflows.


Supply Chain Acceleration

19 supply-chain incidents in Q1 2026 (21% of the quarter) vs 6 in all of 2025 — a 217% single-quarter surge. Compromised AI frameworks and poisoned dependencies are becoming primary attack vectors as the ecosystem scales.


Regulatory Triggers Everywhere

The EU AI Act was triggered in 69% of all Q1 incidents. GDPR appeared in 47%. As enforcement begins in August 2026, organizations face a narrowing window to align AI deployments with regulatory expectations.

This report covers 142 scored AI security incidents spanning January 2025 through March 2026 — a 14-month intelligence window. The acceleration is stark: 52 incidents across all of 2025, then 90 in Q1 2026 alone. The average composite Threat Severity Score across Q1 2026 reached 68.4 out of 100, up from 59.9 in 2025. Five incidents reached "Critical" status (TSS ≥ 85) in Q1 2026; zero did across all of 2025. The $54.5 billion in quantified composite loss exposure — driven by three macro-scale events in March 2026 — represents the most significant financial concentration in AI security history. Our analysts assess that the true aggregate cost across all 142 incidents is substantially higher, as 84% of incidents had undisclosed or unquantifiable financial consequences.

THREAT SEVERITY DISTRIBUTION

Severity Escalation: From Elevated to Critical

2025 produced zero Critical-tier incidents across 52 events. Q1 2026 introduced five in a single quarter — a qualitative threshold crossing. Every incident is scored using our proprietary Threat Severity Score (TSS).

5
Critical
TSS 85–100
2025: 0 · Q1 2026: 5
46
Severe
TSS 70–84
2025: 8 · Q1 2026: 38
78
Elevated
TSS 50–69
2025: 35 · Q1 2026: 43
13
Moderate
TSS 30–49
2025: 9 · Q1 2026: 4

Severity Distribution — 142 Incidents (14-Month Total)

ASI Threat Severity Score (TSS)

TSS Score Distribution: 2025 vs Q1 2026

2025 Mean: 59.9 · Q1 2026 Mean: 68.4 · Shift: +14%
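The tier cut-offs above can be expressed as a small bucketing helper. A minimal sketch in Python — the function name and the sub-30 "Low" label are illustrative assumptions; the cut-offs and per-period counts come from the table above:

```python
def severity_tier(tss: int) -> str:
    """Map a 0-100 Threat Severity Score to its tier (cut-offs per the table above)."""
    if tss >= 85:
        return "Critical"
    if tss >= 70:
        return "Severe"
    if tss >= 50:
        return "Elevated"
    if tss >= 30:
        return "Moderate"
    return "Low"  # assumption: scores under 30 fall below the reported tiers

# Per-tier incident counts reported for each period: (2025, Q1 2026)
counts = {
    "Critical": (0, 5),
    "Severe":   (8, 38),
    "Elevated": (35, 43),
    "Moderate": (9, 4),
}

total_2025 = sum(a for a, _ in counts.values())  # 52 incidents
total_q1   = sum(b for _, b in counts.values())  # 90 incidents
print(total_2025, total_q1)  # 52 90
```

The per-tier counts reproduce the headline volumes (52 full-year 2025 incidents, 90 in Q1 2026), confirming the distribution table is internally consistent.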

INCIDENT CLASSIFICATION

The Threat Landscape Transformed in 14 Months

The category distribution of AI security threats completely flipped between 2025 and Q1 2026. What was noise became signal; what was signal became dominant.

Top Categories: 2025 vs Q1 2026

Grouped by annual period · ASI 17-Category Classification Taxonomy
AI Fraud 53

Deepfakes, synthetic media, voice cloning, and AI-generated scam content. Grew from 8 incidents in all of 2025 to 45 in Q1 2026 alone — a 463% single-quarter surge. Average TSS: 63.8.

Config Failures 30

Misconfigured AI deployments, inadequate guardrails, and access control failures. Doubled from 10 in 2025 to 20 in Q1 2026. Average TSS: 73.0 — the fifth-highest severity category.

Supply Chain 25

Compromised AI frameworks, poisoned dependencies, and third-party model risks. Tripled from 6 in 2025 to 19 in Q1 2026 — 217% acceleration. Average TSS: 75.3 — highest severity among categories with 10+ incidents.

Agent Compromise 20

Autonomous AI agents executing unauthorized actions, from database deletions to workflow hijacking. Tripled from 5 in 2025 to 15 in Q1 2026. Average TSS: 73.4. A rapidly emerging category as agentic AI enters production.

YEAR-OVER-YEAR SHIFT

One Quarter Changed Everything

The shift from 2025 to Q1 2026 is not incremental evolution — it is structural transformation. Mean severity jumped 14%. The dominant threat category flipped entirely. And for the first time in our database, incidents reached CRITICAL tier.

Metric · 2025 (Full Year) · Q1 2026
Volume · 52 incidents (full year) · 90 incidents (one quarter)
Mean TSS · 59.9 (Elevated tier) · 68.4 (+14% escalation)
Top Threat · Data Exfiltration (12 incidents · 23%) · AI Fraud (45 incidents · 50%)
Critical Events · 0 (zero in full year) · 5 (TSS 85+)
Max Single BEL · $0.3B (largest 2025 event) · $29.0B (97x larger)

SPOTLIGHT ANALYSIS

The Rise of MCP Exploits

The Model Context Protocol (MCP) — an open standard for connecting AI models to external tools and data sources — became a new attack surface in Q1 2026. Eleven incidents represent a category that simply didn't exist before this quarter.

Jan 15, 2026 · MCP Cross-Workflow Context Leakage · TSS 66 · data-exfiltration
Jan 27, 2026 · CoSAI MCP Threat Taxonomy — 40 Threats Identified · TSS 67 · taxonomy publication
Feb 1, 2026 · MCP Privilege Escalation via Over-Delegation · TSS 68 · config-failures
Feb 11, 2026 · MCP Session Hijacking via Stolen Identifiers · TSS 73 · agent-compromise
Mar 1, 2026 · MCP Tool Schema Manipulation · TSS 71 · mcp-exploits
Mar 6, 2026 · ContextCrush — Documentation Becomes Malicious · TSS 60 · prompt-injection
Mar 11, 2026 · n8n Workflow Automation RCE via Expression Evaluation · TSS 81 · supply-chain · agent-compromise
Mar 18, 2026 · OpenClaw Malicious Skill Trap: Hostile Skills Deploy Crypto-Stealing Malware · TSS 67 · agent-compromise · supply-chain
Mar 19, 2026 · MCP DNS Rebinding — Internal Service Exposure · TSS 68 · config-failures
Mar 23, 2026 · Eight Attack Vectors in AWS Bedrock Enabling Enterprise Data Access · TSS 82 · supply-chain · agent-compromise
Mar 26, 2026 · CISA Warns of Active Exploitation of Critical Langflow Vulnerability · TSS 74 · agent-compromise · supply-chain

CRITICAL INCIDENTS

Five Events That Demand Attention

These incidents scored 85 or above on our Threat Severity Score — representing the highest-impact AI security events of Q1 2026. Zero incidents reached this tier across all of 2025.

U.S. Military Uses Anthropic AI for Iran Strikes Hours After Federal Ban
March 7, 2026 · #126 · TSS 90

The U.S. government conducted military air strikes using Anthropic AI systems within hours of the President announcing a federal ban on Anthropic AI tools — highlighting critical governance gaps between AI policy announcements and operational enforcement. The incident exposed systemic failures in AI accountability at the highest levels of government.

BEL $29.0B composite loss exposure
shadow-ai config-failures supply-chain T:22 · A:18 · E:24 · M:26
Industrial-Scale Model Distillation by Chinese AI Labs Targeting Anthropic Claude
March 1, 2026 · #131 · TSS 90

Three AI laboratories — DeepSeek, Moonshot, and MiniMax — conducted systematic distillation attacks generating over 16 million exchanges with Claude to extract and replicate its capabilities. Hundreds of millions in R&D value compromised in one of the largest intellectual property extraction campaigns ever recorded against an AI system.

BEL $9.95B composite loss exposure
model-theft supply-chain T:22 · A:21 · E:23 · M:24
AI-Generated Disinformation Campaign Creates Mass Confusion During Iran Conflict
March 14, 2026 · #161 · TSS 90

A large-scale disinformation campaign using AI-generated fake videos and images depicting false war scenes flooded social media during the initial weeks of the Iran conflict. The synthetic media was sophisticated enough to evade platform detection and drove real-world policy confusion at the international level.

BEL $5.97B composite loss exposure
ai-fraud cascading-failures T:22 · A:24 · E:21 · M:23
Critical Vulnerabilities in LangChain and LangGraph Expose Files, Secrets, and Database Contents
March 27, 2026 · #148 · TSS 87

Three security vulnerabilities were discovered in LangChain and LangGraph — two of the most widely used open-source frameworks for building LLM applications. Successful exploitation could allow attackers to access server files, environment secrets, and database contents across the vast enterprise ecosystem built on these frameworks.

BEL $4.38B composite loss exposure
supply-chain data-exfiltration memory-poisoning T:21 · A:23 · E:22 · M:21
McKinsey's Lilli AI Platform Compromised via RAG Poisoning and Prompt Injection
March 14, 2026 · #166 · TSS 86

McKinsey & Company's internal AI platform Lilli, serving 43,000+ employees, was compromised through RAG knowledge base poisoning and prompt injection. Attackers exploited the retrieval pipeline to inject malicious content into the company's trusted AI advisor — potentially exposing client data across hundreds of active engagements.

BEL $2.06B composite loss exposure
rag-poisoning prompt-injection data-exfiltration T:22 · A:19 · E:24 · M:21

FINANCIAL IMPACT

$54.5 Billion in Quantified Exposure

142 incidents across 14 months, scored against 6 BEL dimensions. Q1 2026 alone accounts for $54.1B of total exposure — driven by three macro-scale events. Market & Valuation Impact is the dominant loss dimension at $30.5B.

$54.5B
Total Composite Loss Exposure · January 2025 – March 2026 · 142 Incidents
$29.0B
U.S. Military / Anthropic AI / Iran Strikes — governance failure at national scale
2026-03-07
$9.95B
Chinese AI Lab Model Distillation Campaign vs Claude — industrial-scale IP extraction
2026-03-01
$5.97B
AI Disinformation / Iran War Mass Confusion — synthetic media drives policy failure
2026-03-14
$4.38B
LangChain / LangGraph Critical Vulnerabilities — enterprise framework exposure
2026-03-27
$2.06B
McKinsey Lilli RAG Poisoning — 43,000-user enterprise AI platform compromised
2026-03-14

BEL Dimension Breakdown — Q1 2026

Six-dimension Breach Economic Loss model · Q1 2026 totals across all 90 incidents

$30.5B
MVI
Market & Valuation Impact — dominant dimension (56% of total)
$13.9B
TPC
Third-Party Cascade
$5.85B
BII
Business Interruption Impact
$1.36B
DRC
Direct Response Cost
$1.31B
LE
Litigation Exposure
$1.30B
RPE
Regulatory Penalty Exposure

SECTOR IMPACT ANALYSIS

No Industry Is Immune

AI security incidents across the 14-month window cut across every sector. Media and entertainment bore the brunt — driven by the deepfake epidemic — while technology platforms and government faced sophisticated, high-severity attacks.

Media & Entertainment
70
49%
Technology & AI Platforms
54
38%
Financial Services
40
28%
Government & Defense
38
27%
Critical Infrastructure
24
17%
Consumer & Retail
13
9%
Education
10
7%

Percentages exceed 100% because incidents frequently impact multiple sectors. The concentration in media and entertainment reflects the deepfake epidemic — 70 of 142 incidents across 14 months involved synthetic media targeting public figures, politicians, or consumer fraud victims. Technology and AI platforms dominate the high-severity end, with 4 of 5 Critical incidents directly involving AI platform vulnerabilities or misuse. Government and defense incidents, while fewer, carry outsized geopolitical consequences — from AI-powered wartime disinformation to military AI governance failures that exposed the gap between AI policy and AI reality.

YEAR-OVER-YEAR GROWTH

Exponential Acceleration Since 2016

The trajectory from sporadic incidents in 2016 to 90 in a single quarter charts a threat landscape in exponential growth. Q1 2026 alone exceeded the entire 2025 annual total by 73%.

173%
Q1 2026 vs full-year 2025
90 incidents in one quarter = 173% of 2025's 52 (+73%)
52 → 90
Full year 2025
vs Q1 2026 alone
+14%
TSS mean increase
59.9 → 68.4
0 → 5
Critical incidents
none in 2025, five in Q1 2026
2016–2021 · total incidents: —
2022–2023 · total incidents: —
2024 · incidents: — · +90% YoY
2025 · incidents: 52 · +174% YoY
Q1 2026 · incidents: 90 · Q1 only

Incident Volume Trajectory (2023 – Q1 2026)

ASI Incident Database · Annual + Q1 2026

At Q1 2026's pace, full-year 2026 is on track for 360+ AI security incidents — nearly 7x the 2025 annual total of 52. If the severity trajectory holds, that implies on the order of 20 Critical-tier events in a single calendar year. The AI threat landscape is not maturing toward stability — it is accelerating toward a new baseline.
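The projection arithmetic is a straight linear annualization of the Q1 run rate — a sketch of the math behind the estimate, not a forecast model:

```python
# Extrapolate Q1 2026 incident volume and Critical counts to a full year,
# assuming the Q1 pace simply holds for all four quarters.
q1_incidents, q1_critical = 90, 5
full_year_2025 = 52

projected_2026 = q1_incidents * 4          # 360 incidents
multiple_vs_2025 = projected_2026 / full_year_2025
projected_critical = q1_critical * 4       # 20 Critical-tier events

print(projected_2026, round(multiple_vs_2025, 1), projected_critical)  # 360 6.9 20
```

The 6.9x multiple is where the "nearly 7x" framing in the text comes from.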

REGULATORY TRIGGER ANALYSIS

EU AI Act Dominates the Compliance Landscape

Our analysts map each incident to the regulatory frameworks it would trigger. The EU AI Act — reaching full applicability in August 2026 — was implicated in nearly 7 out of every 10 Q1 2026 incidents. Regulatory violations now account for 18% of all 142 scored incidents — up 4x from 2025.

Regulatory Framework Triggers — Q1 2026, 90 Incidents

Incidents may trigger multiple frameworks · 25 regulatory violations total (5 in 2025, 20 in Q1 2026)
EU AI Act
62
GDPR
42
NIST AI RMF
31
State AI Laws
30
SOC 2
9
NIST CSF
9
NIST CSF 2.0
8
ISO 42001
7
SEC AI Disclosure
6
UK Online Safety
5
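The headline trigger rates (69% for the EU AI Act, 47% for GDPR) match the chart counts divided by the 90 Q1 2026 incidents — an inferred denominator, since these counts align exactly with the Q1 percentages quoted in the executive summary:

```python
# Trigger counts from the chart above, measured against Q1 2026's 90 incidents.
q1_incidents = 90
eu_ai_act_rate = round(62 / q1_incidents * 100)   # EU AI Act: 62 triggers
gdpr_rate = round(42 / q1_incidents * 100)        # GDPR: 42 triggers
print(eu_ai_act_rate, gdpr_rate)  # 69 47
```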

METHODOLOGY & DOWNLOAD

How We Score AI Security Incidents

Every incident in this report is evaluated using our proprietary Threat Severity Score (TSS) — a four-vector composite score designed to capture the full dimensions of AI security risk. 142 incidents scored across 14 months.

Vector 1

Technical Sophistication

Evaluates attack complexity, exploit novelty, and the technical skill required. Zero-day exploits and novel attack chains score highest. Weighted 0–25.

Vector 2

Amplification Potential

Measures how easily the incident could be replicated, scaled, or weaponized. Publicly available tools and open-source exploits increase this score. Weighted 0–25.

Vector 3

Enterprise Impact

Assesses direct organizational damage — data loss, financial impact, operational disruption, and reputational harm. Scores reflect both immediate and downstream consequences. Weighted 0–25.

Vector 4

Market Disruption

Captures broader ecosystem effects — regulatory catalysts, market sentiment shifts, insurance implications, and cross-industry ripple effects. Weighted 0–25.
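The four vectors sum to the 0-100 composite. A minimal sketch — the function name and range validation are illustrative assumptions; the example uses the published vector scores for incident #131 from the Critical Incidents section (T:22 · A:21 · E:23 · M:24):

```python
def tss(technical: int, amplification: int, enterprise: int, market: int) -> int:
    """Composite Threat Severity Score: four vectors, each weighted 0-25."""
    for v in (technical, amplification, enterprise, market):
        if not 0 <= v <= 25:
            raise ValueError("each vector is weighted 0-25")
    return technical + amplification + enterprise + market

# Incident #131 (Claude distillation campaign): 22 + 21 + 23 + 24
print(tss(22, 21, 23, 24))  # 90
```

The result reproduces #131's TSS of 90, placing it in the Critical tier (TSS ≥ 85).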

AIRS Domain Exposure Heatmap

AI Insurance Readiness Score (AIRS) — 9 Risk Domains
Domain · Critical · Severe · Elevated · Moderate
Model Risk · 3 · 23 · 30 · 2
Data & AI Assets · 5 · 24 · 24 · 2
Malicious Intent · 3 · 20 · 12 · 2
Regulatory · 2 · 12 · 21 · 2
Data Privacy · 2 · 9 · 3 · 0
Third-Party Risk · 0 · 5 · 8 · 1
Operational Risk · 0 · 5 · 4 · 0
Data Governance · 0 · 4 · 2 · 0
Contractual · 0 · 1 · 0 · 0

Each incident is mapped to the AIRS risk domains it would trigger for enterprise insurance underwriting. Model Risk (58 triggers) and Data & AI Assets (55) are the most frequently exposed domains — reflecting a threat landscape dominated by model-level attacks and data integrity failures across 142 scored incidents.
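The per-domain totals cited here (58 for Model Risk, 55 for Data & AI Assets) are row sums over the heatmap's four severity bands. A quick check of that arithmetic using the four most-exposed domains:

```python
# Trigger counts per AIRS domain across the four severity bands
# (Critical, Severe, Elevated, Moderate), as shown in the heatmap.
airs = {
    "Model Risk":       (3, 23, 30, 2),
    "Data & AI Assets": (5, 24, 24, 2),
    "Malicious Intent": (3, 20, 12, 2),
    "Regulatory":       (2, 12, 21, 2),
}
totals = {domain: sum(bands) for domain, bands in airs.items()}
print(totals["Model Risk"], totals["Data & AI Assets"])  # 58 55
```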

The AI Security Controls Gap

The Verizon DBIR maps its findings to CIS Critical Security Controls — a prescriptive, prioritized set of defensive actions. No equivalent exists for AI security. The frameworks that do exist serve important but different purposes:

Framework · What It Does · What It Doesn't Do
NIST AI RMF · Governance & risk management process · No prescriptive controls
EU AI Act · Risk-based legal compliance · No technical implementation guidance
ISO 42001 · AI management system certification · High-level; not security-specific
MITRE ATLAS · Adversarial attack taxonomy (14 tactics, 66 techniques) · Attack mapping, not defensive controls
OWASP LLM Top 10 · LLM vulnerability classification · Narrow scope; no prioritized actions

No prescriptive, prioritized, implementation-ready AI security controls framework exists today. Our incident intelligence — 142 scored events across 14 months, 17-category classification taxonomy, and 9-domain AIRS model — is laying the empirical groundwork for one. The data in this report represents the kind of real-world evidence base from which such controls will eventually be derived.

Download the Full Report

Get the complete AI Security Breach Report as a PDF — free, forever. No email required.

Download PDF
Explore the full incident database