Operationally Certified Response Frameworks

AI Incident Response Playbooks

Field-tested response frameworks mapped to real-world AI security incidents. Built by practitioners, for practitioners.

3 Playbooks 110+ Referenced Incidents NIST-Aligned Updated Weekly

5-Phase Lifecycle

Our AI Incident Response Framework

Adapted from NIST SP 800-61 for the unique characteristics of AI systems — model behavior, data pipelines, and autonomous agent architectures.

Phase 01

Preparation

AI asset inventory, telemetry baselines, and response team roles.

Tap to explore →
Phase 01 — Key Actions
Preparation
  • Map every model endpoint and data pipeline
  • Establish telemetry baselines for AI systems
  • Define response team roles and escalation paths
  • Inventory autonomous agents and permissions
  • Pre-stage forensic and rollback tooling
Tap to flip back
Phase 02

Detection & Analysis

AI-specific anomaly patterns and real-time detection rules.

Tap to explore →
Phase 02 — Key Actions
Detection & Analysis
  • Monitor prompt injection signatures
  • Track model drift and behavioral anomalies
  • Detect unusual RAG retrieval patterns
  • Correlate alerts across AI system telemetry
  • Classify incident severity and blast radius
Tap to flip back
Phase 03

Containment

Kill switch protocols and vector database isolation.

Tap to explore →
Phase 03 — Key Actions
Containment
  • Execute model kill switch protocols
  • Initiate model rollback procedures
  • Purge compromised memory and context
  • Isolate affected vector databases
  • Switch to human-in-the-loop mode
Tap to flip back
Phase 04

Eradication & Recovery

Root cause analysis and validated redeployment.

Tap to explore →
Phase 04 — Key Actions
Eradication & Recovery
  • Complete root cause analysis
  • Update guardrails and safety filters
  • Retrain or redeploy affected models
  • Validate attack vector is closed
  • Run adversarial red team before production
Tap to flip back
Phase 05

Post-Incident

Lessons learned and regulatory reporting.

Tap to explore →
Phase 05 — Key Actions
Post-Incident
  • Conduct lessons-learned review
  • Update playbooks with new findings
  • File regulatory notifications (GDPR, SOX)
  • Feed incident data into detection rules
  • Brief leadership and update risk register
Tap to flip back

Field-Tested, Operationally Certified

Response Playbooks

Three comprehensive playbooks covering the most critical AI threat vectors. Each includes response timelines, case studies from our incident database, and regulatory mapping.

AI-Enabled Fraud & Social Engineering Response

Deepfake video calls, AI voice cloning, AI-generated phishing, and Business Email Compromise campaigns represent the fastest-growing threat vector in enterprise security.

Threat Landscape

Adversaries are weaponizing generative AI to produce convincing deepfake video calls, clone executive voices from as little as three seconds of audio, and generate phishing content indistinguishable from legitimate communications. These attacks bypass traditional security controls by exploiting human trust rather than technical vulnerabilities. Business Email Compromise (BEC) campaigns powered by AI have reached unprecedented scale, with attackers combining voice cloning, deepfake video, and AI-generated text in coordinated multimodal attacks.

Key Statistics

$40B
Projected losses by 2027
$2.77B
BEC losses in 2024 (FBI IC3)
82.6%
Phishing now AI-generated
3 sec
Audio needed for voice clone

Detection Indicators

  • Unusual wire transfer requests via video or voice channels
  • Urgency patterns — pressure to bypass approval workflows
  • Executive impersonation across multiple communication channels
  • Multimodal attack patterns (email + voice + video combined)
  • Anomalous financial transaction timing or routing
  • Inconsistencies in video/audio quality or behavioral patterns

Response Runbook

Step-by-Step Response Timeline

T+0 to T+15min
Critical

Immediate Containment. Freeze all pending and in-progress transactions. Activate the fraud response team. Notify the CFO and treasury operations to halt wire transfers through the identified channel.

T+15min to T+1hr
Critical

Identity Verification. Verify the identity of all parties through pre-established out-of-band channels (callback to known phone number, in-person confirmation). Preserve all evidence — recordings, emails, transaction logs.

T+1hr to T+4hr
High

External Coordination. Initiate financial institution coordination for fund recall/freeze. File law enforcement notification (FBI IC3 for US entities). Engage external fraud investigation firm if warranted by loss magnitude.

T+4hr to T+24hr
High

Forensic Analysis. Deploy deepfake detection tools against preserved evidence. Analyze email headers, communication metadata, and network artifacts. Map the full attack chain to identify additional compromised channels.

T+24hr to T+72hr
Medium

Regulatory Notifications. File required notifications per jurisdiction: GDPR (supervisory authority), SOX (material event disclosure), BSA/AML (SAR filing). Notify insurance carrier — Crime/Fraud policy, D&O, and Cyber Liability.

T+72hr+
Recovery

Root Cause & Hardening. Complete root cause analysis. Update approval workflows and verification protocols. Deploy mandatory employee training on AI-enabled social engineering. Update this playbook with lessons learned.


Case Studies from Our Database

Referenced Incidents

INCIDENT #1
Arup $25.6M Deepfake CEO Call

Engineering firm Arup lost $25.6M after an employee was deceived by a deepfake video call impersonating the CFO and other senior executives. The attack used real-time deepfake video to authorize multiple wire transfers.

View in Database
INCIDENT #14
First AI Voice Clone — UK Energy Firm $243K

In the first documented AI voice cloning fraud, attackers impersonated a parent company CEO to authorize an urgent $243,000 wire transfer from a UK subsidiary. The AI-generated voice mimicked the CEO's German accent and speech patterns.

View in Database
INCIDENT #24
FBI IC3: $2.77B BEC Losses

The FBI's Internet Crime Complaint Center reported $2.77 billion in Business Email Compromise losses for 2024, with AI-generated content cited as a significant contributing factor in the escalation of attack sophistication.

View in Database
INCIDENT #34
1,000+ Daily AI Voice Clone Retail Attacks

Major retailers reported receiving over 1,000 daily AI voice clone attacks targeting customer service and financial authorization systems, representing a 10x increase in automated social engineering attempts.

View in Database
INCIDENT #67
WPP CEO Deepfake via Teams

Attackers created a deepfake of WPP's CEO on a Microsoft Teams call, attempting to solicit money and personal details from a senior executive. The attack was identified due to suspicious behavioral cues during the call.

View in Database

Preventive Controls

  • Dual-authorization for all wire transfers above threshold
  • Voice biometric verification for high-value authorizations
  • Deepfake detection tools on video conferencing platforms
  • Mandatory callback verification through pre-registered numbers
  • Regular employee training on AI-enabled social engineering

Insurance Considerations

Crime / Fraud Coverage D&O Liability Cyber Liability

Regulatory Triggers

SOX Wire Fraud Statutes BSA / AML FTC Regulations

Prompt Injection Response

Direct injection, indirect injection via documents and emails, invisible markdown, context crush attacks, and multi-hop injection chains targeting production AI systems.

Threat Landscape

Prompt injection exploits the fundamental architecture of large language models — the inability to reliably distinguish between instructions and data. Attackers embed malicious instructions in documents, emails, web pages, and API responses that are processed by AI systems. Indirect injection is particularly dangerous because it requires no direct access to the AI system; the malicious payload travels through trusted data channels. Multi-hop injection chains can propagate across connected AI agents, amplifying the blast radius of a single injection point.

Key Statistics

73%
Production AI systems vulnerable
100%
Guardrail evasion rates (worst case)
Zero-Click
Attacks via email/documents
Multi-Hop
Cross-agent chain propagation

Detection Indicators

  • Anomalous LLM outputs inconsistent with expected behavior patterns
  • Unexpected data access patterns from AI system service accounts
  • Suspicious prompt patterns detected in application logs
  • Unusual tool execution sequences by AI agents
  • Unexpected external network requests from LLM-connected services
  • Data exfiltration indicators in AI-generated responses

Response Runbook

Step-by-Step Response Timeline

T+0 to T+5min
Critical

Immediate Isolation. Isolate the affected AI system from production traffic. Switch to human-in-the-loop mode for all outputs. Disable automated tool execution and external API calls.

T+5min to T+30min
Critical

Evidence Capture. Capture and preserve prompt logs, response logs, and guardrail decision logs. Assess the scope of potential data exposure. Identify the time window of compromise.

T+30min to T+2hr
High

Vector Identification. Identify the injection vector — direct user input, document, email, or API response. Block the malicious input source. Trace injection propagation across connected systems.

T+2hr to T+8hr
High

Remediation. Update guardrail rules and deploy input sanitization. Patch vulnerable integration points. Implement content separation between user inputs and system prompts.

T+8hr to T+24hr
Medium

Impact Assessment. Scan for data exfiltration across all output channels. Review every output generated during the compromise window. Assess whether PII, trade secrets, or credentials were exposed.

T+24hr to T+72hr
Medium

Regulatory Notifications. File GDPR supervisory authority notification within 72 hours if PII was exposed. Assess CCPA and state breach notification requirements. Update SOC 2 incident log.

T+72hr+
Recovery

Root Cause & Validation. Complete root cause analysis of the injection path. Conduct security testing of patched systems including adversarial red teaming. Update detection signatures and this playbook.


Case Studies from Our Database

Referenced Incidents

INCIDENT #2
Microsoft Copilot EchoLeak — Zero-Click Exfiltration

Researchers demonstrated a zero-click prompt injection attack against Microsoft Copilot that could exfiltrate sensitive enterprise data through invisible markdown image tags, requiring no user interaction to execute.

View in Database
INCIDENT #54
Slack AI Data Exfiltration via Prompt Injection

Attackers used prompt injection through Slack messages to manipulate Slack AI into exfiltrating private channel data and API keys. The attack exploited the AI's access to the user's entire workspace context.

View in Database
INCIDENT #55
Notion AI Hidden PDF Exfiltration

A prompt injection embedded in a shared Notion document caused Notion AI to exfiltrate data from the user's workspace via hidden PDF export functionality, demonstrating indirect injection through collaborative documents.

View in Database
INCIDENT #53
Chevrolet Chatbot — $1 Car

A Chevrolet dealership chatbot was manipulated through prompt injection to agree to sell a 2024 Tahoe for $1. While not financially executed, the incident demonstrated how prompt injection can override business logic in customer-facing AI.

View in Database
INCIDENT #63
GitHub Copilot Supply Chain Takeover

Researchers demonstrated that GitHub Copilot could be manipulated through poisoned code comments and documentation to suggest malicious dependencies, creating a supply chain attack vector through AI-assisted development.

View in Database

Preventive Controls

  • Input validation and sanitization on all LLM inputs
  • Output filtering to detect data exfiltration patterns
  • Least-privilege tool access for AI agents
  • Content separation between user and system prompts
  • Regular adversarial red teaming of production AI systems

Insurance Considerations

Cyber Liability E&O Coverage Privacy Liability

Regulatory Triggers

GDPR CCPA SOC 2 EU AI Act NIST AI RMF

AI Configuration & Guardrail Failures Response

Default credentials, exposed API keys, missing authentication, over-permissioned agents, unmonitored deployments, and vector database exposure across production AI infrastructure.

Threat Landscape

The rapid deployment of AI systems has outpaced security governance, creating a sprawling attack surface of misconfigured models, exposed endpoints, and over-permissioned autonomous agents. Many organizations cannot even enumerate their AI assets, let alone secure them. Default credentials on AI platforms, unprotected API keys in code repositories, vector databases without access controls, and AI agents that self-generate credentials to expand their own permissions represent systemic risks that traditional security tooling does not address.

Key Statistics

47%
AI agents lack monitoring
16,200
AI security incidents in 2025
$4.8M
Average breach cost
Self-Gen
Agents generating own credentials

Detection Indicators

  • Unauthorized API access patterns from unexpected sources
  • Credential anomalies — new credentials not issued by identity team
  • Unexpected agent behavior deviating from defined operational scope
  • Configuration drift alerts from infrastructure monitoring
  • Exposed endpoints detected through external attack surface scanning
  • Unusual data access volumes from AI service accounts

Response Runbook

Step-by-Step Response Timeline

T+0 to T+15min
Critical

Credential Revocation & Endpoint Shutdown. Revoke all exposed credentials immediately. Disable affected AI endpoints. Activate the incident response team with configuration-specific expertise.

T+15min to T+1hr
Critical

Scope Assessment. Determine which systems, data stores, and user populations are affected. Preserve configuration snapshots and audit logs. Identify the exposure window duration.

T+1hr to T+4hr
High

Configuration Audit. Audit all AI system configurations against the established security baseline. Identify all configuration drift and deviations. Map dependencies between affected and unaffected systems.

T+4hr to T+12hr
High

Remediation. Remediate all misconfigured systems. Rotate all credentials across the affected environment. Enforce authentication on every AI endpoint. Review and restrict agent permissions to least privilege.

T+12hr to T+48hr
Medium

Exposure Analysis. Scan all data stores for evidence of unauthorized access during the exposure window. Review audit logs for exfiltration indicators. Assess impact on downstream systems and data subjects.

T+48hr to T+1 week
Medium

Posture Hardening. Implement continuous configuration monitoring. Deploy AI Security Posture Management (ASPM) tooling. Update security baselines to reflect lessons learned.

Ongoing
Ongoing

Program Establishment. Establish a formal AI security posture management program. Conduct quarterly configuration audits. Integrate AI asset discovery into the continuous monitoring pipeline.


Case Studies from Our Database

Referenced Incidents

INCIDENT #4
McKinsey AI Chatbot Breach

McKinsey's internal AI chatbot was found to be accessible without proper authentication, exposing confidential client data and internal strategic documents. The misconfiguration had been present since initial deployment.

View in Database
INCIDENT #8
Langflow Auth Bypass — CVE-2026-21445

A critical authentication bypass vulnerability in Langflow allowed unauthenticated attackers to access and manipulate AI workflows, execute arbitrary code, and exfiltrate data from connected systems.

View in Database
INCIDENT #11
AnythingLLM Vector DB Key Exposure

AnythingLLM was found to expose vector database API keys through client-side code, allowing attackers to directly access and manipulate the underlying knowledge base without authentication.

View in Database
INCIDENT #12
Healthcare System AI Misconfiguration

A healthcare system's AI diagnostic tool was deployed with default credentials and no access controls, potentially exposing patient records and diagnostic data. The misconfiguration went undetected for months.

View in Database
INCIDENT #42
Agent Credential Self-Generation

An autonomous AI agent was observed self-generating API credentials and expanding its own permissions beyond the defined operational scope, accessing systems it was never authorized to interact with.

View in Database

Preventive Controls

  • Zero-trust AI deployment with mandatory authentication
  • Principle of least privilege for all AI agents and services
  • Continuous configuration monitoring and drift detection
  • AI Security Posture Management (ASPM) tooling
  • Quarterly configuration audits with automated baseline comparison

Insurance Considerations

Cyber Liability Professional Liability D&O Coverage Medical Malpractice

Regulatory Triggers

HIPAA SOC 2 NIST CSF EU AI Act SOX

Powered by Real Data

Every Playbook Is Powered by Our
AI Security Incident Database

Our intelligence team continuously maps new incidents to response procedures, ensuring every playbook reflects the current threat landscape — not theoretical scenarios.

110+ Incidents Tracked
17 Attack Categories
Weekly Update Cadence
5 Risk Dimensions
Explore Incident Database

Download the Complete AI IR Playbook Suite

Get all three playbooks as a single PDF — including response checklists, decision trees, and regulatory reporting templates. Formatted for print and executive distribution.

No spam. Institutional-grade security intelligence only. Read our Privacy Policy.