Top Market Developments
MLflow Auth Bypass — Unauthenticated Job Control in the AI Pipeline
CVE-2026-0545 (NVD published April 3, 2026). MLflow's FastAPI job endpoints under /ajax-api/3.0/jobs/* lack authentication even when the basic-auth app is enabled, allowing unauthenticated clients to submit, read, search, and cancel jobs — and potentially to achieve remote code execution if privileged job functions are allowlisted and job execution is enabled.1 This is not a model vulnerability — it is an infrastructure vulnerability in the most widely deployed open-source ML experiment tracking platform. The attack surface is the pipeline itself: CI jobs, scheduled retrains, automated model registrations. A compromised MLflow instance gives an attacker control over which models are trained, what data they see, and where they deploy. SentinelOne's advisory emphasizes the operational controls that will become table stakes for MLOps vendor due diligence: disable job execution by default, restrict API access, and enforce authentication at the reverse proxy or WAF layer.2 The blast radius distinguishes this class of vulnerability from traditional application security findings: a single unauthenticated API call does not compromise one service — it contaminates every model the pipeline has ever touched or will touch.
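For teams triaging exposure, the check reduces to one question: does an unauthenticated request to the job API succeed? The sketch below illustrates that triage logic; the endpoint path and the status-to-verdict mapping are assumptions for illustration, not part of the published advisory.

```python
# Hypothetical audit helper for the exposure class behind CVE-2026-0545:
# probe an MLflow server's job API without credentials and classify the result.
# JOBS_ENDPOINT and the verdict mapping are illustrative assumptions.
import urllib.error
import urllib.request

JOBS_ENDPOINT = "/ajax-api/3.0/jobs/list"  # assumed path for illustration


def classify_probe(status_code: int) -> str:
    """Map the HTTP status of an unauthenticated request to a risk verdict."""
    if status_code in (401, 403):
        return "protected"          # something in the path enforces auth
    if status_code == 404:
        return "endpoint-disabled"  # job API likely not exposed at all
    if 200 <= status_code < 300:
        return "EXPOSED"            # unauthenticated job control is possible
    return "inconclusive"


def probe(base_url: str) -> str:
    """Issue the unauthenticated probe against a server you are authorized to test."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + JOBS_ENDPOINT, timeout=5) as resp:
            return classify_probe(resp.status)
    except urllib.error.HTTPError as e:
        return classify_probe(e.code)
```

Note that a "protected" verdict may reflect a reverse proxy or WAF rather than MLflow itself, which is exactly the compensating-control layer the advisory recommends.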
Docker Model Runner SSRF — Every Model Pull Becomes a Lateral Movement Primitive
CVE-2026-33990 (NVD published April 1, 2026). Docker Model Runner follows an attacker-controlled realm URL from an OCI registry's WWW-Authenticate header without validating scheme, host, or IP range during model pulls, allowing server-side request forgery and internal response reflection. Patched in v1.1.25. Enhanced Container Isolation (Docker Desktop) provides a mitigation barrier in some configurations.3 The significance extends beyond the CVE: "pulling a model" is no longer a benign operation. In containerized ML pipelines, model downloads happen automatically — triggered by CI/CD, orchestration layers, or scheduled retraining. An attacker who controls or compromises a model registry can turn every model pull into an SSRF probe against internal infrastructure. The attack converts a routine MLOps operation into a lateral movement primitive. Enterprises relying on air-gapped model registries for supply chain isolation must now validate their registry authentication flows against this attack class — the assumption that internal registries are safe by network proximity no longer holds.
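The missing validation is straightforward to picture. A minimal sketch of the kind of realm-URL check the pre-v1.1.25 client lacked might look like this; the specific allowlist policy is an assumption for illustration, not Docker's actual patch.

```python
# Sketch of realm-URL validation for an OCI auth flow: before following the
# WWW-Authenticate realm, check the scheme and reject literal IPs in private,
# loopback, or link-local ranges. Policy choices here are illustrative
# assumptions, not the actual fix shipped in Docker Model Runner v1.1.25.
import ipaddress
from urllib.parse import urlparse


def realm_is_safe(realm_url: str) -> bool:
    """Return True only if the realm URL passes scheme and IP-range checks."""
    parsed = urlparse(realm_url)
    if parsed.scheme != "https":        # plaintext or exotic schemes: reject
        return False
    host = parsed.hostname
    if not host:
        return False
    try:
        addr = ipaddress.ip_address(host)  # realm uses a literal IP address
    except ValueError:
        # A hostname: production code would resolve it and re-check the
        # resulting addresses to defend against DNS rebinding.
        return True
    # Loopback, link-local (cloud metadata), and RFC 1918 ranges are the
    # classic SSRF targets inside a corporate network.
    return not (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved)
```

The link-local check matters most in cloud deployments, where 169.254.169.254 serves instance metadata and credentials.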
0.001% Poison, 7–11% Harmful Output — The Supply Chain Arithmetic That Changes Underwriting
A Nature Medicine study (cited by Lakera, April 16, 2026) demonstrated that replacing just 0.001% of training tokens with misinformation increased harmful completions by 7–11% — and standard benchmarks failed to detect the degradation. Only a knowledge-graph filter caught it.4 SQ Magazine's comprehensive March 2026 data poisoning analysis extends the picture: as few as 250 malicious documents — 0.00016% of training data — can successfully backdoor models regardless of size. Poisoning 3% of training data yields 12–41% attack success rates in code-generation models. Pre-training backdoors survive 90% of fine-tuning stages intact. The MCPTox benchmark evaluated 1,300+ malicious cases across 45 real MCP servers, revealing up to 72% attack success rates on agent-based systems.5 Open-source LLMs show 30–50% higher susceptibility due to transparent training pipelines, but proprietary models still exhibit over 40% vulnerability under targeted poisoning. Sixty percent or more of LLM training data comes from open web crawls such as Common Crawl, and 15–25% of scraped datasets contain low-quality or unverifiable content. The underwriting implication is structural: model behavior cannot be validated through output-layer testing alone when the contamination is measured in fractions of a percent of input.
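The arithmetic behind those percentages is worth making explicit, because it shows why output-layer testing fails. The corpus sizes below are illustrative assumptions used to back out scale, not figures from the cited studies.

```python
# Back-of-envelope check of the poisoning ratios quoted above.
# Corpus sizes are illustrative assumptions, not data from the studies.

def implied_corpus_size(poison_docs: int, poison_fraction: float) -> float:
    """Corpus size implied by a poison count at a given contamination fraction."""
    return poison_docs / poison_fraction

# 250 malicious documents at 0.00016% contamination implies a corpus of
# roughly 156 million documents -- far beyond manual review.
corpus_docs = implied_corpus_size(250, 0.00016 / 100)
assert round(corpus_docs) == 156_250_000

# 0.001% of a hypothetical 1-trillion-token corpus is still 10 million
# poisoned tokens: too many to audit by hand, too few to move benchmarks.
poison_tokens = 1_000_000_000_000 * (0.001 / 100)
assert round(poison_tokens) == 10_000_000
```

That asymmetry is the underwriting point: the contamination is large in absolute terms but statistically invisible at the output layer, which is why only the knowledge-graph filter caught it.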
Five Supply Chain Attacks in One Month — March 2026 Sets a New Baseline
Zscaler ThreatLabz documented five major software supply chain attacks in March 2026 alone, including the Axios NPM package compromise attributed to a North Korean threat actor.6 Verizon's 2024 DBIR introduced a new metric to track supply chain interconnection influence, finding it present in 15% of breaches — up from 9% the prior year — representing one of the sharpest single-year increases in the report's history.8 Black Kite predicts that by end of 2026, as many as 50% of today's AI vendors will go out of business, leaving customers stranded with unsupported and deeply embedded technologies — creating a new form of supply chain risk: vendor failure as dependency collapse.7 The convergence of attack frequency, regulatory attention, and vendor concentration risk is redefining supply chain exposure from a periodic audit concern into a continuous monitoring requirement. For enterprises with AI systems built on open-source ML frameworks, the March 2026 surge is not a trend line — it is the new operational baseline.
Vendor Spotlight
NetRise
Most software supply chain security tools analyze source code — what developers write. NetRise analyzes compiled code — what actually executes in production. The distinction is decisive: source-code-only scanners miss vulnerabilities introduced at the build layer, compressed into firmware, or buried in third-party binaries that organizations never had source access to in the first place. NetRise's core platform generates accurate Software Bills of Materials (SBOMs) from binaries, firmware, and containers, then maps non-code risks — hidden dependencies, cryptographic artifacts, misconfigurations, and exposed secrets — across the full dependency graph. The March 2026 launch of NetRise Provenance extends this capability to the maintainer layer: the platform surfaces who maintains open-source components and how risk propagates from individual contributors through the entire dependency tree.9 A compromised maintainer account at a foundational open-source project — the scenario behind the XZ Utils attack class — becomes visible as a risk propagation event across every downstream dependency. The $10M Series A funding (April 2025) was explicitly directed at accelerating this software supply chain visibility and risk management mission, with enterprise and critical infrastructure clients as the primary market.10
Why It Matters
As AI systems incorporate more open-source ML frameworks, model registries, and inference runtimes, the dependency graph extends far beyond application code into the supply chain of what actually runs the model. NetRise's approach — analyzing what executes rather than what source code says should execute — maps directly to the provenance controls that underwriters will require as AI supply chain exposure matures: can you demonstrate who built, signed, and maintained every component in your AI pipeline? The Verizon DBIR's 15% supply chain breach figure and the March 2026 attack surge make this question non-optional for insurance underwriting. A policy that cannot answer the provenance question is a policy carrying undisclosed supply chain exposure — and the frequency data now exists to price it.
Deep Dive: The AI Supply Chain Attack Surface
15% of breaches
Show supply chain interconnection influence — up from 9% the prior year (Verizon DBIR 2024)8
250 documents
As few as 250 malicious documents can backdoor any model regardless of size — 0.00016% of training data (SQ Magazine)5
The AI supply chain attack surface operates at three distinct layers that traditional software supply chain security was never designed to address. The first layer is the code and infrastructure layer — the ML frameworks, model registries, inference runtimes, and orchestration tools that move models from training to production. CVE-2026-0545 (MLflow) and CVE-2026-33990 (Docker Model Runner) demonstrate that this layer is vulnerable to the same classes of attacks as traditional software, but with higher blast radius: a compromised pipeline contaminates every model it touches, not just the application it hosts.1,3 The second layer is the data and model layer. Wiz's March 2026 guide on AI Bills of Materials (AI-BOMs) describes AI systems as requiring visibility into training data, model provenance, inference-time data sources, and the full dependency tree.11 CycloneDX now supports ML-BOMs. SLSA provenance levels are extending to model artifacts.12 But adoption is early: fewer than 5% of enterprises maintain comprehensive AI-BOMs. The third layer is the tool and agent layer — the MCP servers, plugins, and third-party integrations that LLMs and agents invoke at runtime. The MCPTox benchmark, reported by Lakera, demonstrates up to 72% attack success rates across 45 real MCP servers.4 This is supply chain risk at inference time, not just at build time. The global supply chain security market reached $2.95B in 2026, reflecting the enterprise recognition that this is not a niche concern — it is the infrastructure risk of the AI era.16 Black Kite's prediction that 50% of AI vendors will fail by end of 2026 adds a fourth vector that static SBOM tooling cannot address: vendor concentration risk, where a customer's entire AI pipeline depends on a vendor whose business continuity is itself uncertain.7
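To make the second layer concrete: CycloneDX 1.5 added a machine-learning-model component type, so a minimal AI-BOM entry can be sketched in that JSON shape. The names, version, and digest placeholder below are illustrative assumptions, not a real inventory.

```python
# A minimal AI-BOM entry in the CycloneDX JSON shape. CycloneDX 1.5 added
# "machine-learning-model" and "data" component types; the component names
# and values here are hypothetical placeholders for illustration.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "fraud-scoring-model",        # hypothetical model name
            "version": "2026.04.1",
            "hashes": [{"alg": "SHA-256", "content": "<model-artifact-digest>"}],
        },
        {
            # The data layer travels in the same BOM as the model it trained,
            # which is what makes poisoning exposure auditable after the fact.
            "type": "data",
            "name": "transactions-training-set",  # hypothetical dataset name
        },
    ],
}
```

The point of the structure is traceability: a BOM that records both the model digest and its training-data components is what lets an enterprise answer, after a registry compromise, which deployed models were built from which inputs.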
Platform Landscape
Enterprise Buyer Signal
15% of breaches, up from 9%
Supply chain interconnection influence found in breaches — one of the sharpest single-year increases in the Verizon DBIR's supply chain tracking history
Verizon Data Breach Investigations Report 2024
5 attacks
Major software supply chain attacks documented in March 2026 alone, including an NPM compromise attributed to North Korea (Zscaler ThreatLabz)6
50%
Of AI vendors predicted to fail by end of 2026, creating vendor concentration and dependency collapse risk for customers (Black Kite)7
60%+
Of LLM training data sourced from open web crawls; 15–25% of scraped datasets contain low-quality or unverifiable content (SQ Magazine)5
30–50%
Higher poisoning susceptibility in open-source models versus proprietary — but proprietary models still show over 40% vulnerability under targeted attacks (SQ Magazine)5
The enterprise buyer signal is bifurcating along a regulatory axis. On one side, NIST released a concept paper on AI agent identity and authorization (comment deadline April 2026), explicitly addressing provenance and data flow tracking — signaling that supply chain provenance requirements are moving toward compliance mandates.15 Cloudsmith frames 2026 as the year of the shift from static SBOMs to governance-era supply chain security, tying integrity controls directly to AI artifacts.13 The Coalition for Secure AI (CoSAI) argues for executive-grade minimum controls including data provenance tracking and cryptographic model signing.14 On the other side, adoption is lagging: enterprises are deploying AI pipelines faster than they are securing the supply chains feeding them. For underwriters, the AI supply chain posture of an insured — artifact provenance, model signing, dependency auditing, vendor concentration risk, poisoning detection maturity — is becoming as material as patch management cadence was a decade ago. An enterprise that cannot produce an AI-BOM, cannot demonstrate model signing, and cannot identify its dependency tree is carrying undisclosed supply chain exposure that the Verizon DBIR data now allows actuaries to price.
New Vendor Watchlist
Cloudsmith
Supply chain governance platform purpose-built for the shift from static SBOMs to governance-era security. Cloudsmith's 2026 framework operationalizes "sign everything" — SLSA-style cryptographic signing applied to software and AI artifacts alike — and bridges DevSecOps tooling with AI artifact inventory. The platform frames the current moment as a governance inflection: organizations that treat provenance and signing as checkbox compliance will lag organizations that instrument these controls into CI/CD pipelines and make them continuous. For enterprises building AI systems on top of open-source ML frameworks, Cloudsmith provides the artifact governance layer that makes supply chain auditing possible at scale.13
Lakera
AI security guardrails with the deepest published quantitative signal on data poisoning risk. Lakera's April 2026 write-up frames poisoning as a multi-layer problem spanning pre-training, fine-tuning, retrieval, and tool layers — not a single training-time event. The MCPTox benchmark, to which Lakera contributed, evaluated 1,300+ malicious cases across 45 real MCP servers and found up to 72% attack success rates on agent-based systems. The 0.001% poison → 7–11% harmful output finding provides the kind of concrete quantitative signal that actuarial models can incorporate. For underwriters building AI-specific risk frameworks, Lakera's research output is among the most citable in the field.4
Endor Labs
Artifact signing for SLSA provenance — low-code and no-code cryptographic signing for any software artifact, enabling organizations to achieve SLSA Level 3 certification. Endor Labs is built for the provenance-verification use case that AI supply chains will require at scale: not just verifying that a model came from a trusted registry, but cryptographically attesting to the full build chain from training data to deployed artifact. As SLSA provenance levels extend to model artifacts and regulators begin requiring demonstrable provenance, Endor Labs is positioned at the verification layer that makes compliance possible without rebuilding existing CI/CD pipelines.17
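As a reference point for what that verification layer actually checks, a SLSA-style provenance attestation can be sketched in the in-toto Statement shape that SLSA v1.0 builds on. The digests, parameter names, and builder ID below are hypothetical placeholders; this illustrates the structure a verifier inspects, not any vendor's actual output.

```python
# Sketch of a SLSA-style provenance attestation in the in-toto Statement
# shape. All concrete values (digest, dataset URI, builder ID) are
# hypothetical placeholders for illustration.
provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        # The deployed artifact, bound by digest to this attestation.
        {"name": "model.safetensors", "digest": {"sha256": "<artifact-digest>"}},
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            # Recording training inputs here is an assumed extension of the
            # pattern to ML artifacts, per the AI-BOM discussion above.
            "externalParameters": {"trainingData": "<dataset-uri>"},
        },
        "runDetails": {
            "builder": {"id": "https://builder.example/ci"},  # hypothetical builder
        },
    },
}
```

Verification runs in reverse: resolve the deployed artifact's digest, find the statement whose subject matches it, then check that the builder identity and build inputs satisfy policy — the "who built, signed, and maintained" question underwriters are starting to ask.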
Wiz
Cloud security platform with an emerging AI-BOM framework that defines the structure needed to inventory AI systems comprehensively. Wiz's March 2026 guide decomposes the AI Bill of Materials into four layers: the data layer (training data and inference-time data sources), the model layer (foundation models, fine-tuned variants, and internally developed models), the dependency layer (ML frameworks, SDKs, and packages), and the infrastructure layer (compute, storage, and networking). This four-layer taxonomy provides the visibility framework that makes supply chain auditing operationally possible — the prerequisite for any meaningful AI risk assessment at the enterprise level.11
Zscaler ThreatLabz
Supply chain attack intelligence with the most current empirical data on attack frequency. ThreatLabz documented the March 2026 surge — five major software supply chain attacks in a single month, including the Axios NPM compromise attributed to North Korea — providing the attack-frequency baseline that actuarial and underwriting teams require. Beyond intelligence, Zscaler applies runtime behavioral analysis for detecting supply chain compromise at the network layer: identifying C2 callbacks, lateral movement patterns, and data exfiltration that result from compromised dependencies, regardless of whether the specific vulnerability has been catalogued.6