Supply Chain Security

AI Supply Chain Integrity

Effective March 2026


AI Security Intelligence assesses the supply chain security posture of AI vendors, model providers, and agentic infrastructure companies as a core function of our intelligence platform. That position carries an obligation: our own AI supply chain must meet or exceed the standards we hold others to. This page documents how we govern every component, vendor, model, and integration that enters our production environment — from first-party code to third-party AI models to the emerging ecosystem of agentic tool servers.

Section 01

Dependency Transparency

Our intelligence team treats dependency transparency as a foundational security control, not an afterthought. Every software component — library, framework, runtime, or build tool — that enters our production systems is tracked, catalogued, and continuously evaluated for risk. We maintain a comprehensive software bill of materials (SBOM) for every service in our production stack, updated with each deployment.

Our SBOM practices conform to both CycloneDX and SPDX interchange formats, ensuring that our component inventories are machine-readable and compatible with the broadest possible set of downstream analysis tools. SBOMs are generated automatically at build time and stored as versioned artifacts alongside the deployable components they describe. This means our dependency management records are never allowed to drift from actual deployed state — the artifact and its inventory are produced together.
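As an illustration of the shape of these build-time inventories, the sketch below assembles a minimal CycloneDX-style SBOM document in Python. The component list is a hypothetical stand-in; a real generator would derive it from the build's lockfile rather than a hand-built list.

```python
import json


def make_cyclonedx_sbom(components):
    """Assemble a minimal CycloneDX-style SBOM for one build.

    `components` is a list of (name, version) pairs resolved at build
    time. Field names follow the CycloneDX JSON schema; the package
    ecosystem in the purl ("pypi") is illustrative.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:pypi/{name}@{version}",
            }
            for name, version in components
        ],
    }


# Example: emit the SBOM alongside the artifact it describes.
sbom = make_cyclonedx_sbom([("requests", "2.31.0"), ("urllib3", "2.2.1")])
sbom_json = json.dumps(sbom, indent=2)
```

Because the SBOM is produced in the same build step as the artifact, the two cannot drift apart: redeploying the artifact always ships the matching inventory.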

Software composition analysis (SCA) is embedded in our continuous integration pipeline. Every pull request that modifies dependency declarations triggers an automated SCA scan that evaluates new and updated components for known vulnerabilities, license risk, and package integrity. Our analysts are notified of any finding that exceeds our risk tolerance thresholds before the change can be merged. We do not merge dependency changes with unreviewed critical or high-severity findings.
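The merge gate described above can be expressed as a simple policy check. This is a minimal sketch with invented field names (`severity`, `reviewed`), not a reference to any particular SCA tool's output format:

```python
# Severities that block a merge while unreviewed, per the policy above.
BLOCKING_SEVERITIES = {"critical", "high"}


def merge_allowed(findings):
    """Gate a dependency-changing pull request on SCA scan results.

    `findings` is a list of dicts with `severity` and `reviewed` keys.
    Unreviewed critical/high findings block the merge; everything else
    is surfaced to analysts but does not stop the change.
    """
    blockers = [
        f for f in findings
        if f["severity"] in BLOCKING_SEVERITIES and not f["reviewed"]
    ]
    return len(blockers) == 0, blockers
```

For example, a scan returning one unreviewed high-severity finding yields a blocked merge, while the same finding marked as analyst-reviewed allows it.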

CVE Monitoring and Vulnerability Scanning

Vulnerability scanning does not stop at the point of deployment. Our automated systems run continuous CVE monitoring against the full dependency graph of every production service, comparing declared components against the National Vulnerability Database, GitHub Advisory Database, and curated threat intelligence feeds maintained by our analysts. When a newly published CVE affects a component in our SBOM, our intelligence team is alerted within hours and a remediation ticket is created automatically.

We treat package integrity as a hard requirement for all production dependencies. Every package consumed from public registries is verified against its published checksum and, where available, its cryptographic signature before it enters our build environment. We do not permit the use of packages that cannot be verified against a known-good integrity baseline. Our dependency pinning policies prevent silent upgrades — all version changes are explicit, reviewed, and reflected immediately in our updated software bill of materials.
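The integrity check at the heart of this policy is a digest comparison. A minimal sketch, assuming SHA-256 pinning (signature verification, where available, would layer on top of this):

```python
import hashlib


def verify_package(artifact: bytes, pinned_sha256: str) -> bool:
    """Compare a downloaded package against its pinned checksum.

    A package whose digest does not match the known-good baseline is
    rejected before it enters the build environment.
    """
    return hashlib.sha256(artifact).hexdigest() == pinned_sha256


# Example: a byte-for-byte match passes; any tampering fails.
blob = b"example package contents"
pinned = hashlib.sha256(blob).hexdigest()
assert verify_package(blob, pinned)
assert not verify_package(blob + b" tampered", pinned)
```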

Our dependency transparency commitment extends to transitive dependencies. We do not accept a clean direct dependency graph as sufficient assurance when transitive components may carry risk. Our SCA tooling evaluates the full transitive closure of every dependency tree, and our SBOM artifacts reflect this complete picture. When transitive dependencies introduce license conflicts or vulnerability exposure, our team resolves them with the same rigor applied to direct dependencies.
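Evaluating the full transitive closure amounts to a graph walk over the dependency tree. The sketch below shows the idea with a hypothetical package graph; real SCA tooling resolves this from lockfiles and registry metadata:

```python
from collections import deque


def transitive_closure(direct, graph):
    """Walk the full dependency tree, not just direct declarations.

    `graph` maps each package to the packages it depends on; the result
    is every component that must appear in the SBOM and be scanned.
    """
    seen, queue = set(), deque(direct)
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue  # already visited; cycles are safe
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen


# Example with invented package names: "lib-c" is two hops away from
# the application but still lands in the scanned inventory.
graph = {"app": ["lib-a", "lib-b"], "lib-a": ["lib-c"]}
closure = transitive_closure(["app"], graph)
```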

Section 02

Vendor Governance

Every third-party service, API provider, cloud platform, and AI model vendor that touches our production environment is subject to our vendor governance program. Our analysts apply the same critical lens to our own vendor relationships that we apply when evaluating AI companies for our intelligence database. Being the party that scrutinizes others does not earn our own supply chain a lighter standard.

Vendor Due Diligence and Assessment

Our vendor due diligence process begins before any vendor reaches production. Before onboarding a new vendor, our team conducts a structured vendor assessment covering security controls, data handling practices, incident history, and certifications. We issue a security questionnaire to every vendor that will process data or operate in proximity to our production systems. Responses are evaluated against our minimum security baseline, and vendors that cannot demonstrate adequate controls are not onboarded regardless of functional fit.

Our vendor management program maintains active records for all production vendors, including their current certification status, last assessment date, outstanding findings, and escalation contacts. Vendor risk is reviewed at least annually and whenever a vendor discloses a material security incident, undergoes a significant ownership change, or modifies the scope of their service in ways that affect our data exposure.

We prioritize vendors that hold recognized security certifications. We give strong preference to AI and infrastructure vendors that have obtained SOC 2 Type II attestation or ISO 27001 certification, and we verify these certifications directly against issuing auditor records rather than relying on vendor self-attestation. For vendors handling particularly sensitive processing functions, we require both SOC 2 and ISO 27001 before they can be approved for production use.

Third-Party Risk and Subprocessor Oversight

Third-party risk management at ASI encompasses not only direct vendor relationships but also the downstream subprocessor chains that our vendors rely on. We require our primary vendors to disclose material subprocessor relationships and to notify us before adding new subprocessors that would expand the scope of data processing. This subprocessor visibility is a contractual requirement, not a courtesy request.

Our third-party assessment framework distinguishes between vendors by their data access level and their position in our critical path. Vendors that process or store data, or whose failure would impair our core assessment capabilities, are subject to enhanced vendor audit rights that allow our team to request evidence of controls, review penetration test results, and request remediation timelines for open findings. Vendor risk is quantified and tracked as a standing item in our security governance reporting.

All data processor agreements include provisions for incident notification, data deletion on termination, and prohibition on secondary use of our data for model training or analytics. We do not accept vendor terms that would permit our data to be used to improve third-party models without explicit consent.

Section 03

Model Provenance & Lineage

AI models occupy a unique position in our supply chain: they are both software artifacts and encoded representations of the data and processes used to produce them. Our model provenance program treats models with the same rigorous inventory and verification discipline we apply to software packages — with additional scrutiny for the training conditions and lineage that determine how a model will behave in production.

Model Cards and Documentation

Every AI model deployed in our production environment has a corresponding model card maintained by our intelligence team. These model cards document the model's intended purpose, its performance benchmarks on our internal evaluation sets, known limitations and failure modes, the provenance of its training data, and the conditions under which its outputs should be treated with elevated skepticism. Model cards are version-controlled alongside the models they describe and are updated whenever a model is retrained, fine-tuned, or replaced.
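One way to keep such a record version-controlled alongside the model is to define it as a typed structure in code. The field names below are illustrative, mirroring the categories described above rather than any particular model-card schema:

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Version-controlled record accompanying a deployed model.

    Fields mirror the documentation categories described in the text;
    the structure itself is a hypothetical sketch, not our schema.
    """
    model_name: str
    model_version: str
    intended_purpose: str
    benchmark_results: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    training_data_provenance: list = field(default_factory=list)
    elevated_skepticism_conditions: list = field(default_factory=list)


# Example entry with invented values.
card = ModelCard(
    model_name="vendor-risk-scorer",
    model_version="2.3.1",
    intended_purpose="Score vendor security posture from assessment data",
    known_limitations=["Untested on non-English source documents"],
)
```

Storing the card next to the model weights means a retrain or fine-tune that forgets to update the card shows up as a stale file in review, not a silent gap.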

For models we operate internally, our analysts document full model lineage — including the sequence of training runs, fine-tuning passes, and post-training alignment procedures that produced the deployed artifact. This lineage documentation makes it possible to trace any behavioral characteristic of a deployed model back to specific decisions in its development history. When we observe unexpected model behavior in production, our model lineage records are the first resource our team consults.

Training Data and Data Provenance

Data provenance is a first-class concern in our model development process. Our analysts maintain detailed records of every training dataset used to develop or adapt models in our production stack — including the dataset's origin, the date it was collected, any known quality issues or biases, and the data processing steps applied before it entered training. We do not use training data whose provenance cannot be documented.

For fine-tuned models adapted to our domain, we maintain records of the domain-specific training data that shaped the model's specialized capabilities. These records include data sourcing documentation, filtering criteria, and any human review processes applied to curate the fine-tuning corpus. Data lineage connects every production model back to the datasets that define its behavior.

Model Registry, Versioning, and Reproducibility

All models in active use are registered in our internal model registry, which serves as the authoritative inventory of AI systems across our production environment. The model registry records each model's version, deployment date, approval status, associated model card, and the analyst responsible for its governance. No model enters production without a registry entry and a completed model card.

Model versioning is enforced across our entire model lifecycle. Models are never silently updated in production — every version change is explicit, documented, and subject to our standard promotion process, which includes evaluation against our held-out benchmark sets and analyst sign-off. Older versions are retained in our registry for a defined retention period to support incident investigation and regression analysis.
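The promotion gate described above reduces to two conditions: every benchmark meets its minimum, and an analyst has signed off. A minimal sketch, with invented benchmark names and thresholds:

```python
def promotion_approved(candidate_scores, thresholds, signed_off: bool) -> bool:
    """Check a model version against the standard promotion gate.

    `candidate_scores` maps benchmark names to scores; every benchmark
    named in `thresholds` must meet its minimum, and an analyst must
    have signed off. A missing benchmark counts as a failing score.
    """
    meets_bar = all(
        candidate_scores.get(name, 0.0) >= minimum
        for name, minimum in thresholds.items()
    )
    return meets_bar and signed_off


# Example: passing scores without sign-off still do not promote.
thresholds = {"safety_eval": 0.90, "accuracy": 0.85}
scores = {"safety_eval": 0.94, "accuracy": 0.88}
```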

Reproducibility is a design requirement for our model development process. Our training pipelines are code-defined and version-controlled, so that any model in our registry can be reproduced from its documented inputs given the original training data and the training code at the relevant commit. This reproducibility discipline protects us against silent drift and supports the forensic traceability that our position as an AI security assessor demands.

Section 04

Software Supply Chain Integrity

Securing the software supply chain requires more than dependency scanning. Our analysts assess AI vendors on the maturity of their build security, artifact signing, and provenance attestation practices — and we hold our own engineering operations to the same framework. We apply industry-leading standards for secure supply chain governance to every component of our build and deployment infrastructure.

SLSA and Build Provenance

Our build systems are designed to comply with the Supply Chain Levels for Software Artifacts (SLSA) framework. We target SLSA Build Level 2 across our core production services, with our most critical scoring infrastructure held to SLSA Build Level 3 controls. This means our builds are performed in isolated, ephemeral environments with no persistent access to production credentials, and every build produces a signed build provenance attestation that records the exact inputs, the build environment hash, and the identity of the system that performed the build.

Build attestation records are stored as immutable artifacts and verified at deployment time. Our deployment pipeline rejects any artifact that lacks a valid, unexpired build attestation from our authorized build infrastructure. This policy closes the gap between "what we built" and "what we deployed" — an attacker who tampers with an artifact after the build step will produce an artifact whose attestation does not match, and deployment will fail.
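Structurally, the deployment-time check asks three questions of each artifact: was it built by trusted infrastructure, is the attestation still valid, and does the attested digest match the artifact in hand? The sketch below shows that policy shape only; a real pipeline would also verify the attestation's cryptographic signature (e.g. with in-toto or Sigstore tooling), and the builder name and fields are invented:

```python
import time

# Hypothetical identity of our authorized build system.
TRUSTED_BUILDERS = {"ci-builder-prod"}


def deployable(artifact_digest: str, attestation: dict, now=None) -> bool:
    """Deployment-time gate: reject artifacts lacking a valid attestation.

    Checks builder identity, expiry, and that the artifact's digest
    equals the attested subject. Signature verification is omitted here.
    """
    now = time.time() if now is None else now
    return (
        attestation["builder"] in TRUSTED_BUILDERS
        and attestation["expires_at"] > now
        and attestation["subject_digest"] == artifact_digest
    )
```

An artifact tampered with after the build step carries a digest that no longer equals the attested subject, so the final comparison fails and deployment is refused.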

Code Signing and Artifact Integrity

Code signing is applied to all deployable artifacts produced by our build pipeline. We use hardware-backed signing keys managed through a dedicated secrets management system, with key access restricted to our automated build infrastructure and unavailable to individual engineers. This means that a compromised developer workstation cannot produce a validly signed artifact — signing authority is held exclusively by our controlled build environment.

Artifact signing extends to container images, model artifacts, configuration packages, and data pipeline definitions. Every artifact that crosses the boundary from our build environment to our deployment environment carries a cryptographic signature that is verified before execution. We maintain a signature transparency log that allows our security team to audit the complete history of signed artifacts in our production environment.

SSDF and NIST 800-218 Alignment

Our secure software development practices are aligned with the NIST Secure Software Development Framework (SSDF), published as NIST SP 800-218. This alignment covers our requirements engineering and threat modeling processes, our code review and static analysis practices, our security testing cadence, and our vulnerability disclosure and response procedures. Our engineering team is trained on SSDF practices, and our build pipeline enforces automated controls that operationalize the framework's technical requirements.

We engage with the OpenSSF ecosystem of open source security tooling and guidance. Our repositories are evaluated against the OpenSSF Scorecard, which assesses security practices including branch protection, dependency update automation, signed releases, and security policy publication. We treat the OpenSSF Scorecard as a continuous monitoring tool, not a one-time certification exercise, and our intelligence team reviews scorecard results as part of our regular security governance reporting.

Our commitment to a secure supply chain is not limited to following published frameworks. Our analysts actively monitor emerging supply chain attack patterns — including dependency confusion attacks, typosquatting campaigns, and compromised maintainer accounts — and adapt our controls in response to observed threat activity in the open source ecosystem.

Section 05

Agent Integration Supply Chain

The emergence of agentic AI architectures has created a new and rapidly evolving supply chain surface. As AI agents increasingly communicate through structured protocols to access tools, retrieve context, and invoke remote capabilities, the Model Context Protocol (MCP) has emerged as a primary integration layer for agentic systems. Our intelligence team tracks the security implications of this protocol as part of our core research mandate — and we apply formal supply chain governance to every MCP server and agentic integration in our own environment.

MCP Inventory and Discovery

We maintain a comprehensive MCP inventory covering every MCP endpoint that our agents are authorized to contact. This inventory — our internal registry of approved MCP server instances — serves the same function as our software bill of materials for traditional dependencies: it provides a complete, current, auditable record of what our agents can access, under what conditions, and with what level of trust. No MCP server may be accessed by our production agents unless it appears in this approved inventory.
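The deny-by-default character of this inventory can be sketched as a simple allowlist lookup. The endpoint URLs and trust labels below are placeholders; in production the check would consult the registry service, not a static table:

```python
# Hypothetical approved-inventory entries (URLs are placeholders).
APPROVED_MCP_SERVERS = {
    "https://tools.internal.example/mcp": {"trust": "internal"},
    "https://search.vendor.example/mcp": {"trust": "third-party"},
}


def authorize_mcp_connection(endpoint: str) -> dict:
    """Deny-by-default check against the approved MCP inventory.

    Returns the inventory entry for an approved endpoint; any endpoint
    absent from the inventory is refused outright.
    """
    entry = APPROVED_MCP_SERVERS.get(endpoint)
    if entry is None:
        raise PermissionError(f"MCP endpoint not in approved inventory: {endpoint}")
    return entry
```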

Our MCP discovery process governs how new tool servers are evaluated and onboarded. When an engineering team proposes integrating a new MCP endpoint, the integration undergoes security review before it is added to the approved inventory. This review covers the server's exposed capabilities, its authentication requirements, the data it can access or return, and — for any third-party MCP provider — the operator's security posture and compliance documentation.

Server Cards and Capability Negotiation

We require that every approved MCP server in our environment be accompanied by a server card — our adaptation of the model card concept applied to tool servers. Server cards document the server's declared capabilities, its data handling behavior, any persistent state it maintains, its authentication mechanism, and the analyst responsible for its continued governance. Capability negotiation between our agents and MCP servers is constrained by policy: agents may only invoke capabilities that appear in the server card for that endpoint and that have been explicitly approved for the agent's operational context.

Our agents do not perform unrestricted MCP discovery at runtime. Dynamic capability discovery — where an agent queries an MCP endpoint and decides at runtime what tools to invoke — is permitted only within the boundaries established by the pre-approved server card. This policy mitigates prompt injection and capability escalation scenarios in which a compromised or malicious MCP server advertises capabilities that were never authorized for use.
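The effect of the policy is an intersection: an agent may invoke only tools that the server advertises, that appear in its server card, and that are approved for that agent's context. A minimal sketch with invented tool names:

```python
def allowed_tools(advertised, server_card_capabilities, agent_approved):
    """Constrain runtime MCP discovery to pre-approved capabilities.

    Only tools present in all three sets — advertised by the server,
    listed in the server card, and approved for this agent's context —
    survive; anything the server advertises beyond its card is dropped.
    """
    return set(advertised) & set(server_card_capabilities) & set(agent_approved)


# Example: a server that starts advertising "delete_records" beyond its
# card gains nothing — the unauthorized capability is filtered out.
tools = allowed_tools(
    advertised=["search", "fetch_doc", "delete_records"],
    server_card_capabilities=["search", "fetch_doc"],
    agent_approved=["search"],
)
```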

MCP Vendor Governance

Third-party MCP vendor relationships are managed under our vendor governance framework, with additional scrutiny appropriate to the agentic context. A third-party MCP provider that can supply data directly to our agents' context windows occupies a privileged position in our trust architecture — one that demands the same rigor we apply to infrastructure vendors with direct data access. We evaluate MCP vendor candidates against our security questionnaire, require documentation of their data handling practices, and conduct structured capability reviews before approving any new external tool server for production use.

Our analysts maintain a standing watch on the evolving MCP security landscape, tracking published vulnerabilities, emerging attack patterns, and changes to the Model Context Protocol specification that may introduce new supply chain risks. This intelligence directly informs the controls we apply to our own MCP integrations and, in parallel, the supply chain security assessments we produce for our clients.

Section 06

Continuous Monitoring

Supply chain security is not a point-in-time posture. Components that were secure at the time of deployment become vulnerable as new CVEs are published. Vendors that passed due diligence last year may have experienced incidents or ownership changes since. Models that performed within acceptable bounds may degrade as the world changes around them. Our continuous monitoring program ensures that the security posture of our supply chain is known, current, and acted upon — not just documented at onboarding.

CVE Monitoring and Vulnerability Management

Our automated systems perform continuous CVE monitoring across our complete production dependency graph. When a new vulnerability is published that affects any component in our software bill of materials, our intelligence team receives an alert within the same business day. Alerts are triaged by severity: critical vulnerabilities affecting production systems trigger immediate escalation and a defined remediation SLA; high-severity findings are scheduled for remediation within a standard window; medium and low findings are batched into our regular patching cycle.
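A severity-based triage router like the one described can be sketched as a lookup table. The SLA windows and action names below are illustrative stand-ins, not our published remediation targets:

```python
# Illustrative routing policy; the real SLA windows are not published.
TRIAGE_POLICY = {
    "critical": {"action": "escalate_immediately", "sla_days": 2},
    "high": {"action": "schedule_remediation", "sla_days": 14},
    "medium": {"action": "batch_into_patch_cycle", "sla_days": 30},
    "low": {"action": "batch_into_patch_cycle", "sla_days": 30},
}


def triage(cve_severity: str) -> dict:
    """Route a newly published CVE alert according to severity policy."""
    return TRIAGE_POLICY[cve_severity.lower()]
```

The point of encoding the policy as data is that changing an SLA is a one-line, reviewable diff rather than an edit scattered across alerting logic.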

Vulnerability scanning runs continuously in our production environment using runtime analysis tools that complement our build-time SCA scans. This defense-in-depth approach means we catch vulnerabilities that emerge after deployment — such as newly published CVEs for components that were clean when they were built — without waiting for the next deployment cycle. Our security team reviews scan results as part of their daily operations rhythm.

OpenSSF Scorecard and Ecosystem Health

We run the OpenSSF Scorecard against our repositories and those of our critical open source dependencies on a weekly cadence. Scorecard results are tracked over time so our team can identify degradation in the security posture of dependencies — for example, a dependency that stops publishing signed releases or disables branch protection policies. Such changes are not merely academic: they indicate a weakened supply chain that may become a vector for attack.

Our intelligence team monitors the OpenSSF community's published advisories and research on ecosystem-level supply chain threats. When the OpenSSF publishes findings about compromised packages, malicious maintainers, or structural vulnerabilities in package registry infrastructure, our team evaluates our exposure within hours and responds accordingly. This community-connected intelligence capability complements our automated scanning with the human judgment that automated tools cannot provide.

Vendor and Model Monitoring

Continuous monitoring extends beyond software dependencies to our vendor and model supply chains. Our vendor management system tracks the certification renewal status of every production vendor — when a SOC 2 report nears expiration or an ISO 27001 surveillance audit is due, our team initiates re-verification before the certification lapses. Vendors that fail to maintain current certifications are moved to a remediation track, and their risk classification is elevated until certification is restored.
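The re-verification trigger amounts to a date comparison with a lead time. A minimal sketch — the 90-day lead time is an assumed value for illustration:

```python
from datetime import date, timedelta


def needs_reverification(cert_expiry: date, today: date,
                         lead_time: timedelta = timedelta(days=90)) -> bool:
    """Flag a vendor certification for re-verification before it lapses.

    The lead time (90 days here, chosen for illustration) ensures the
    renewal process starts while the current attestation is still valid.
    """
    return today >= cert_expiry - lead_time
```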

Our model monitoring program evaluates deployed models on a defined cadence against our internal benchmark sets, checking for behavioral drift, degraded performance on safety-relevant test cases, and emerging failure modes that were not present at deployment. Model monitoring results are reviewed by our analysts, and models that fall below performance thresholds are flagged for retraining, replacement, or enhanced human oversight while remediation is in progress.
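Unlike the boolean promotion gate, a monitoring run needs to report *which* benchmarks fell below bar so analysts can direct remediation. A hypothetical sketch of that comparison (benchmark names invented; a missing score is treated as a failure):

```python
def monitoring_verdict(scores: dict, thresholds: dict) -> dict:
    """Compare a deployed model's monitoring run against its thresholds.

    Returns the benchmarks that fell below their minimum, with the
    observed score; a benchmark absent from the run is reported at 0.0.
    A non-empty result flags the model for retraining, replacement, or
    enhanced human oversight.
    """
    return {
        name: scores.get(name, 0.0)
        for name, minimum in thresholds.items()
        if scores.get(name, 0.0) < minimum
    }


# Example: one degraded benchmark and one missing benchmark both surface.
failing = monitoring_verdict(
    scores={"safety_cases": 0.81, "accuracy": 0.90},
    thresholds={"safety_cases": 0.90, "accuracy": 0.85, "drift_check": 0.50},
)
```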

The threat landscape for AI supply chain attacks evolves rapidly. Our intelligence team publishes internal advisories when significant new supply chain attack patterns emerge — whether targeting model artifacts, package registries, agentic tool servers, or build infrastructure. These advisories inform both our internal security operations and the supply chain security assessments we produce for clients, ensuring that our own posture and our analytical output remain current with the threat environment we track.

"The standard we apply to others, we apply to ourselves — supply chain security is not a score we assign; it is a discipline we practice."

Questions about our supply chain security practices? Contact our team at trust@aisecurityintelligence.com. For our broader security governance framework, see the AI Governance Framework. For our responsible AI commitments, see Responsible AI Principles.
