Methodology

How We Score

Version 1.0  —  March 2026

AI Security Intelligence publishes this methodology document as a commitment to algorithmic transparency. Our clients, the organizations we score, and the broader AI security community deserve a clear accounting of how our ratings are produced. This document describes the factors we evaluate, the signals we collect, and the principles that govern our scoring engine.

Contents

  1. Our Approach
  2. The Ten-Factor Model
  3. Signal Collection
  4. Scoring Algorithm
  5. Grading Scale
  6. Confidence & Uncertainty
  7. Methodology Governance
  8. What We Don't Do

1. Our Approach

ASI's Control Effectiveness Scoring (CES) engine evaluates organizations' AI security posture using exclusively outside-in, publicly observable signals. We do not require access to internal systems, nor do we ask organizations to self-report. Our methodology mirrors the approach that transformed traditional cybersecurity ratings — adapted specifically for the AI security domain.

This discipline — rigorous, evidence-based, and non-intrusive — is what separates a credible security rating from a survey. It is also what makes our scores useful to insurers, investors, and enterprise security leaders who need an independent assessment of AI risk management posture rather than an organization's own account of it.

Our analysts treat every scored organization as an objective subject of study. No organization has privileged access to influence its own rating. Our commitment to responsible AI practices extends to the engine itself: we apply the same standards of rigor to our own systems that we demand from the organizations we evaluate.


2. The Ten-Factor Model

Our scoring framework evaluates AI governance and security maturity across ten weighted factors. Each factor captures a distinct dimension of an organization's posture. The weights reflect the relative contribution of each factor to overall AI security risk, informed by our proprietary AI Security Incident Database of 110+ classified incidents and our team's analysis of which control failures most frequently precede material AI security events.

 1. AI Model Governance & Transparency (15%)
    Measures whether an organization has published documented policies governing how its AI systems are developed, deployed, and monitored — including model cards, system cards, and accountability structures. Strong scores here indicate that AI transparency is treated as a first-class operational obligation.

 2. AI Supply Chain Security (12%)
    Evaluates the rigor applied to third-party AI components, pre-trained models, and machine learning libraries integrated into the organization's systems. Supply chain risk is among the fastest-growing vectors in AI security incidents.

 3. Shadow AI Exposure (12%)
    Assesses the observable evidence of unsanctioned or undisclosed AI tool adoption across the organization, including signals from job postings, code repositories, and procurement disclosures that suggest deployment outpacing governance.

 4. AI Endpoint & API Security (12%)
    Examines observable security hygiene across AI-exposed services and interfaces, including certificate management, public API posture, and disclosed vulnerability patterns in AI-adjacent infrastructure.

 5. AI Incident History & Response (10%)
    Draws on our AI Security Incident Database to assess whether an organization has experienced documented AI security incidents and, when applicable, how transparently and effectively it responded. Past incident behavior is a meaningful predictor of current risk culture.

 6. AI Data Protection & Privacy (10%)
    Evaluates the maturity of an organization's publicly disclosed data governance and privacy practices as they apply specifically to AI training data, inference pipelines, and model outputs — going beyond general privacy policy compliance to assess AI-specific data handling obligations.

 7. AI Agent & Autonomous Controls (10%)
    Assesses the governance structures in place for AI agents and autonomous systems operating with elevated permissions or reduced human oversight. This factor carries increasing weight as agentic architectures proliferate across enterprise environments.

 8. AI Model Robustness & Testing (8%)
    Measures evidence of formal adversarial testing, red-teaming, and robustness evaluation programs applied to AI systems before and after deployment. Explainability and bias detection practices are also factored here as indicators of model validation maturity.

 9. AI Regulatory Compliance (6%)
    Evaluates alignment with applicable AI compliance obligations and voluntary frameworks, including NIST AI RMF, ISO 42001, and the EU AI Act where applicable. Organizations that proactively align with emerging standards demonstrate forward-looking governance maturity.

10. AI Security Talent & Culture (5%)
    Assesses the depth of AI security expertise within the organization through publicly observable signals including hiring patterns, published research, leadership credentials, and participation in the AI security community. Culture shapes every other factor.

Factor weights are reviewed annually and updated when materially new evidence — from incident data or structural shifts in the AI threat landscape — warrants revision. All weight changes are versioned and documented.
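The ten published weights sum to exactly 100%, so the composite score stays on the same 0–100 scale as its components. The sketch below illustrates this arithmetic; the weights come from the table above, while the function name and sample scores are illustrative assumptions, not our production engine.

```python
# Illustrative sketch of the ten-factor weighting scheme. Weights are the
# published values; the composite_score helper is a hypothetical example.
FACTOR_WEIGHTS = {
    "AI Model Governance & Transparency": 0.15,
    "AI Supply Chain Security": 0.12,
    "Shadow AI Exposure": 0.12,
    "AI Endpoint & API Security": 0.12,
    "AI Incident History & Response": 0.10,
    "AI Data Protection & Privacy": 0.10,
    "AI Agent & Autonomous Controls": 0.10,
    "AI Model Robustness & Testing": 0.08,
    "AI Regulatory Compliance": 0.06,
    "AI Security Talent & Culture": 0.05,
}

# The published weights account for the full composite score.
assert abs(sum(FACTOR_WEIGHTS.values()) - 1.0) < 1e-9

def composite_score(component_scores: dict[str, float]) -> float:
    """Combine 0-100 component scores into a 0-100 composite."""
    return sum(FACTOR_WEIGHTS[f] * component_scores[f] for f in FACTOR_WEIGHTS)
```

Because the weights sum to one, an organization scoring 70 on every factor receives a composite of exactly 70.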


3. Signal Collection

Our intelligence pipeline gathers signals from multiple public and semi-public data sources. Collection is fully passive and non-intrusive — we never interact with a target organization's systems.

Our primary signal sources include:

  • Certificate transparency logs — for mapping AI-adjacent infrastructure and assessing certificate hygiene
  • Public code repositories — for detecting AI model usage, dependency exposure, and supply chain signals
  • Regulatory filings and disclosures — including SEC filings, EU AI Act high-risk system registers, and government contract databases
  • Published governance documentation — responsible AI policies, model cards, system cards, ethics board disclosures, and AI principles pages
  • Job market data — hiring velocity and role composition as a proxy for AI security investment and culture
  • Academic and conference disclosures — published research, CVE filings, and responsible disclosure reports involving artificial intelligence and machine learning systems
  • ASI Proprietary AI Security Incident Database — our analysts maintain a curated, 17-category classification of 110+ AI security incidents, updated continuously as new events emerge

Signal collection cadence varies by source type. Infrastructure signals are refreshed continuously; governance documentation signals are reviewed quarterly; incident history is updated in near-real time as new events are classified.


4. Scoring Algorithm

For each of the ten factors, our analysts and automated pipeline collect available signals and assess them for evidence quality. Signals are not treated as equal. A formal published policy with verifiable authorship carries greater weight than an uncorroborated mention in a marketing document. This confidence-weighted averaging model reduces the influence of ambiguous or low-fidelity evidence.

Each factor produces a component score between 0 and 100, representing our team's assessment of that organization's maturity along that dimension relative to the broader population of scored entities. The ten component scores are then combined using the published weights to produce a composite Control Effectiveness Score (CES) on a 0–100 scale.

The composite score is deterministic given the input signals. Our analysts review algorithmic outputs and may apply structured overrides in limited circumstances — for example, when a recently disclosed material incident has not yet propagated through automated signal feeds. All manual overrides are logged, attributed, and subject to peer review.
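The confidence-weighted averaging described above can be sketched as follows. The signal structure, confidence values, and numbers are our illustrative assumptions; the actual CES pipeline is proprietary and more involved.

```python
# Hypothetical sketch of confidence-weighted signal averaging for a single
# factor. Fields, confidence values, and scaling are assumptions for
# illustration, not the production CES implementation.
from dataclasses import dataclass

@dataclass
class Signal:
    value: float       # assessed maturity contribution, on a 0-100 scale
    confidence: float  # evidence quality, 0.0 (uncorroborated) to 1.0 (verified)

def factor_score(signals: list[Signal]) -> float:
    """Average signals weighted by evidence quality, so ambiguous or
    low-fidelity evidence has proportionally less influence."""
    total_weight = sum(s.confidence for s in signals)
    if total_weight == 0:
        raise ValueError("no usable evidence for this factor")
    return sum(s.value * s.confidence for s in signals) / total_weight

# A formal policy with verifiable authorship outweighs an uncorroborated
# marketing mention: the weighted average below lands near the strong signal.
evidence = [Signal(value=90, confidence=1.0),   # verified published policy
            Signal(value=30, confidence=0.2)]   # ambiguous marketing claim
```

Here `factor_score(evidence)` is (90·1.0 + 30·0.2) / 1.2, i.e. 80: the low-confidence signal moves the result far less than an unweighted mean would.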


5. Grading Scale

Composite scores are translated to letter grades according to the following fixed thresholds. Thresholds are stable across methodology versions unless a formal review concludes that population-level distributions have shifted materially.

A (80–100): Strong
   Mature AI security controls across all domains. The organization demonstrates consistent, documented governance and proactive security practices.

B (65–79): Good
   Solid overall posture with identifiable room for improvement in specific factors. Most core controls are in place.

C (50–64): Developing
   Basic measures are in place, but significant gaps remain across one or more high-weight factors. Active improvement is needed.

D (35–49): Weak
   Minimal observable controls. Material risk exposure is present. Immediate investment in AI security governance is warranted.

F (0–34): Critical
   Negligible posture. Little to no publicly observable AI security governance. Immediate and comprehensive remediation is required.
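The mapping from composite score to letter grade is mechanical and can be expressed directly. The function name below is ours; the boundaries are the published thresholds.

```python
# Maps a composite CES (0-100) to a letter grade using the published fixed
# thresholds. The function name is illustrative; the boundaries are not.
def letter_grade(ces: float) -> str:
    if not 0 <= ces <= 100:
        raise ValueError("CES must be on the 0-100 scale")
    if ces >= 80:
        return "A"   # Strong
    if ces >= 65:
        return "B"   # Good
    if ces >= 50:
        return "C"   # Developing
    if ces >= 35:
        return "D"   # Weak
    return "F"       # Critical
```

Note that the bands are contiguous: a score of 79.9 is a B, and 80.0 is the lowest A.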

6. Confidence & Uncertainty

Every score carries an associated confidence metric that reflects the quality and volume of signals available for that organization. When signal availability is limited for a given factor — for example, because an organization maintains minimal public-facing documentation — the engine reports reduced confidence rather than inferring a score it cannot support.
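One way to picture this behavior: the engine surfaces a confidence level alongside the score, and degrades that level rather than fabricating a score when evidence is thin. The thresholds, labels, and structure below are entirely our illustrative assumptions.

```python
# Hypothetical illustration of reporting reduced confidence instead of
# inferring an unsupported score. Thresholds and labels are assumptions.
from typing import NamedTuple, Optional

class ScoredFactor(NamedTuple):
    score: Optional[float]  # None when the evidence supports no score at all
    confidence: str         # "high" | "moderate" | "low"

def score_with_confidence(signals: list[float], min_signals: int = 3) -> ScoredFactor:
    """Report a score only at the confidence level the evidence supports."""
    if not signals:
        # No inference from absence of evidence: report low confidence, no score.
        return ScoredFactor(score=None, confidence="low")
    level = "high" if len(signals) >= min_signals else "moderate"
    return ScoredFactor(score=sum(signals) / len(signals), confidence=level)
```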

We believe intellectual honesty about uncertainty is itself a form of rigor. A low-confidence score is not a failure of our methodology; it is an accurate representation of what the evidence permits us to conclude. Organizations seeking to raise both their score and its confidence level can do so by publishing more substantive governance documentation — which is itself a genuine improvement in AI accountability.

Confidence levels are displayed alongside scores in all client-facing products and reports. Our analysts do not inflate confidence to make outputs appear more authoritative than the underlying evidence supports.


7. Methodology Governance

Changes to factor weights, scoring algorithms, or signal sources undergo formal internal review. No change takes effect without documentation of its rationale, expected impact on the scored population, and backtesting against historical data. All methodology versions are archived and attributed.

Our own scoring system is itself subject to the AI governance framework we apply to others. The principles of fairness, transparency, and human oversight that we evaluate in external organizations are embedded in how we manage our own engine. Our AI Governance Framework governs the scoring system and the intelligence pipeline that feeds it.

This document represents Version 1.0 of our published methodology. Substantive revisions will be released as new versions with a changelog. Minor clarifications that do not affect scoring behavior may be made without a version increment.

Questions about methodology can be directed to our team at methodology@aisecurityintelligence.com.


8. What We Don't Do

In the interest of complete algorithmic transparency, we are explicit about the boundaries of our methodology:

  • We do not perform active scanning. Our collection is entirely passive. We do not probe, ping, or interact with any organization's systems, networks, or services.
  • We do not conduct penetration testing. CES is not a technical security assessment. It is a governance and posture rating derived from publicly observable signals.
  • We do not use insider information. Our analysts do not solicit or accept non-public information about any scored organization.
  • We do not sell or share individual company scores without authorization. Scores are shared with the rated organization and, in aggregate form, used in our published research and indices. Individual scores are not licensed to third parties without the subject organization's consent.
  • We do not guarantee completeness. The outside-in model has inherent limits. An organization may have strong internal controls that are not publicly visible. Our score reflects observable evidence, not an audit conclusion.

Methodology Version 1.0 — Published March 2026. AI Security Intelligence LLC. For questions, contact methodology@aisecurityintelligence.com. See also: AI Governance Framework · Responsible AI Principles · Vulnerability Disclosure Policy
