Join Our Team

Build the Standard

AI Security Intelligence is defining how the world understands, measures, and insures AI risk. We're looking for people who want to shape that future.

Why This Work Matters

The AI security landscape is expanding faster than any institution can track alone. Enterprises are deploying AI systems at scale, insurers are underwriting risks they can barely quantify, and regulators are writing frameworks without real-time signal. The gap between adoption and accountability is widening every quarter.

We exist to close that gap. Our intelligence platform monitors 200+ companies across the AI security ecosystem, scores organizational readiness with proprietary frameworks like AIRS and CES, and delivers the analytical foundation that underwriters, CISOs, and policymakers need to make decisions with confidence.

This is foundational work — the kind that defines a discipline before it fully exists. If that sounds like something you want to be part of, we want to hear from you.

What Defines This Team

Depth Over Speed

We publish when the analysis is right, not when the deadline says so. Rigor is the product. Every score, every classification, every signal has to hold up under scrutiny.

Build in the Open

Our Trust Center, methodology documentation, and responsible AI principles aren't afterthoughts — they're core product. We hold ourselves to the same standard we apply to everyone else.

Domain Ownership

Every member of this team owns a domain — not just a task list. You'll have the latitude to shape your area of the platform, and the responsibility to make it the best in the industry.

Where We're Growing

We're building the team thoughtfully. These are the disciplines we're investing in: some roles are open now, others are on the near horizon. If your expertise aligns, we'd welcome a conversation.

AI Security Analyst

Monitor and classify AI security incidents, contribute to threat scoring, and maintain the intelligence pipeline that powers our platform. Background in cybersecurity, AI/ML, or intelligence analysis.

Intelligence · Remote

Research Engineer

Build and refine the scoring engines, data collectors, and analytical frameworks that underpin CES and AIRS. Strong Python and data engineering skills, and a genuine interest in AI risk modeling.

Engineering · Remote

Insurance & Risk Strategist

Bridge the gap between AI security intelligence and insurance underwriting. Help design scoring methodologies that actuaries and underwriters can operationalize. Experience in cyber insurance, risk modeling, or actuarial science.

Strategy · Coming Soon

Product Designer

Shape the experience layer of our intelligence platform — from data visualization and scoring interfaces to the editorial design of our research publications. Fluency in information-dense design is essential.

Design · Coming Soon

Regulatory & Compliance Analyst

Track and interpret AI regulatory developments across jurisdictions. Contribute to our Compliance Navigator and help clients understand the evolving landscape of AI governance requirements.

Policy · Coming Soon

AI Security Team & Leadership

AI Security Intelligence is built around a dedicated AI security team with deep specialization in threat intelligence, AI risk modeling, and security research. Our head of AI security leads the technical direction of our scoring infrastructure, while our AI safety team maintains the integrity of every model in our production pipeline.

Our team includes specialists drawn from enterprise cybersecurity, AI/ML research, insurance analytics, and policy. A dedicated security group red-teams our own systems, maintains our incident classification pipeline, and advances our proprietary methodologies. Our AI security leadership reports directly to the executive team: security is not a support function here, it is the core product.

We maintain an AI ethics board comprising internal leadership and external advisors. The board meets quarterly to review methodology changes, audit scoring fairness, and evaluate alignment with evolving standards, including the NIST AI RMF and ISO/IEC 42001. This governance structure keeps our scoring systems under continuous ethical oversight.

Industry & Research Engagement

Our analysts actively contribute to the AI security research community. We present findings at major industry conferences including RSA Conference, Black Hat, BSides, and OWASP events. Our threat intelligence methodology has been developed in dialogue with practitioners across the AI safety summit circuit, and our scoring frameworks draw on peer-reviewed research from venues including NeurIPS, ICML, and USENIX Security.

We publish original research through our Weekly Briefing, quarterly State of AI Security reports, and thematic deep-dives. Our research blog documents methodology evolution, incident analysis, and emerging threat patterns. We believe that the organizations best positioned to assess AI security risk are those actively engaged in understanding it at the frontier — through publications, conference participation, and open contribution to the standards bodies shaping this domain.

Our team engages with AAAI and DEF CON's AI Village community, contributing to the shared understanding of adversarial AI, model robustness, and the intersection of AI safety with traditional cybersecurity. This research presence is not peripheral to our mission — it is foundational to the credibility of every score we publish.

"The organizations that define how AI risk is measured will shape how the entire industry is governed. We intend to be that organization."

— AI Security Intelligence

Ready to Define the Standard?

We don't have a formal application process yet — and that's intentional. If our mission resonates, send us a note. Tell us what you'd bring, what excites you about this space, and how you'd want to contribute.

Start a Conversation

careers@aisecurityintelligence.com