Company Overview
AI-native security platform providing automated vulnerability detection and protection for LLMs and AI models.
Headquartered in Tel Aviv, Israel — a hub for cybersecurity innovation — DeepKeep offers the DeepKeep Platform to organizations that need automated model scanning, vulnerability detection, and integrity verification. The platform sits within the broader AI Model Security category, where AI Security Intelligence tracks 9 companies building specialized capabilities.
Founded in 2021, DeepKeep has been building its platform through the period when enterprise AI adoption, and the security challenges that come with it, began to accelerate sharply.
Why Watch This Company
Model security is the cybersecurity category that most closely mirrors the evolution of software supply chain security a decade ago, and DeepKeep is tackling that parallel head-on. Its approach to automated model scanning, vulnerability detection, and integrity verification addresses a threat surface that will only expand as organizations deploy more models in more critical workflows.
Key Facts
📍
Headquarters
Tel Aviv, Israel
🛡
Category
AI Model Security
⚙
Key Product
DeepKeep Platform
Primary Product
◆
DeepKeep Platform
AI Model Security Landscape
AI Model Security protects machine learning models from adversarial manipulation, supply chain compromise, intellectual property theft, and integrity attacks throughout their lifecycle. As models become core enterprise assets — trained on proprietary data and deployed in critical decision paths — they represent high-value targets for adversaries seeking to poison training data, inject backdoors, steal model weights, or manipulate inference outputs.
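The integrity side of this can be made concrete. One basic building block, independent of any particular vendor's implementation, is recording a cryptographic digest of a model artifact at training time and re-checking it before deployment, so that tampering anywhere in the supply chain is detectable. A minimal sketch (the file paths and workflow are illustrative assumptions, not DeepKeep's method):

```python
import hashlib


def sha256_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a model artifact from disk and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected_digest: str) -> bool:
    """Compare an artifact's current digest against the value recorded
    when the model was produced; any modification changes the digest."""
    return sha256_digest(path) == expected_digest
```

Real provenance systems go further (signed digests, attestation of the training pipeline), but a recorded hash is the primitive they build on.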
9 companies tracked in this category
Buyer's Evaluation Framework
Key questions to evaluate any AI Model Security vendor — including DeepKeep:
Can the platform scan ML models for vulnerabilities, backdoors, and malicious payloads before deployment?
Does the solution verify model provenance and integrity throughout the ML supply chain?
How does the vendor protect against adversarial attacks — both at training time (data poisoning) and inference time (evasion attacks)?
Does the platform support the full range of model architectures, including LLMs, vision models, and multi-modal systems?
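The first question above, scanning models for malicious payloads before deployment, can be illustrated with a common real-world case: pickle-serialized model files can execute arbitrary code on load, so a scanner can statically walk the pickle opcode stream and flag imports that have no place in a weights file. A simplified sketch, not a production scanner (the SUSPICIOUS denylist is illustrative, and real tools also cover formats such as SafeTensors and ONNX):

```python
import pickletools

# Illustrative denylist: callables a benign weights file should never import.
SUSPICIOUS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("builtins", "eval"), ("builtins", "exec"),
    ("subprocess", "Popen"),
}


def scan_pickle(data: bytes) -> list[tuple[str, str]]:
    """Statically inspect pickle opcodes (without unpickling) and report
    suspicious (module, name) imports via GLOBAL or STACK_GLOBAL."""
    findings = []
    recent = []  # last two string pushes, used to resolve STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # Protocols 0-3: arg is "module name" as one space-joined string.
            module, name = arg.split(" ", 1)
            if (module, name) in SUSPICIOUS:
                findings.append((module, name))
        elif op.name == "STACK_GLOBAL" and len(recent) == 2:
            # Protocol 4+: module and attribute were pushed as strings.
            module, name = recent
            if (module, name) in SUSPICIOUS:
                findings.append((module, name))
        elif isinstance(arg, str):
            recent = (recent + [arg])[-2:]
    return findings
```

Because the scan never calls `pickle.loads`, the payload is detected without ever giving it a chance to run.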
Featured Profiles in AI Model Security
Deep-dive intelligence profiles with full market analysis, development timelines, and product breakdowns.
📊 Funding History & Investment Rounds
👤 Executive Team & Key Hires
🎯 Competitive Positioning Matrix
📡 Signal Tracking — M&A, Product, Partnerships
📈 Quarterly Revenue & Growth Metrics
🔗 Supply Chain & Integration Mapping
Category Peers — AI Model Security
8 other companies in this category