Security you can verify.
AI you control.
The platform security fundamentals — SOC 2 alignment, AES-256, SSO, RBAC, audit logs — apply to every account. AI data sovereignty depends on which deployment option you choose. We'll explain exactly what each option means for your data.
Always on — every account
Platform security that doesn't depend
on how you configure AI.
SOC 2 Type II Alignment
SENTINEL is built against SOC 2 Type II security controls across all five trust service criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. Full audit reports available under NDA.
Encryption at Rest & In Transit
All data is encrypted at rest with AES-256 and in transit with TLS 1.3. Marketplace API credentials are stored in isolated, encrypted vaults — never accessible to SENTINEL staff.
Role-Based Access Controls
Granular, role-based permission system — control what each team member can view, edit, export, or act on. Per-module permissions, per-brand access scoping, and audit-logged admin actions.
SSO & MFA Support
Single Sign-On via SAML 2.0 and OAuth, compatible with Okta, Google Workspace, Microsoft Entra, and any standard identity provider. Multi-factor authentication enforced at the account level.
Full Audit Log
Every action in the platform — API calls, configuration changes, data exports, workflow triggers, and user logins — is captured in an immutable, tamper-evident audit log. Searchable by user, time, and action type.
AI data sovereignty
Your data sovereignty level is your choice.
Three ways to run SENTINEL AI.
SENTINEL never uses your data to train shared models — that's true regardless of configuration. But how far your data travels depends on which AI deployment you choose. Here's exactly what each option means.
Self-Hosted LLM
Your infrastructure. Your model. Your data never leaves.
SENTINEL connects to any LLM you host — Ollama, vLLM, LM Studio, or any OpenAI-compatible API endpoint. Inference runs on your hardware or your private cloud. No data transits SENTINEL's systems or any third-party AI provider. True zero-retention, by architecture.
Best for: High-compliance operators, brands with strict data governance, enterprise accounts.
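To make "any OpenAI-compatible API endpoint" concrete, here is a minimal sketch of what a request to a self-hosted model looks like. It assumes Ollama's default local address (`http://localhost:11434/v1`) and a hypothetical model name (`llama3`); your endpoint and model will differ. The point is architectural: the request targets your own infrastructure, so no prompt data ever transits a third party.

```python
import json
import urllib.request

# Assumption: Ollama's default OpenAI-compatible endpoint.
# Substitute your own vLLM / LM Studio / private-cloud address.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-compatible /chat/completions request.

    Because BASE_URL points at hardware you control, the prompt
    never leaves your network: zero-retention by architecture.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3", "Summarize this listing's recent reviews.")
# urllib.request.urlopen(req) would send it -- only ever to your own host.
```

Any tool that speaks the OpenAI chat-completions format can be pointed at this endpoint by swapping the base URL; nothing else about the request changes.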
Bring Your Own API Key
Connect your own OpenAI, Anthropic, or Gemini account.
You supply your own API key for your preferred cloud AI provider. SENTINEL routes inference through your account — subject to your own agreement with that provider, not ours. SENTINEL itself never stores prompts or responses. Billing goes directly to your provider account.
Best for: Teams already using cloud AI with existing enterprise data agreements.
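For comparison, a bring-your-own-key request is structurally identical to the self-hosted case; only the base URL and the Authorization header carrying your key change. This sketch uses OpenAI's public endpoint and a placeholder key read from an environment variable; the model name (`gpt-4o`) is illustrative.

```python
import json
import os
import urllib.request

# Your key, your account, your data-processing agreement --
# billing goes directly to your provider.
API_KEY = os.environ.get("OPENAI_API_KEY", "sk-your-own-key")

def build_provider_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request routed through your own account."""
    body = json.dumps({
        "model": "gpt-4o",  # illustrative; use whatever your agreement covers
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_provider_request("Draft a product description.")
```

Because the credential is yours, the provider's enterprise terms you already negotiated govern every request, not a separate agreement with SENTINEL.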
SENTINEL-Managed Cloud AI
We handle the AI integration. You're billed at cost.
SENTINEL configures and manages a cloud AI connection on your behalf — typically OpenAI or Anthropic, depending on your use case. Inference costs are passed through at cost with no markup. Your data is subject to that provider's enterprise data processing terms. SENTINEL does not use your data to train shared models — but the provider's own data policies apply.
Best for: Operators who want AI without setup overhead and are working with non-sensitive catalog data.
An honest note on "zero-retention"
When a vendor's marketing says "zero-retention AI," it usually means only that the AI company doesn't retain your data for training. SENTINEL itself never retains prompts, responses, or inferences; that's true regardless of configuration. But if you're using Option 3 (SENTINEL-Managed Cloud AI), your data will transit a third-party AI provider (e.g. OpenAI or Anthropic) and is subject to their enterprise data processing terms. For operators with strict data governance requirements, we recommend Option 1 (self-hosted) or Option 2 (your own API key under your own agreement). We'd rather you understand the tradeoffs than sign something you didn't fully read.
Questions about security
or your deployment options?
We'll walk you through the right AI configuration for your data governance requirements before you sign anything.