AI product security — the 2026 baseline

AI products add a new threat model on top of standard SaaS risks. Prompt injection, model supply chain, cost-of-abuse, data residency in training, and EU AI Act compliance all become first-class concerns.

Top security risks

Prompt injection defeats guardrails

Every frontier model is jailbreakable given enough tokens. Tool-scope restriction is the defense.

Cost abuse via leaked keys

One leaked OpenAI key on GitHub = $50K burned overnight. Spend caps + rate limits are non-negotiable.

Customer data in model training

OpenAI / Anthropic offer zero-data-retention enterprise endpoints. Use them or get sued.

Model supply chain (weights source, provenance)

Using Hugging Face models without vetting = training data / backdoor risk.

Regulatory context

EU AI Act (high-risk AI systems by Aug 2026), EU Cyber Resilience Act (CE marking by 2027), NYC LL144 (bias audits), Colorado AI Act, state AI transparency rules.

Checklist

  • AIBOM published listing every model used
  • Zero-data-retention contracts with model providers
  • Prompt-injection regression corpus in CI
  • Tool-scope restriction documented per tool
  • Cost caps on every inference endpoint
  • Model card for each model in inference path
  • Human-oversight documented for any high-stakes decision
What your buyers look for

Enterprise buyers in 2026 ask AI-specific questions (AIBOM, data residency, training-data practice). Vanta's AI-readiness module and similar are emerging. Publish your AIBOM publicly.