Is AI-generated code safe?

Short answer

No. When the prompt is security-neutral, 45-62% of AI-generated code contains at least one security bug. With explicit security cues in the prompt, the rate drops to 8-20%. Trust AI code to work; never trust it to be safe without review or scanning.

Multiple independent 2025-2026 studies consistently find high insecure rates:

- Stanford (2025): 40% of AI-suggested code has a vulnerability
- Georgia Tech (2026): 45% insecure rate on neutral prompts across Claude, GPT-5, Gemini, and DeepSeek
- Tenzai (Dec 2025): 69 vulnerabilities across 15 AI-coded apps; every app missed CSRF

The three most common bugs in AI-generated code:

1. Missing authorization on API endpoints (BOLA, broken object-level authorization)
2. Secrets leaked into the client bundle via the wrong env-var prefix
3. Missing input validation, leading to SQL injection or XSS
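The first and third bugs share a fix shape: every data-access path both checks object ownership and binds user input as query parameters. A minimal sketch in Python (the `invoices` schema and `get_invoice` helper are illustrative examples, not taken from any cited study):

```python
import sqlite3

def get_invoice(conn, requester_id, invoice_id):
    """Fetch an invoice only if the requester owns it."""
    # Fix for bug 3: a parameterized query — user input is bound as a
    # parameter, never interpolated into the SQL string.
    row = conn.execute(
        "SELECT id, owner_id, amount FROM invoices WHERE id = ?",
        (invoice_id,),
    ).fetchone()
    if row is None:
        return None
    # Fix for bug 1: object-level authorization — verify the requester
    # owns this specific record, not merely that they are logged in.
    if row[1] != requester_id:
        raise PermissionError("requester does not own this invoice")
    return {"id": row[0], "owner_id": row[1], "amount": row[2]}
```

The BOLA check is the part AI-generated endpoints most often omit: the generated code typically authenticates the caller but never compares the caller's identity to the object's owner.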

The mitigation: pair AI coding with a security scanner that understands AI-generated-code patterns. A pattern-matching SAST will not do (its high false-positive rate makes it useless on AI code); you want a tool that verifies findings by reproducing the bug.

Practical workflow: Cursor / Claude / Copilot writes the code → Securie reviews every commit → you merge the fixes Securie proposes.
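The "reviews every commit" step is easiest to enforce in CI. A minimal GitHub Actions sketch, assuming a generic scanner: the `run-security-scan` step below is a placeholder for whatever verifying scanner you use, not a real Securie command.

```yaml
# .github/workflows/security-scan.yml — hypothetical sketch
name: security-scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: invoke your security scanner here and fail the job
      # on any confirmed finding, so insecure commits cannot merge.
      - name: run-security-scan
        run: ./run-security-scan --fail-on-findings
```

Failing the job on findings is the point: it turns "never trust AI code without review" from a habit into a merge gate.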

People also ask