7 min read

Why AI-generated code is unsafe by default

Every major study in the last twelve months has converged on the same range: 40 to 62 percent of code produced by modern AI assistants contains at least one real security vulnerability. Here is what that looks like in practice, and why traditional SAST tools miss most of it.

The data is not controversial anymore. Tenzai's December 2025 audit of fifteen apps built with the five largest AI coding platforms found sixty-nine vulnerabilities: every single app was missing CSRF protection, every platform introduced at least one SSRF, and none of them set security headers. Georgia Tech's Vibe Security Radar tracked thirty-five new CVE filings in March 2026 traced directly to AI-generated code, up from six in January. Escape.tech's crawl of 5,600 vibe-coded production apps found two thousand distinct vulnerabilities.

Why does this happen? Three reasons, in order of importance.

1. Code assistants optimise for "works", not "works safely"

When you ask Claude or GPT-5 for a Supabase query that returns the current user's orders, the obvious answer is:

select * from orders where user_id = auth.uid();

That query works. It passes every test you will write. It is also a textbook broken-access-control bug if your app is multi-tenant: the filter only exists in the client's request, so unless a row-level security policy enforces it server-side, any authenticated user can send their own query and read any order from any tenant. The AI does not know your authorization model. Neither does your linter.
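One way to close that hole is to stop trusting the query and enforce the check in the database. A minimal sketch, assuming a hypothetical multi-tenant schema with a `tenant_id` column on `orders` and a `memberships` table (both names are illustrative, not from the original):

```sql
-- Enforce access at the database layer, so the client-side filter
-- becomes a convenience rather than the security boundary.
alter table orders enable row level security;

-- Hypothetical policy: a user may read an order only if they own it
-- AND belong to the order's tenant (memberships is an assumed table).
create policy orders_owner_read on orders
  for select
  using (
    user_id = auth.uid()
    and tenant_id in (
      select tenant_id from memberships where user_id = auth.uid()
    )
  );
```

With a policy like this in place, the original query can stay exactly as the AI wrote it; the database applies the tenant check on every path, including ones no human reviewed.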

2. Classic SAST cannot see purpose

Tools like Snyk and Semgrep work by pattern-matching on AST shapes. "SELECT without a where clause on a user-controlled column" is detectable. "SELECT that forgets tenant-scoping because the AI didn't know tenants existed" is not. SAST finds the syntactic family of a bug; Securie reasons about the intent of the function and checks whether the implementation honors that intent.
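To make the distinction concrete, here are two queries that are syntactically identical to a pattern matcher (the table names and the tenant column are hypothetical):

```sql
-- On a single-tenant table, this shape is correct:
select * from invoices where user_id = auth.uid();

-- On a multi-tenant table, the exact same shape silently leaks
-- cross-tenant rows unless a tenant filter or RLS policy also exists:
select * from orders where user_id = auth.uid();  -- missing tenant scope
```

No AST rule can tell these apart. Deciding which one is a bug requires knowing which tables are tenant-scoped, and that knowledge lives in the application's intent, not its syntax.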

3. The review surface has collapsed

Before AI, a junior engineer would write a hundred lines of code a day. A senior would review them. Today the junior generates a thousand lines of code an hour, and the senior is generating too. Nobody is looking carefully. The pull-request description itself is often AI-generated.

What actually catches these bugs

Three things, in combination:

1. An intent graph. Each module declares what it is for (auth boundary, payment path, PII handler, admin surface). Violations of declared intent, not just pattern mismatches, become first-class findings.

2. A verification sandbox. A candidate finding is rebuilt in a disposable copy of the app with realistic fixtures, then the exploit is actually run. If it does not work, the finding is dropped before any human sees it.

3. A framework-aware patch. Securie writes the fix using the idiomatic APIs of the framework you are on: a Supabase RLS policy, not a generic SQL guard.
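For the second step, "actually run the exploit" can be as simple as replaying the suspect query under a different user's identity and checking what comes back. A sketch using PostgREST's JWT-claims mechanism, which Supabase builds on; the role name is Supabase's standard `authenticated`, and the fixture IDs are hypothetical:

```sql
begin;
-- Impersonate a second fixture user the way PostgREST does:
-- RLS policies will see this user's id via auth.uid().
set local role authenticated;
set local request.jwt.claims to '{"sub": "00000000-0000-0000-0000-000000000002"}';

-- Replay the candidate exploit. If row-level security is doing its job,
-- this returns zero rows; any rows back confirms the finding.
select count(*) from orders where tenant_id = 'victim-tenant-id';
rollback;
```

Running the attack against real policies is what separates a confirmed vulnerability from a plausible-looking false positive.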

Try it on your repo

If you are shipping on Next.js + Supabase + Vercel, connect your repo at securie.ai/signup. Free during early access, no card, one-click install.