From a growing sample of publicly reachable Supabase projects we've audited, the same seven mistakes come up every time: RLS disabled on at least one table, the service-role key shipped in the client, missing tenant scoping, default-allow policies, storage buckets with no policies, an exposed JWT secret, and over-broad grants to the anon role. Fixes for each.
Your first enterprise prospect just sent a 200-question security questionnaire. Here is the exact playbook to pass SOC 2 Type 1 in six weeks — the policies to copy, the controls to wire up, the mistakes to avoid. Written for solo founders.
We ran 500 authentication-related prompts against Claude Opus 4.7, GPT-5.4, Gemini 2.5, and DeepSeek V3.2. 92% of the generated code had at least one security bug. Here is the catalog of the top seven recurring mistakes.
Moltbook leaked 1.5 million API keys, 35,000 emails, and 4,060 private messages in 72 hours. Wiz's disclosure showed the root cause: a single Supabase table without row-level security. Here is the timeline, the exact bug, and the ten-minute hardening walkthrough for your own app.
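A table without RLS is externally observable: Supabase exposes every table through PostgREST, so asking for one row with only the anon key tells you whether anonymous clients can read it. A hedged sketch (the project URL and table name are placeholders; note that a 200 with zero rows is genuinely ambiguous, since PostgREST returns an empty array both for an empty table and for rows filtered out by RLS):

```python
import json
import urllib.error
import urllib.request

def probe_table(project_url: str, anon_key: str, table: str) -> tuple[int, int]:
    """Request one row of `table` using only the anon key.
    Returns (HTTP status, number of rows returned)."""
    req = urllib.request.Request(
        f"{project_url}/rest/v1/{table}?select=*&limit=1",
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, len(json.load(resp))
    except urllib.error.HTTPError as e:
        return e.code, 0

def verdict(status: int, rows: int) -> str:
    """Interpret a probe result."""
    if status == 200 and rows > 0:
        return "EXPOSED: anonymous clients can read this table"
    if status == 200:
        return "ambiguous: empty result (table empty, or RLS filtering rows)"
    if status in (401, 403):
        return "blocked: request rejected"
    return f"no data returned (HTTP {status})"
```

If a probe comes back EXPOSED, the immediate fix is enabling row-level security on that table in the SQL editor and then writing explicit policies for the access you actually intend -- default-deny first, allow second.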
The Next.js middleware-bypass vulnerability was disclosed in March 2025 and patched within 24 hours. One year later, 40 percent of public Next.js apps are still running vulnerable versions. Here is why, and the two-minute check to run on yours.
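The two-minute check is just a version comparison. A sketch assuming the patched releases listed in the advisory for CVE-2025-29927 (15.2.3, 14.2.25, and the 13.5.9 / 12.3.5 backports); how you treat majors outside those lines is a judgment call, flagged in the comments:

```python
def parse_version(v: str) -> tuple[int, int, int]:
    """Parse 'major.minor.patch' (prerelease suffixes are dropped)."""
    major, minor, patch = (int(x) for x in v.split("-")[0].split(".")[:3])
    return major, minor, patch

# First patched release per major line, per the advisory.
PATCHED = {15: (15, 2, 3), 14: (14, 2, 25), 13: (13, 5, 9), 12: (12, 3, 5)}

def is_vulnerable(next_version: str) -> bool:
    """True if this Next.js version predates the middleware-bypass patch."""
    v = parse_version(next_version)
    fixed = PATCHED.get(v[0])
    if fixed is None:
        # Majors above 15 shipped after the fix; majors below 12
        # never received a backport, so treat them as vulnerable.
        return v[0] < 12
    return v < fixed
```

Feed it the version that `npm ls next` reports. If you can't upgrade immediately, the stopgap the advisory described was stripping the `x-middleware-subrequest` header from incoming requests at the edge.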
We reran the 2025 study against Claude Opus 4.7, GPT-5.4, Gemini 2.5, and DeepSeek V3.2. The share of insecure suggestions has dropped — but only when the prompt explicitly asks for security. The prompts that reliably produce safer code are short, and we include them in this post.
We are opening invite-only early access to Securie, a new kind of application security tool built for the era of AI-generated code. Free during beta. Founding-rate discount for life.
Every major study in the last twelve months has measured the same thing: 40 to 62 percent of code produced by modern AI assistants contains a real security vulnerability. Here is what that looks like in practice, and why traditional SAST tools miss most of it.
A look at the three-layer model stack we ship with — Foundation-Sec local, GLM-5.1 and DeepSeek for primary reasoning, and a bounded frontier escalation layer. Why we chose it and what it costs.