Leaked OpenAI API key — what attackers do and how to rotate

Your OpenAI key was committed to GitHub or shipped in a client bundle. Here is what an attacker can do in the first sixty seconds, how to rotate safely, and how to prevent the next one.

The next 60 seconds matter

Automated scrapers pick up OpenAI keys within seconds of a push to a public GitHub repository. The attacker then runs high-volume inference (GPT-4o, GPT-5, and similar frontier models) until your spend cap trips — reported damage commonly runs from roughly $1K to $50K per key before detection. Keys attached to team accounts can additionally query your fine-tuned models and read Assistants API thread histories, including attached files.

  • Burn your spend budget on frontier-model inference
  • Query any fine-tuned models trained on your private corpus — the weights themselves cannot be downloaded through the API, but model outputs can leak training data
  • Pull your Assistants API thread histories and uploaded files, if they exist
  • Create new fine-tune jobs that run up spend and poison your account
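The first thing a scraper does with a harvested key is check whether it is still live, typically with an unauthenticated-cost call such as `GET /v1/models`. A minimal sketch of that probe (function names are ours; the endpoint and Bearer-token auth are OpenAI's documented scheme):

```python
import urllib.request
import urllib.error

OPENAI_MODELS_URL = "https://api.openai.com/v1/models"

def build_probe(key: str) -> urllib.request.Request:
    """Build the GET /v1/models request scrapers use to test a key."""
    return urllib.request.Request(
        OPENAI_MODELS_URL,
        headers={"Authorization": f"Bearer {key}"},
    )

def key_is_live(key: str) -> bool:
    """Return True if the API accepts the key; a 401 means it is revoked."""
    try:
        with urllib.request.urlopen(build_probe(key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        return e.code != 401  # 401 = invalid/revoked; other codes are ambiguous
```

Once the probe succeeds, the same key works for every endpoint the account can reach — which is why revocation, not history rewriting, is the urgent step.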

Rotation playbook

  1. Revoke the compromised key at platform.openai.com/api-keys immediately
  2. Rotate the key in every environment (Vercel, GitHub Actions, local .env)
  3. Review usage in the past 24 hours at platform.openai.com/usage for unexpected spikes
  4. If you detect a spike, contact OpenAI billing support promptly to dispute the fraudulent charges
  5. Audit git history: `git log --all -p | grep -E 'sk-(proj-)?[A-Za-z0-9_-]{40,}'` — rewriting history with a force-push does not un-leak the key: copies survive in clones, forks, and scraper archives, so revocation is the only real fix

Prevent the next one

  • Never store the key in an env var prefixed NEXT_PUBLIC_ — Next.js inlines those values into the browser bundle at build time
  • Enable GitHub push protection org-wide
  • Set a daily spend cap per project in the OpenAI dashboard
  • Route inference server-side only; the browser never needs the key
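The last two points combine into one pattern: the key lives only in the server's environment, and the browser talks to your server, never to OpenAI. A minimal server-side sketch (function names and the `gpt-4o-mini` model choice are illustrative; the endpoint and headers are OpenAI's documented chat-completions API):

```python
import json
import os
import urllib.request

CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(key: str, user_message: str) -> urllib.request.Request:
    """Build a chat-completions request with the key in the auth header."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        CHAT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )

def chat(user_message: str) -> dict:
    # Key is read server-side only; a NEXT_PUBLIC_-prefixed var would
    # instead be baked into client JavaScript at build time.
    key = os.environ["OPENAI_API_KEY"]
    req = build_chat_request(key, user_message)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Expose `chat()` behind your own authenticated route; the client sends only the user's message, and the key never leaves the server.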

Pattern we scan for

  • `sk-proj-…` — project keys, 48+ characters after the prefix
  • `sk-…` — legacy keys, 51 characters total