Lovable — VibeScamming prompt-injection backdoor
Guardio Labs disclosed a prompt-injection chain that tricked Lovable's AI into generating backdoored code. Attackers could supply crafted prompts that resulted in compromised apps shipping to production.
What happened
Guardio Labs published the VibeScamming research showing how Lovable's AI code generation could be manipulated via crafted prompts embedded in shared artifacts, project descriptions, or imported templates — resulting in generated code containing attacker-controlled backdoors.
Timeline
Guardio Labs begins security testing Lovable's AI pipeline.
Prompt injection confirmed to produce backdoored code in generated apps.
Guardio discloses to Lovable; initial mitigations applied.
Public write-up published.
Root cause
Lovable's pipeline ingested untrusted text (project titles, descriptions, templates) into the code-generation prompt without sanitization or tool-scope restriction.
Impact
- Affected apps silently shipped with backdoor code
- Users of compromised templates were exposed
- Category-level wakeup call on vibe-coding supply-chain risk
Partially. Securie's L20 pre-merge AI-code scanner analyzes AI-generated code for common backdoor patterns before it lands in a repo. The injection itself (upstream of generation) is a platform-level fix Lovable needs to make; the generated-code scan layer catches most resulting backdoors post-hoc.
Lessons
- Prompt injection is a real risk for any AI code generator
- Generated code must be scanned before merge, not trusted by default
- Template marketplaces need code review