Vibe Coding Is a Security Disaster: Here's the Skill File That Fixes It
Lovable and Vercel both had security incidents in one week. Vibe coding platforms optimize for speed and treat security as a documentation problem. Here is the Cursor skill file that adds a real review layer to every AI code session.

Lovable's official response to their security incident was: we told you public means public. Vercel got breached this week through a compromised third-party AI tool. Environment variables exposed. Customers told to rotate all credentials.
Two incidents in seven days. Both in the vibe coding ecosystem.
This is not a coincidence. Speed-optimized platforms are built to reduce friction between idea and deployment. Security review is friction. The math is not hard to follow.
What Vibe Coding Teams Get Wrong
The failure pattern is consistent across every team I walk into that runs AI-heavy development workflows.
Engineers accept AI diffs without reading them for security implications. The agent wrote working code. The tests pass. The PR ships. Nobody checked whether that working code exposed something it should not.
The most common issues:
- API keys hardcoded into client components. The agent put them there because the prompt had the key in it. The engineer accepted the diff.
- OAuth flows missing state validation. The AI built the happy path. Edge cases like CSRF protection were not in scope.
- Database queries accessible through public API routes. The AI did not know the route was meant to be private. No one told it.
- Environment variables leaking into client-side bundles. The naming convention was wrong. The agent followed the pattern it saw in the codebase.
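Most of these reduce to one missing default: routes are private unless someone explicitly says otherwise. Here is a minimal sketch of that pattern — the types and names are hypothetical stand-ins for a real framework's request and session objects, not any particular library's API:

```typescript
// Sketch of an auth-by-default route wrapper. Req/Res are simplified
// stand-ins for a real framework's request/response types.
type Req = { headers: Record<string, string | undefined> };
type Res = { status: number; body: unknown };
type Handler = (req: Req) => Res;

// Stand-in session check; a real app would verify a signed session token.
function getSession(req: Req): { userId: string } | null {
  return req.headers["authorization"] === "valid-token"
    ? { userId: "u_123" }
    : null;
}

// Every handler goes through withAuth. A route is only public when the
// author opts in explicitly, so "the AI did not know the route was meant
// to be private" stops being possible.
function withAuth(handler: Handler, opts: { public?: boolean } = {}): Handler {
  return (req) => {
    if (!opts.public && !getSession(req)) {
      return { status: 401, body: { error: "unauthorized" } };
    }
    return handler(req);
  };
}

const getProfile = withAuth(() => ({ status: 200, body: { userId: "u_123" } }));
```

With this wrapper in the codebase, an agent following existing patterns produces private routes by default instead of public ones.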
None of these are AI errors in the way people imagine. The model did what it was instructed to do. The problem is that "write me an auth flow" does not include "check for these specific security gaps" unless someone explicitly adds that instruction.
That is the gap a skill file fills.
The Skill File That Fixes It
A Cursor skill file is a set of instructions that loads into the agent context window before every task. You can write one specifically for security review. Here is the one I give teams running AI-heavy workflows:
```markdown
# Security Review — Cursor Skill File
## Run Before Accepting Any AI-Generated Diff
### Secrets and Environment Variables
- [ ] No hardcoded API keys, tokens, or credentials in any file
- [ ] All secrets referenced via process.env, never inline
- [ ] NEXT_PUBLIC_ prefix only on variables intentionally exposed to the client
- [ ] .env.local is in .gitignore
### API Routes and Access Control
- [ ] Every API route has authentication middleware unless explicitly marked public
- [ ] User input validated before use in queries or external calls
- [ ] No public routes returning user-specific data without an auth check
### OAuth and Auth Flows
- [ ] State parameter validated on OAuth callbacks
- [ ] Session tokens rotated after privilege escalation
- [ ] No auth tokens stored client-side without explicit review
### Third-Party AI Tool Integration
- [ ] Third-party tools use scoped API keys with minimum required permissions
- [ ] Webhooks validated with shared secrets
- [ ] AI-generated outputs treated as untrusted input until reviewed
## Escalate When You See
- Credentials passed through URL parameters
- Auth bypass conditions (skip check if dev mode)
- Generated code writing to the filesystem without validation
- Public routes returning user records without field filtering
```
Drop this into your .cursorrules file or save it as security-review.md in your .cursor/skills/ folder. Every agent session now runs with this checklist as active context. The agent starts catching its own issues before you review the diff.
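The OAuth state check is the item teams most often ask me to make concrete. Here is a minimal sketch of what the checklist means — the in-memory store is an assumption for illustration; a real app would tie the state value to the user's session:

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// In-memory store of states we have issued. Assumption for this sketch;
// production code would keep this in the user's server-side session.
const pendingStates = new Set<string>();

// Step 1: before redirecting to the OAuth provider, mint a random state
// value and remember it.
function createState(): string {
  const state = randomBytes(16).toString("hex");
  pendingStates.add(state);
  return state;
}

// Step 2: on the callback, the returned state must match one we issued.
// It is single-use, so a replayed or forged callback fails — this is the
// CSRF protection the happy-path code the AI wrote leaves out.
function consumeState(returned: string | undefined): boolean {
  if (!returned) return false;
  for (const stored of pendingStates) {
    const a = Buffer.from(stored);
    const b = Buffer.from(returned);
    if (a.length === b.length && timingSafeEqual(a, b)) {
      pendingStates.delete(stored);
      return true;
    }
  }
  return false;
}
```

An agent with the skill file in context will generate the callback with a check like this instead of accepting whatever state the provider echoes back.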
What This Looks Like in Practice
When I engage with a company as a fractional CTO and they are running vibe coding platforms, the first two hours follow the same script. Read the repo. Find the secrets in the wrong places. Find the public routes. Find the auth flow that works for the happy path and breaks on edge cases.
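That first-pass sweep for secrets in the wrong places can itself be a few lines of code. This is an illustrative sketch, not a real ruleset — the patterns below are examples, and dedicated scanners like gitleaks or trufflehog cover far more:

```typescript
// A handful of regexes catching the most common hardcoded-credential
// shapes. Illustrative only; real scanners maintain hundreds of rules.
const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["stripe-style live key", /sk_live_[0-9a-zA-Z]{10,}/],
  ["aws access key id", /AKIA[0-9A-Z]{16}/],
  ["generic inline assignment", /(api[_-]?key|secret|token)\s*[:=]\s*["'][^"']{12,}["']/i],
];

// Returns the labels of every pattern that matches the given source text.
function findSecrets(source: string): string[] {
  const hits: string[] = [];
  for (const [label, pattern] of SECRET_PATTERNS) {
    if (pattern.test(source)) hits.push(label);
  }
  return hits;
}
```

Run over every tracked file, this flags the diffs an engineer accepted without reading; code that reads `process.env` instead of an inline literal passes clean.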
Then we build the skill file together. Then it goes into the PR checklist. Then the team stops making those specific mistakes because the agent now has context that redirects it.
The Vercel incident was excessive third-party permissions. The Lovable incident was an expectation gap. Both are solvable with process, not by abandoning the tooling. Vibe coding platforms are useful. The engineers who get good at AI-native development ship features in hours that used to take sprints.
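Scoped keys and signed webhooks are the process fix for the first kind of incident. A minimal sketch of shared-secret webhook verification — the header name and hashing scheme vary by provider, so treat the specifics here as assumptions:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC of the raw request body with the shared secret and
// compare it to the signature the sender attached. SHA-256 and a hex
// signature are assumptions; check your provider's docs for the actual
// scheme and header name.
function verifyWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

A third-party tool that cannot produce a valid signature cannot inject events into your system, no matter how broad its other permissions are.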
The engineers who get hurt are the ones who treat speed as a complete strategy.
Get the Full Security Skill File
I posted the complete setup on LinkedIn — the PR checklist template and a Claude prompt for automated security scanning of existing codebases. Comment "Guide" and I'll DM you the full package directly.
Work With Me
I help engineering orgs adopt AI across their teams — not just in the code, but in how product, support, and ops work too. If you want your team moving faster without the security exposure that comes with unreviewed speed, let's talk.
Kris Chase
@krisrchase