
Vibe Coding Is a 10% Solution Being Sold as 100%

Chamath is right: 90% of code is maintenance. AI vibe coding solves 10%. Here is what actually works.


Chamath made a claim this week that broke the AI-coding discourse in half: 90% of a company's code is maintenance, refactoring, and dealing with legacy systems. Vibe coding — where you describe what you want and Claude or Cursor generates it — solves maybe 10% of that. The greenfield work.

That is not a flaw in the tooling. That is a reality check about what engineering actually is.

If your team is mostly building new features in clean codebases, vibe coding is a force multiplier. If your team is shipping updates to a system built three years ago by five different contractors, vibe coding is a liability.

Why Vibe Coding Struggles with Real Code

The premise of vibe coding is elegant: describe the feature, let the AI write it, ship it. This works when the AI can see the entire context and the system has a clean boundary.

Real production systems do not work that way. They have:

  • Implicit contracts. A function signature tells you what it takes and returns. What it does to the surrounding system requires tribal knowledge. AI sees the function. It does not see the pattern.
  • Scattered consistency rules. Every mature codebase has conventions that are not enforced by the type system. Naming. Logging. Error handling. The AI writes code that looks right but breaks consistency in a way that breaks the next person's flow state.
  • Technical debt as load-bearing architecture. The system works because of a trade-off made two years ago when you needed to ship faster. The AI does not know that trade-off exists. It optimizes locally and breaks globally.
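An implicit contract is easiest to see in code. Here is a hypothetical sketch; every name in it is invented for illustration, not taken from any real codebase:

```typescript
// Hypothetical sketch: a function whose signature hides a load-bearing side effect.
// All names (updateUserEmail, searchIndex, auditLog) are invented for illustration.

type User = { id: string; email: string };

const db = new Map<string, User>();
const searchIndex = new Map<string, string>(); // email -> userId, kept in sync on every write
const auditLog: string[] = [];

// The signature says "update an email". The implicit contract (every write path
// must also update the search index and append an audit entry) lives only in the
// bodies of the other write paths. An AI generating a new write path from the
// signature alone will skip both, and search results quietly go stale.
function updateUserEmail(userId: string, email: string): User {
  const user = db.get(userId);
  if (!user) throw new Error(`no such user: ${userId}`);
  searchIndex.delete(user.email);          // invisible from the call site
  user.email = email;
  searchIndex.set(email, userId);
  auditLog.push(`email-change:${userId}`); // required by compliance, not by the types
  return user;
}
```

Nothing in the type system forces the next write path to do the same bookkeeping. That knowledge is tribal, and tribal knowledge is exactly what the model cannot see.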

Veracode's 2025 analysis found that 45% of AI-generated code introduced a security vulnerability. That number is not surprising. Vulnerabilities are almost always the result of breaking a pattern or assumption that was not visible in the immediate context. AI sees the code. It does not see the contract.

What Actually Works: Guided Generation, Not Vibe Generation

The teams I work with that succeed with AI coding are not using it for vibe-based generation. They are using it for guided generation.

The difference sounds small. The outcomes are not.

Vibe generation:

  • "Write a function that takes a user ID and returns their profile"
  • AI generates something reasonable
  • It does not fit the codebase's error handling, does not follow the logging pattern, and introduces a race condition with an existing cache

Guided generation:

  • "Write a function that takes a user ID and returns their profile. Use the shared error handler in util/errors, log using the winston config from index.js, and check the Redis cache first — here is the existing cache-check function"
  • AI generates something that fits
  • It passes review because it is consistent with the system

This is not vibe coding. This is scaffolding with AI assistance.
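Here is a sketch of what the guided version might produce. The helpers (AppError, logger, cacheGet) are hypothetical stand-ins for the real util/errors module, winston config, and Redis cache-check function named in the prompt, stubbed here so the example runs on its own:

```typescript
// Hypothetical stand-ins for the real modules the prompt points at:
// util/errors, the winston config from index.js, and the Redis cache check.
class AppError extends Error {
  constructor(public code: string, message: string) { super(message); }
}
const logger = { info: (msg: string) => console.log(`[info] ${msg}`) };

const cache = new Map<string, unknown>(); // stands in for Redis
async function cacheGet<T>(key: string): Promise<T | undefined> {
  return cache.get(key) as T | undefined;
}

type Profile = { id: string; name: string };
const profileTable = new Map<string, Profile>([["u1", { id: "u1", name: "Ada" }]]);

// The guided-generation result: the same shape as every other read path in
// the system. Cache first, shared error type, shared logger.
async function getProfile(userId: string): Promise<Profile> {
  const cached = await cacheGet<Profile>(`profile:${userId}`);
  if (cached) {
    logger.info(`profile cache hit: ${userId}`);
    return cached;
  }
  const profile = profileTable.get(userId);
  if (!profile) throw new AppError("PROFILE_NOT_FOUND", `no profile for ${userId}`);
  cache.set(`profile:${userId}`, profile);
  return profile;
}
```

The code itself is unremarkable. That is the point: it looks like the rest of the system, so the reviewer can verify it quickly.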

The Practical System Prompt for Your Team

Here is what I use with teams who are doing this right. Save it and adapt it for your codebase:

You are a code assistant working on a mature production system.
Your job is not to be creative — it is to be consistent and safe.

Before writing any code:
1. Identify the closest existing implementation in the codebase for what the user is asking for
2. Match its patterns exactly: error handling, logging, naming, structure
3. Call out any assumptions you are making about contracts or implicit behavior
4. Ask clarifying questions if the request conflicts with existing patterns

When reviewing your own code before submitting:
1. Does this follow the error handling pattern in util/errors.ts?
2. Does this use the logging setup from the config?
3. Does this respect the caching strategy we use?
4. What assumption am I making that is not obvious from the code?
5. Could this break something downstream?

If the answer to any of these is "I am not sure," ask the user before proceeding.

This is not a prompt that generates code faster. It is a prompt that generates code you can actually ship.
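One way to adapt it without letting it rot: generate the prompt from your stack's specifics, so the checklist always names real files. This is a hypothetical sketch; the interface and field names are invented:

```typescript
// Hypothetical sketch: build the system prompt from your codebase's conventions
// so the checklist stays accurate as files move. All field names are invented.
interface CodebaseConventions {
  errorModule: string;   // e.g. "util/errors.ts"
  loggerConfig: string;  // e.g. "the winston config in index.js"
  cacheStrategy: string; // e.g. "check Redis before hitting the database"
}

function buildSystemPrompt(c: CodebaseConventions): string {
  return [
    "You are a code assistant working on a mature production system.",
    "Your job is not to be creative; it is to be consistent and safe.",
    "",
    "When reviewing your own code before submitting:",
    `1. Does this follow the error handling pattern in ${c.errorModule}?`,
    `2. Does this use the logging setup from ${c.loggerConfig}?`,
    `3. Does this respect our caching strategy: ${c.cacheStrategy}?`,
    "4. What assumption am I making that is not obvious from the code?",
    "5. Could this break something downstream?",
    "",
    'If the answer to any of these is "I am not sure," ask the user before proceeding.',
  ].join("\n");
}
```

Drop the output into whatever your tooling reads as project rules or a skill file, and regenerate it when the conventions change.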

The Real Skill You Need: Verification, Not Generation

Here is the thing that changed how I think about AI coding:

The bottleneck is not generation speed. The bottleneck is verification speed.

A senior engineer can write code at 10 lines per minute when they are in flow. Claude can write 100 lines per minute. But that senior engineer can verify 5 lines per minute because they have to understand the implications across the system.

So if you are using AI to generate code without guidance, your shipping rate is capped by that 5 lpm verification rate, half the 10 lpm the engineer manages writing by hand. You are going slower.
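The arithmetic is worth making explicit: your shipping rate is the minimum of the pipeline's stage rates, and for unguided AI output the slow stage is verification. A sketch with the numbers above (the guided-review figure is a hypothetical assumption, not a measurement):

```typescript
// Effective throughput is the minimum of the pipeline's stage rates.
// All rates in lines per minute (lpm).
const effectiveRate = (...stageRates: number[]) => Math.min(...stageRates);

const seniorAlone = effectiveRate(10);     // writes and understands as they go
const unguidedAI  = effectiveRate(100, 5); // generates fast, verified slowly
const guidedAI    = effectiveRate(100, 20); // hypothetical: constraints make review cheaper

console.log({ seniorAlone, unguidedAI, guidedAI });
// { seniorAlone: 10, unguidedAI: 5, guidedAI: 20 }
```

Guided generation does not win by making the model generate faster. It wins by making the human side of the pipeline, the verification, cheaper.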

The teams that are actually moving faster are the ones who:

  1. Write the AI a tight constraint (guided generation, not vibe generation)
  2. Have the AI generate code that respects those constraints
  3. Ship it with confidence because verification is built into the constraints

This is why Cursor with skill files works. Why Claude Code works when its routines are structured properly. Why vibe coding without scaffolding does not.

How to Talk About This With Your Team

If your team is frustrated with AI coding ("We just spend time fixing bad code"), they are probably doing vibe coding.

If your team is moving faster, they are probably doing guided generation with tight constraints.

The conversation is not "Do we use AI coding?" It is "What constraints do we add to make sure the AI is solving our actual problem?"

That is the conversation that matters.

Get the Full System Prompt Template

I built a more detailed system prompt configuration that includes checks for your specific tech stack, caching layers, and error patterns. It is a skill file that you can load into Cursor, Claude Code, or any AI coding setup.

Comment "Guide" on my LinkedIn post about this and I'll DM you the full template.


Work With Me

I help engineering orgs adopt AI across their entire team — not just in the code, but in how product, support, and operations work too.

If you want your team shipping faster without growing headcount, and you want to do it in a way that does not break under the weight of real systems, let's talk.