
Should You Fire Engineers Who Refuse to Use AI? You're Asking the Wrong Question.

60% of executives plan to lay off employees who won't use AI. 48% call AI adoption a massive disappointment. Those two stats are not unrelated.


60% of executives say they plan to lay off engineers who won't adopt AI tools. 77% say those employees won't be considered for promotions. These numbers are real, the pressure is real, and the instinct behind them makes sense.

Here's the stat nobody leads with: 48% of those same executives call AI adoption a "massive disappointment."

The organizations forcing adoption through ultimatums are the same ones calling it a failure. That's not a coincidence.

The Actual Problem

Engineers who resist AI tools usually fall into one of two categories:

  1. They haven't seen it work in a way that's relevant to their actual job yet
  2. They've been burned by overhyped tooling before and are waiting for real proof

Neither of those is fixed by firing someone.

The "fire them" instinct treats resistance as defiance. In most cases it's skepticism — and skepticism from experienced engineers is often the most valuable signal you have. They've seen Agile mandates, microservices mandates, blockchain mandates. They're applying pattern matching, not malice.

Coercing them into compliance gets you checkbox adoption: engineers who open Cursor, accept suggestions without reading them, and ship worse code than before. You'll see the metrics you want in your dashboard and the consequences three months later in your incident log.

What Actually Works

I've watched engineering orgs go AI-native across 27 years of building teams. The ones that do it well don't mandate anything. Here's the framework:

1. Put one true believer in a visible position

Find the engineer on your team most excited about AI tooling. Give them a real project — not a sandbox, not a PoC. Let them work on something that ships. Make the results visible in your next engineering sync.

When skeptical engineers see a colleague who genuinely builds 2-3x faster and ships cleaner PRs, the conversation changes. Proximity to real wins converts faster than any policy.

2. Define what "good AI adoption" actually looks like

Most teams skip this. They buy Cursor licenses and assume adoption happens. It doesn't, because there's no shared definition of success.

Write a one-page "AI usage standard" for your team. Something like:

# AI Usage Standards — Engineering Team

## Expected:
- Cursor (or equivalent) for all new code generation
- AI-assisted PR review enabled on all repos
- Prompt templates checked into /docs/prompts/ for common tasks

## Review requirements for AI-generated code:
- All AI-generated functions reviewed by the engineer before commit
- Any AI-generated SQL must be explicitly tested with real data
- No AI-generated auth or security logic ships without senior review

## Not required:
- Using AI for debugging (personal preference)
- AI for documentation (personal preference)

This removes ambiguity. Engineers who resist now have something concrete to respond to, not a vague edict.

3. Track the right metrics

If your only AI adoption metrics are "how many engineers have Cursor" and "lines of code per day," you're measuring outputs, not outcomes.

Add:

  • Mean time to resolve production incidents (MTTR) — does it go up or down after AI adoption?
  • PR defect rate — AI-assisted PRs vs. baseline
  • Time from ticket creation to shipped feature (cycle time)

If those numbers improve, AI adoption is real. If they don't, you have a tooling or training problem, not a resistance problem.
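To make the comparison concrete, here's a minimal sketch of how you might compute two of these metrics from PR records. The data shape and field names are hypothetical, invented for illustration; in practice you'd pull this from your issue tracker or deployment logs.

```python
from statistics import mean

# Hypothetical PR records: (ai_assisted, caused_defect, cycle_time_days).
# Replace with real data joined from your VCS and incident tracker.
prs = [
    (True, False, 2.1),
    (True, True, 1.4),
    (True, False, 1.8),
    (False, False, 3.5),
    (False, True, 4.0),
    (False, True, 3.2),
]

def defect_rate(records):
    """Fraction of PRs in this group later linked to a production defect."""
    return sum(1 for _, defect, _ in records if defect) / len(records)

def avg_cycle_time(records):
    """Mean days from ticket creation to shipped feature."""
    return mean(days for _, _, days in records)

ai = [p for p in prs if p[0]]
baseline = [p for p in prs if not p[0]]

print(f"Defect rate: {defect_rate(ai):.0%} (AI) vs {defect_rate(baseline):.0%} (baseline)")
print(f"Cycle time:  {avg_cycle_time(ai):.1f}d (AI) vs {avg_cycle_time(baseline):.1f}d (baseline)")
```

The point isn't the script; it's that the comparison is per-cohort, not team-wide. A single aggregate number hides whether AI-assisted work is actually better or worse than your baseline.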

4. Use a skill file, not a mandate

A skill file gives every engineer a repeatable, team-aligned starting point for prompting. Instead of telling people "use AI more," give them this:

# Skill: Code Review Preparation
You are a senior engineer on a TypeScript/React codebase reviewing your own PR before requesting review.

For each changed file:
1. Identify any functions over 50 lines and suggest extraction
2. Flag any missing error handling
3. Check for console.log or debug artifacts left in
4. Note any type assertions (as X) that should be replaced with proper typing
5. Identify tests that should exist but don't

Output as a numbered checklist. Be specific.

When adoption starts with a concrete artifact engineers can actually use, resistance drops fast. People don't resist useful things.

The Question Worth Asking

"Should I fire engineers who refuse AI?" is a frustration question. It surfaces when adoption isn't going well and you want a lever to pull.

The better question: Have you made it embarrassingly easy to start? Is there one engineer on your team using AI in production where others can see the output? Do you have a skill file library they can pull from without having to invent their own prompts?

The teams moving fastest on AI aren't the ones with the strictest policies. They're the ones with the clearest path from "I want to try this" to "I shipped something real with it."

Coercion gets compliance. A clear path gets adoption.

Work With Me

I help engineering orgs adopt AI across their entire team — not just the code, but how product, support, and operations work too. If you want your org moving faster without growing headcount, let's talk.