
GitHub Copilot Started Training on Your Code Today. Here Is the Policy Template Your Team Needs.

As of April 24, 2026, GitHub Copilot opted in Free, Pro, and Pro+ users to AI training on their code. Most engineering teams found out on Twitter. Here is the audit checklist and policy template to close the governance gap.


As of April 24, 2026, GitHub Copilot automatically opted in Free, Pro, and Pro+ users to AI training on their code. Prompts, suggestions, private repo context — all fair game unless you found the toggle and turned it off. Business and Enterprise tiers are exempt. Privacy is now a paid feature.

Most engineering teams learned about this on Twitter, not from their CTO.

That is the governance gap.

What Actually Happened

GitHub announced that starting today, user-generated data — your prompts, the code Copilot suggests, your repo context — feeds into model training. The default is opted in. The opt-out toggle is buried in account settings under Copilot preferences.

The business logic is straightforward from GitHub's side: they need training data to improve the model. But this is a policy decision that affects every developer on your team, and it arrived as a settings change, not a conversation.

Most engineering orgs have no documented position on this, no audit of which team members are on which Copilot plan, and no communication to the team about where leadership stands.

That is the norm, not the exception. In 27 years of building and leading engineering teams, the same pattern repeats: a tool changes the rules of the relationship, and orgs scramble to catch up two weeks later.

Three Things to Do Before End of Day

1. Audit your team's Copilot plan tier.

Free, Pro, and Pro+ users are opted in. Business and Enterprise are not. If you have a mixed team — some on individual plans, some on org plans — you likely have developers already opted in without knowing it.

Run a quick inventory. If you work across multiple companies as a fractional CTO or engineering lead, this question belongs at every engagement.
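If your org manages Copilot seats centrally, GitHub's REST API exposes them at `GET /orgs/{org}/copilot/billing/seats` (admin token required). Here is a minimal sketch that tallies seats by tier from that payload — note the `plan_type` field is an assumption about what your org's response includes, so check the actual JSON, and remember that developers on individual Free/Pro/Pro+ plans will not appear in org seat data at all. Those are exactly the people you need to ask directly.

```python
# Sketch: tally Copilot seats by plan tier from the response of
# GET /orgs/{org}/copilot/billing/seats. Fetching is left to `gh api`
# or your HTTP client of choice; this only summarizes the payload.
# The `plan_type` key is an assumption -- verify against your org's
# actual response. Seats missing a tier get flagged for follow-up.
from collections import Counter

def summarize_seats(seats: list[dict]) -> Counter:
    """Count seats per plan tier; missing tiers are counted as 'unknown'."""
    return Counter(seat.get("plan_type", "unknown") for seat in seats)

# Hypothetical sample payload for illustration.
sample = [
    {"assignee": {"login": "alice"}, "plan_type": "business"},
    {"assignee": {"login": "bob"}, "plan_type": "enterprise"},
    {"assignee": {"login": "carol"}},  # no tier reported -> ask directly
]

print(summarize_seats(sample))
```

Anyone who shows up as "unknown" here, or who does not show up at all, is a candidate for being silently opted in on an individual plan.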

2. Make the call and document it.

There is a right answer for your org, but it depends on context. A team building internal tooling at a startup has different risk calculus than a team building regulated financial infrastructure.

Your options:

  • Opt out for all team members on individual plans (recommended for most orgs)
  • Opt in for non-sensitive projects, opt out for sensitive work
  • Migrate developers handling sensitive code to Business or Enterprise tier

Pick one. Write it down. Put it in your engineering handbook under AI Tooling Policy.

3. Tell your team before they find out on their own.

The worst version of this: an engineer notices they were opted in, asks why no one told them, and you have no good answer. A two-paragraph Slack message from engineering leadership closes that loop before it opens.

Here is the communication template:

## AI Tooling Update — GitHub Copilot Training Policy

Effective April 24, GitHub Copilot automatically opted in Free/Pro/Pro+ users
to AI model training on user data (prompts, suggestions, repo context).
Business and Enterprise tiers are exempt.

Our policy: [opt-out for all / opt-in with restrictions / Enterprise tier mandate]

What you need to do: [specific steps if action required]

Why this matters: your code contributions are part of what you build here.
We decide how that data gets used — you should not have to guess.

Questions? Reply here or DM me.

Adapt the middle section to your org's decision. Send it. Done.

The Broader Pattern

This is not an isolated GitHub story. It is a preview of where the industry is heading.

AI tools are integrating deeper into how teams build software. Every layer of that integration carries a data relationship: what the tool sees, what it stores, what it trains on. The orgs building governance frameworks now will have playbooks when the next change arrives. The ones ignoring it will keep scrambling.

Being AI-native does not mean accepting every default. It means knowing what each tool does with your data and having a deliberate position on it.

Get the Full Policy Template

I published a detailed breakdown on LinkedIn, including the complete AI tooling policy template and the audit checklist I use with new engineering engagements.

Comment "Guide" on that post and I'll DM you the complete template directly.

Work With Me

I help engineering orgs adopt AI across their teams — not just in the code, but in how product, support, and operations work too. If you want your org moving faster without growing headcount, let's talk.