Engineering Leadership · Engineering Reporting · Executive Dashboards · Claude Skills · GitHub Analytics

I Stopped Writing Weekly Engineering Status Updates by Hand

The reporting-side system I use to pull GitHub, Asana, Linear, Jira, Slack, and email activity into project-level decks plus an executive rollup every week.


Your teams are not failing because they lack output. They are failing because leadership cannot see output clearly enough, quickly enough, and consistently enough across tools.

If you are overseeing multiple teams, you are probably dealing with some version of this:

  • πŸ”Ή One team in Asana, another in Jira, another in Linear
  • πŸ”Ή Multiple GitHub orgs and inconsistent PR discipline
  • πŸ”Ή Slack threads that hide important blockers in plain sight
  • πŸ”Ή Email updates that are valuable but impossible to aggregate manually
  • πŸ”Ή Friday status reports that are always late and always incomplete

That is not a people problem. It is an aggregation problem.

I built a weekly reporting pipeline that runs this as a system:

  • βœ… Collects activity across engineering systems
  • βœ… Normalizes it by team, project, owner, and time window
  • βœ… Detects what shipped, what moved, what stalled, and what is at risk
  • βœ… Builds one deck per project plus an executive rollup

The point is not to create more reporting. The point is to eliminate manual reporting assembly while improving decision quality.

What this reporting pipeline actually pulls every week

At minimum, I want these evidence streams in one normalized dataset:

  • πŸ”Ή GitHub: PRs opened, merged, and reviewed; stale PRs; and PR cycle time
  • πŸ”Ή Asana, Jira, and Linear: tickets completed, in progress, blocked, moved, and newly created
  • πŸ”Ή Slack: high-signal threads tied to delivery risk, dependencies, or decisions
  • πŸ”Ή Email: stakeholder escalation or dependency updates that affect delivery confidence
  • πŸ”Ή Team signals: review lag, throughput, and delivery variance by project

Optional streams I add when available:

  • πŸ”Ή Meeting transcripts tagged by project
  • πŸ”Ή AI-assisted coding adoption metrics by engineer or team
  • πŸ”Ή Incident or on-call overlays that explain velocity dips

Why this works: status updates are deterministic synthesis

Most weekly updates follow the same structure every time:

  • βœ… What shipped
  • βœ… What changed
  • βœ… What is blocked
  • βœ… What is likely to slip
  • βœ… What leadership should decide next

That structure is deterministic. The hard part is data collection and normalization, not slide writing.

Once the pipeline assembles evidence, the report and deck generation become straightforward and repeatable.
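Because the structure is deterministic, the assembly step can be plain code. A minimal sketch (the section names mirror the list above; the findings shape is illustrative):

```python
# Fixed section order mirrors the weekly update structure above.
SECTIONS = [
    "What shipped",
    "What changed",
    "What is blocked",
    "What is likely to slip",
    "Decisions needed",
]

def render_update(findings: dict) -> str:
    """Deterministically render a weekly update from computed findings.

    Section order never varies; sections with no data are labeled
    explicitly instead of being silently dropped.
    """
    lines = []
    for section in SECTIONS:
        lines.append(f"## {section}")
        items = findings.get(section) or ["(no data - flagged under Data Gaps)"]
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

update = render_update({"What shipped": ["Checkout v2 merged"]})
```

The only judgment in the output is in the findings themselves; the rendering never improvises.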

System architecture: from connectors to deck

This is the architecture I use conceptually. You can implement it with MCP connectors, API scripts, or both.

1) Collector layer

Collector tasks fetch last-7-day activity from:

  • πŸ”Ή GitHub orgs and repos
  • πŸ”Ή PM tools (Asana, Jira, Linear)
  • πŸ”Ή Communication tools (Slack, Gmail)

Collectors should store raw records with:

  • βœ… Source system
  • βœ… Source identifier
  • βœ… Timestamp
  • βœ… Actor
  • βœ… Project mapping key
  • βœ… Raw payload
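As a sketch, a raw record carrying those six fields might be modeled like this (names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RawRecord:
    """One raw event as a collector stores it, before normalization."""
    source_system: str   # "github", "jira", "slack", ...
    source_id: str       # stable identifier in the source system
    timestamp: datetime
    actor: str
    project_key: str     # mapping key resolved from collector config
    payload: dict = field(default_factory=dict)  # untouched raw API response

# Example: a merged-PR event captured by a GitHub collector
record = RawRecord(
    source_system="github",
    source_id="pr-1234",
    timestamp=datetime(2024, 5, 3, 14, 20, tzinfo=timezone.utc),
    actor="jdoe",
    project_key="checkout-revamp",
    payload={"action": "merged", "title": "Fix cart rounding"},
)
```

Keeping the payload untouched means you can re-run normalization later without re-fetching from the APIs.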

2) Normalization layer

Normalize all records into one shape so downstream logic does not care where data came from.

Suggested canonical dimensions:

  • πŸ”Ή project_id
  • πŸ”Ή team_id
  • πŸ”Ή contributor_id
  • πŸ”Ή event_type
  • πŸ”Ή event_status
  • πŸ”Ή event_timestamp
  • πŸ”Ή risk_flag
  • πŸ”Ή confidence_score
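A normalization function over those dimensions might look like the sketch below; the status mapping and the confidence heuristic are assumptions to adapt, not fixed rules:

```python
def normalize(raw: dict, project_map: dict) -> dict:
    """Map one raw collector record into the canonical event shape.

    `raw` carries the collector fields from the previous section; the
    status mapping and confidence heuristic here are illustrative.
    """
    action = raw["payload"].get("action", "event")
    status_map = {"merged": "done", "opened": "in_progress", "blocked": "blocked"}
    mapped = raw["project_key"] in project_map
    return {
        "project_id": project_map.get(raw["project_key"], "unmapped"),
        "team_id": raw.get("team_id", "unknown"),
        "contributor_id": raw["actor"],
        "event_type": f"{raw['source_system']}:{action}",
        "event_status": status_map.get(action, "unknown"),
        "event_timestamp": raw["timestamp"],
        "risk_flag": action == "blocked",
        "confidence_score": 1.0 if mapped else 0.5,  # lower when mapping fails
    }

event = normalize(
    {
        "source_system": "github",
        "actor": "jdoe",
        "project_key": "checkout-revamp",
        "timestamp": "2024-05-03T14:20:00Z",
        "payload": {"action": "merged"},
    },
    project_map={"checkout-revamp": "proj-checkout"},
)
```

Unmapped records get a reduced confidence score instead of being dropped, so mapping gaps surface in the report rather than disappearing.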

3) Signal and risk engine

This is where raw events become decision-ready insights:

  • βœ… Detect stale PRs and review bottlenecks
  • βœ… Detect blocked tickets with no owner movement
  • βœ… Detect scope churn within a sprint window
  • βœ… Detect delivery risk based on movement and lag patterns
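The first of those checks, stale PR detection, can be a simple threshold over canonical events. A sketch (the three-day threshold and the event fields are assumptions to tune):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=3)  # illustrative; tune to team review norms

def find_stale_prs(events: list, now: datetime) -> list:
    """Flag open PRs whose last observed activity exceeds the threshold.

    `events` are canonical records from the normalization layer, one per
    open PR, with `event_timestamp` as the last observed activity.
    """
    stale = []
    for e in events:
        if e["event_type"] != "github:pr_open":
            continue
        if now - e["event_timestamp"] > STALE_AFTER:
            stale.append({**e, "risk_flag": True})
    return stale

now = datetime(2024, 5, 10, tzinfo=timezone.utc)
events = [
    {"event_type": "github:pr_open", "risk_flag": False,
     "event_timestamp": datetime(2024, 5, 2, tzinfo=timezone.utc)},  # 8 days old
    {"event_type": "github:pr_open", "risk_flag": False,
     "event_timestamp": datetime(2024, 5, 9, tzinfo=timezone.utc)},  # 1 day old
]
stale = find_stale_prs(events, now)  # only the 8-day-old PR is flagged
```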

4) Narrative engine

The narrative engine creates:

  • πŸ”Ή Executive summary paragraphs
  • πŸ”Ή Project-level summaries
  • πŸ”Ή Contributor-level highlights
  • πŸ”Ή Risk callouts with evidence links

5) Deck generator

Final outputs:

  • βœ… One project deck per team stream
  • βœ… One consolidated executive deck
  • βœ… Optional machine-readable report object for archival and automation
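The machine-readable report object can be a plain JSON document archived next to the decks. An illustrative shape (all keys and values here are examples, not a spec):

```python
import json

# Illustrative archival report object; keys and values are examples only.
report = {
    "window": {"start": "2024-05-03", "end": "2024-05-10"},
    "projects": [
        {
            "project_id": "proj-checkout",
            "shipped": 7,
            "carryover": 2,
            "risk": "Medium",
            "evidence": ["github:pr-1234", "jira:CHK-88"],
        },
    ],
}

serialized = json.dumps(report, indent=2)  # written alongside the decks
```

A stable archival shape is what makes week-over-week trend charts and automation possible later.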

Paste-ready Claude skill template for weekly reporting

Below is a complete skill template you can drop into your skills directory and adapt.

---
name: weekly-engineering-reporting-deck
description: Aggregate last-7-day engineering activity across GitHub, Asana, Jira, Linear, Slack, and email; normalize events; generate project decks and an executive rollup with risks, velocity, and ownership insights.
---

# Weekly Engineering Reporting Deck

Use this skill when leadership needs a weekly status deck generated from connected systems without manual status chasing.

## Primary Objective

Create:
- one deck per project
- one executive rollup deck
- a traceable evidence appendix with source links

## Required Inputs

- reporting window (default last 7 days)
- list of repositories or orgs
- list of PM projects (Asana, Jira, Linear)
- optional Slack channels
- optional email filters
- project-to-repo mapping rules

## Workflow

1) Gather data from connectors
- Pull PR, issue, review, and merge data from GitHub
- Pull ticket states and transitions from Asana/Jira/Linear
- Pull high-signal Slack/email artifacts tied to delivery

2) Normalize and map
- Map all events to project and team
- Deduplicate overlapping events
- Preserve source links for evidence

3) Compute metrics
- Throughput by project and contributor
- PR cycle time and review lag
- Ticket completion and carryover
- Blocker duration and aging
- Scope churn indicators

4) Detect risks
- Stale PRs
- Blocked tasks without movement
- Projects with widening completion variance
- Teams with growing review queues

5) Generate outputs
- Executive deck
- Project decks
- Evidence appendix

## Output Contract

Each deck must include:
- Executive summary
- What shipped this week
- What moved to next week
- Current risks with owner and mitigation
- Metrics and trend visuals
- Decisions needed from leadership

## Quality Bar

- Every risk claim must cite evidence
- Every project must include owner-level accountability
- Unknown data must be labeled explicitly
- Assumptions must be tagged as assumptions

Dynamic operator prompt you can paste directly

Use this as your primary weekly run prompt. It is strict by design so output quality stays high.

Use the weekly-engineering-reporting-deck workflow.

Reporting window:
- Start: {{REPORT_START_ISO}}
- End: {{REPORT_END_ISO}}
- Timezone: {{TIMEZONE}}

Data sources:
- GitHub orgs/repos: {{GITHUB_TARGETS}}
- Asana projects: {{ASANA_PROJECTS}}
- Jira projects or boards: {{JIRA_PROJECTS}}
- Linear teams/projects: {{LINEAR_TARGETS}}
- Slack channels: {{SLACK_CHANNELS}}
- Email filters: {{EMAIL_FILTERS}}

Project mapping rules:
- {{PROJECT_MAPPING_RULES}}

Leadership audience:
- Primary audience: {{AUDIENCE}} (fractional CTO, VP Eng, founders, PMO, etc.)
- Emphasis: {{EMPHASIS}} (delivery confidence, risk, velocity, staffing, dependency management)

Required outputs:
1) Executive rollup deck
2) One deck per project stream
3) Data appendix with source links and assumptions
4) Machine-readable metrics table (CSV or JSON in a markdown code block)

Deck requirements:
- Include slide-by-slide content with exact headings
- Include chart recommendations and chart-ready data tables
- Include owner-level accountability and movement
- Include what completed, what moved, what is blocked, what is at risk
- Include stale PRs and review lag analysis
- Include ticket state transitions and completion trends
- Include summary of high-signal Slack/email decisions affecting delivery

Risk model requirements:
- Classify each project risk as Low, Medium, High
- Explain why, with evidence
- Provide mitigation owner and expected resolution date

Data quality requirements:
- If a connector is unavailable, state it clearly under Data Gaps
- Mark any inferred statement with [Assumption]
- Never fabricate missing values

Final format:
- Section A: Executive Summary (1 page equivalent)
- Section B: Portfolio Analytics (cross-project tables and charts)
- Section C: Project Decks (one subsection per project)
- Section D: Risks and Escalations
- Section E: Decisions Needed This Week
- Section F: Evidence Appendix

Now execute end-to-end and return:
- the complete deck content in markdown
- a concise email-ready summary for leadership
- a short Slack-ready summary for each project channel

Example prompt for a concrete scenario

Here is the same prompt filled in for a single repo and project, structured for repeatability:

Use the weekly-engineering-reporting-deck workflow for:
- GitHub: github.com/ligonier/ligonier-app
- Asana project: Ligonier Engineering (project id 1211568568272163)
- Window: last 7 days

Generate:
- executive summary
- completed work
- status changes
- items moved or blocked
- pull requests completed, closed, or stale
- risk callouts with owners
- chart-ready data tables
- a slide-by-slide deck outline with visual recommendations

Also include:
- who did what by contributor
- cycle time, review lag, and carryover trends
- decisions leadership needs to make this week

If any claim cannot be verified from connected systems, mark it as [Assumption].

Deck structure that leadership can consume in five minutes

When I generate decks, I force this structure:

Slide 1: Executive rollup

  • βœ… What shipped this week
  • βœ… What is most at risk next week
  • βœ… One-sentence confidence level per project

Slide 2: Portfolio scoreboard

  • πŸ”Ή Throughput by project
  • πŸ”Ή Completion versus carryover
  • πŸ”Ή PR median cycle time
  • πŸ”Ή Review lag percentile
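The scoreboard metrics drop straight out of the standard library. A sketch with made-up sample data; in practice the inputs come from the metrics layer:

```python
from statistics import median, quantiles

# Sample data in hours; in practice these come from the metrics layer.
cycle_hours = [4.0, 6.0, 12.0, 30.0, 48.0]  # PR open -> merge
review_lags = [1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 5.0, 6.0, 8.0, 12.0]  # request -> first review

median_cycle = median(cycle_hours)                 # 12.0 hours
p90_review_lag = quantiles(review_lags, n=10)[-1]  # 90th-percentile cut point
```

Median and a high percentile are deliberately chosen over averages: one 48-hour PR should not hide that most reviews land within a day.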

Recommended visuals:

  • πŸ”Ή Grouped bar chart for completed versus carryover
  • πŸ”Ή Line chart for cycle time trend
  • πŸ”Ή Heat map for risk by project and team

Slides 3-N: Project-level status

For each project:

  • βœ… Completed items with ticket and PR evidence
  • βœ… In-progress work with owner and ETA
  • βœ… Blockers with dependency source
  • βœ… Scope movement from original plan
  • βœ… Risks and mitigation actions

Final slides: decisions and escalations

  • πŸ”Ή Decisions leadership must make this week
  • πŸ”Ή Escalations requiring external coordination
  • πŸ”Ή Data gaps that weaken confidence

Implementation notes and prerequisites

Before running this weekly:

  • βœ… Confirm GitHub connector authentication
  • βœ… Confirm Asana, Jira, and Linear connector scopes
  • βœ… Confirm Slack and Gmail connector access and retention settings
  • βœ… Confirm timezone and reporting window standards
  • βœ… Confirm project mapping tables are up to date
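The project mapping table itself can live as a small checked-in structure. One possible shape, with all names illustrative:

```python
# Illustrative project mapping table: resolves repos, boards, and channels
# to one project_id so the normalization layer stays source-agnostic.
PROJECT_MAP = {
    "proj-checkout": {
        "github_repos": ["acme/checkout-service", "acme/checkout-web"],
        "jira_boards": ["CHK"],
        "slack_channels": ["#eng-checkout"],
        "team_id": "team-payments",
    },
}

def project_for_repo(repo: str) -> str:
    """Reverse lookup: repo name -> project_id, or 'unmapped'."""
    for project_id, cfg in PROJECT_MAP.items():
        if repo in cfg["github_repos"]:
            return project_id
    return "unmapped"
```

Anything that resolves to "unmapped" is a signal the table is stale, which is exactly the kind of data gap the report should surface.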

Security and privacy guardrails:

  • ⚠️ Exclude private HR and legal channels from reporting scope
  • ⚠️ Never include sensitive email content beyond required delivery context
  • ⚠️ Store only minimum evidence needed for traceability
  • ⚠️ Redact personal data in executive-facing outputs when required

Operational caveats:

  • [Assumption] AI adoption metrics require instrumentation that not all teams have
  • [Assumption] Meeting transcript quality depends on capture and tagging consistency
  • [Assumption] Cross-tool identity mapping may require a maintained alias table

Why this matters for fractional CTO work

If you oversee multiple teams across different companies, this system gives you leverage:

  • βœ… You start Friday with aligned context
  • βœ… You see risk before deadlines are missed
  • βœ… You spend less time collecting and more time deciding
  • βœ… You can compare delivery health across unlike tool stacks

The reporting pipeline is not overhead. It is the control plane for engineering leadership at scale.