Eliminating Waste
in the SDLC

What happens when AI can use your tools
Adam Thede

One Developer.
Three Platforms.

I build a personal life-tracking platform. Ruby on Rails backend, iOS app, macOS lifelogging app. One developer. Ten years of building.

10
Years in Development
3
Platforms
5
Data Providers
1
Developer

I don't write much code anymore.

My job today looks more like a conductor, a project manager, and a product owner than a software engineer in the traditional sense.

I think about the product holistically. I write specifications. I build implementation plans. I spin up agents to do the development. I facilitate the pull request review cycle. I merge, deploy, and monitor. The full lifecycle — but I'm orchestrating it, not executing it by hand.

The Toyota Lens

Taiichi Ohno defined seven types of waste — muda — in the Toyota Production System. The genius wasn't the taxonomy. It was the practice:

  1. Stand on the factory floor
  2. Watch the work happen
  3. Identify what doesn't add value
  4. Eliminate it
  5. Repeat

The question isn't

"How do I write code faster?"

It's

"What am I still doing by hand
that a machine could do better?"

I Was the Bottleneck

Not because I was writing code slowly — the AI agents were handling that. I was the bottleneck because I was still the one doing everything else.

Project Management
  • Creating Jira tickets manually
  • Writing descriptions
  • Moving swimlanes
  • Assigning story points

Error Triage
  • Open Sentry
  • Read the stack trace
  • Search Jira for duplicates
  • Search GitHub for PRs

Infrastructure
  • Check GCloud dashboards
  • Monitor Puma health
  • Cross-reference with backlog
  • Write up findings

All motion. No production. Muda.

Tool Handoff

AI can use tools. The same tools I use.

MCP — Model Context Protocol — lets AI agents interact directly with external services. Not through me. Directly.

Claude Code (via MCP Servers)

GitHub   PRs, issues, reviews
Jira     Tickets, sprints, boards
Sentry   Exceptions, triage
Heroku   Deploy, logs, dynos

+ Playwright (browser automation) · GCloud (infrastructure)
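For Claude Code specifically, handing over a tool is a small config entry. A hypothetical sketch of a project-level `.mcp.json` (the server package and env var name are illustrative, not a verified setup):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```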

Entire Categories
of Waste

Project Management
  15 min per ticket × 8 tickets/sprint → Automatic. Better quality.
  MCP → Jira

Error Triage
  15 min manual triage session → Seconds. Cross-referenced.
  MCP → Sentry + Jira + GitHub

Infrastructure
  30 min dashboard review → One command. Full report.
  /devops slash command

The quality of my project management improved when I stopped doing it myself. The AI doesn't forget. It doesn't get lazy at 4pm on a Friday. It does the same thorough job every single time.

Infrastructure Monitoring
Became Conversational

/devops — Morning Audit

1. GCloud health checks
2. Sentry exception scan
3. Jira cross-reference
4. GitHub PR status
5. Create tickets for new issues
6. Markdown report → /docs

GCloud VMs:         HEALTHY
Load Balancer:      OK
Puma Workers:       2/2 running
Sentry (24h):       3 new exceptions
Jira Match:         2 already tracked
GitHub PRs:         1 addresses issue
Action: Created SILO-847 for untracked exception

By the time I've finished my coffee.
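In Claude Code, a custom slash command like `/devops` is just a Markdown prompt file checked into the repo. A hypothetical sketch, with the path following Claude Code's convention and the wording purely illustrative:

```markdown
<!-- .claude/commands/devops.md -->
Run the morning audit:

1. Check GCloud VM, load balancer, and Puma worker health.
2. Scan Sentry for exceptions from the last 24 hours.
3. Cross-reference each exception against open Jira tickets and GitHub PRs.
4. Create a Jira ticket for anything untracked.
5. Write a Markdown report to /docs.
```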

The Expensive Waste

With the operational overhead handled, a different category of waste became visible: building the wrong thing.

I audited 8 pull requests, roughly 250 review comments in total.

Missing Tests
24%
N+1 Queries
14%
SQL Injection
12%
50% of all review comments
from 3 preventable
categories

Half the review cycle was catching preventable mechanical issues. That's waste. But the more expensive waste was in the other 50% — misunderstood requirements, missing edge cases.
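The N+1 shape is easy to see with a toy query counter. This is a plain-Ruby sketch with no database, where each `query` call stands in for one SQL round trip; in ActiveRecord terms the fix is eager loading, e.g. `User.includes(:posts)` instead of looping:

```ruby
queries = []
query = ->(sql) { queries << sql }

user_ids = [1, 2, 3]

# N+1: one query for the user list, then one more per user
query.call("SELECT id FROM users")
user_ids.each { |id| query.call("SELECT * FROM posts WHERE user_id = #{id}") }
n_plus_one_count = queries.size   # 4 queries for 3 users

# Eager load: one query for the list, one batched query for all the posts
queries.clear
query.call("SELECT id FROM users")
query.call("SELECT * FROM posts WHERE user_id IN (#{user_ids.join(', ')})")
eager_count = queries.size        # always 2, no matter how many users
```

The first version scales linearly with the user count; the second stays at two queries.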

Specifications as Contracts

Gherkin acceptance criteria force clarity before code exists. Each scenario maps directly to a test.

Feature: Monthly Usage Report

  Scenario: Admin generates report
    Given I am logged in as an admin
    When I navigate to Reports
    And I select "March 2026"
    Then I see a summary table with active users, signups, and churn
    And the report downloads as CSV

  Scenario: Report with no activity
    Given an account with no activity
    When I generate the February report
    Then I see all values at zero
    And the CSV has headers only

Why This Works

  • Forces clarity — If I can't write the scenario, I don't understand the feature well enough to build it
  • Eliminates ambiguity — The agent builds exactly what I described, not what it guessed I meant
  • Each scenario = one test — The spec and the test suite are the same artifact
  • Lives in the GitHub Issue — The issue IS the specification. No separate docs to sync.
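The scenario-to-test mapping is mechanical enough to sketch in a few lines of plain Ruby. This toy (not the author's actual tooling) derives one test method name per `Scenario:` line in the feature text:

```ruby
feature = <<~GHERKIN
  Feature: Monthly Usage Report
    Scenario: Admin generates report
    Scenario: Report with no activity
GHERKIN

# One Scenario line in, one test name out — the spec and the suite stay in lockstep.
test_names = feature.scan(/^\s*Scenario: (.+)$/).flatten.map do |name|
  "test_" + name.downcase.gsub(/\W+/, "_")
end
# → ["test_admin_generates_report", "test_report_with_no_activity"]
```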

Pre-Flight Checklist

Every rule came from an actual review comment on an actual PR. The checklist lives in the agent's context — it reads it before writing a line of code.

Pre-Push Quality Gate: feature/usage-reports
SQL Injection Scan:      PASS
N+1 Query Detection:     PASS
Test Coverage:           PASS
System Test Exists:      PASS
RuboCop Compliance:      PASS
Test Suite Green:        PASS
Schema Diff:             CLEAN
Overall:                 READY TO PUSH

Where Rules Live

  • CLAUDE.md — 900-line project briefing loaded every session. Contains coding standards, architecture, anti-patterns.
  • Pre-commit hook — RuboCop blocks the commit if it finds offenses. The agent can't skip it.
  • /verify command — Runs the full checklist on demand before any push.
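A `/verify`-style gate can be as simple as a table of named shell commands that must all succeed. This is a hypothetical sketch, not the actual command; the check list and the `runner:` injection point (handy for dry runs) are illustrative:

```ruby
# Each check: a display name and a shell command that must exit successfully.
CHECKS = {
  "RuboCop Compliance" => "bundle exec rubocop",
  "Test Suite Green"   => "bin/rails test",
  "Schema Diff"        => "git diff --exit-code db/schema.rb",
}

# Runs every check, prints a PASS/FAIL line per check, and returns true
# only if all of them passed.
def run_gate(checks, runner: ->(cmd) { system(cmd) })
  results = checks.transform_values { |cmd| runner.call(cmd) }
  results.each { |name, ok| puts format("%-22s %s", "#{name}:", ok ? "PASS" : "FAIL") }
  results.values.all?
end

# run_gate(CHECKS) shells out for real; inject a stub runner to dry-run:
run_gate(CHECKS, runner: ->(_cmd) { true })
```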

The planning phase got longer.
The total cycle got dramatically shorter.

The Cheapest Waste
Elimination I've Found

The Problem

Unit tests pass. Controller tests pass. The agent reports success. You deploy and the page is broken.

  • A misnamed controller
  • A Stimulus identifier that doesn't match
  • A Turbo frame the view wasn't expecting

Everything worked in isolation. Nothing worked in the browser.

The Fix

One system test per feature. Headless Chrome. Happy path end-to-end.

System Test: Usage Reports
Route resolves:        OK
Controller responds:   OK
View renders:          OK
JS initializes:        OK
Form submits:          OK
Happy path:            VERIFIED

Catches in seconds what manual QA catches in hours.
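In Rails, the one-time wiring for this lives in the system test base class. The `driven_by` call below is standard Rails API; the screen size is an arbitrary choice:

```ruby
# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # Boot a real headless Chrome: routes, views, Turbo frames, and
  # Stimulus controllers all execute, unlike isolated unit tests.
  driven_by :selenium, using: :headless_chrome, screen_size: [1400, 1400]
end
```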

Everything Runs
in Parallel

These aren't sequential improvements. They're all running simultaneously, right now, on every feature.

Agent 1: DevOps Audit
  GCloud → Sentry → Jira cross-reference → triage new issues → markdown report

Agent 2: Feature Implementation
  Gherkin spec → implementation with pre-flight checklist → tests → system test

Agent 3: PR Review Cycle
  Copilot review → address comments → resolve threads → post QA checklist

Automated: Sprint Board
  Tickets created → status updated → story points assigned → swimlanes managed

My role: observe the system, identify what doesn't add value, remove it.

Every Elimination
Is Permanent

Each fix is encoded into the system so it never comes back. And each one reveals the next layer of waste that was previously invisible.

Hit a problem → Encode the fix → Next layer visible → Repeat

January    Review cycle waste        Encoded → pre-flight checklist
February   Specification waste       Encoded → Gherkin in issues
March      Deployment verification   Working on it now

The Human Work

After you strip away the code writing, the ticket management, the triage, the monitoring, the boilerplate, the mechanical review — what's left?

  • Product discovery — Deciding what to build and why
  • Architecture decisions — The choices that determine whether the system scales or collapses
  • Judgment calls — Trade-offs that require the full context of the business, market, and tech landscape
  • Watching the process — Standing on the factory floor, looking for the next pocket of waste

That's not less work. It's higher-leverage work. The work that determines whether the product succeeds or fails.

The tools will keep getting better.
The models will keep getting smarter.

But the practice of systematically identifying and eliminating waste —
standing on the factory floor,
watching the work,
refusing to accept motion without value

— that's the skill that compounds.

Every tool you hand to the AI
is a category of waste you
never have to manage again.

Start handing over the tools.

Adam Thede

This talk:  thedetech.com/blog/eliminating-waste-in-the-sdlc/

More on AI-augmented development:

"The One-Person Engineering Team"  ·  "Whispering to the Machine: Take Two"

thedetech.com/blog

Thede Technologies