I build a personal life-tracking platform. Ruby on Rails backend, iOS app, macOS lifelogging app. One developer. Ten years of building.
I don't write much code anymore.
My job today looks more like that of a conductor, a project manager, and a product owner than that of a software engineer in the traditional sense.
I think about the product holistically. I write specifications. I build implementation plans. I spin up agents to do the development. I facilitate the pull request review cycle. I merge, deploy, and monitor. The full lifecycle — but I'm orchestrating it, not executing it by hand.
Taiichi Ohno defined seven types of waste — muda — in the Toyota Production System. The genius wasn't the taxonomy. It was the practice: stand where the work happens, watch it, and refuse to accept motion without value.
The question isn't "How do I write code faster?"
It's "What am I still doing by hand that a machine could do better?"
I was the bottleneck. Not because I was writing code slowly (the AI agents were handling that), but because I was still the one doing everything else.
All motion. No production. Muda.
AI can use tools. The same tools I use.
MCP — Model Context Protocol — lets AI agents interact directly with external services. Not through me. Directly.
+ Playwright (browser automation) · GCloud (infrastructure)
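Wiring a server up is a small config change, not a project. A minimal sketch, assuming a Claude Code-style `.mcp.json`; the file name and server packages vary by client, and the Playwright server shown is just one example:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the agent can drive a real browser itself: navigate, click, read the DOM. Not through me. Directly.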
The quality of my project management improved when I stopped doing it myself. The AI doesn't forget. It doesn't get lazy at 4pm on a Friday. It does the same thorough job every single time.
And it's all done by the time I've finished my coffee.
With the operational overhead handled, a different category of waste became visible: building the wrong thing.
I audited eight pull requests: more than 250 review comments in total.
Half the review cycle was catching preventable mechanical issues. That's waste. But the more expensive waste was in the other 50% — misunderstood requirements, missing edge cases.
Gherkin acceptance criteria force clarity before code exists. Each scenario maps directly to a test.
Feature: Monthly Usage Report

  Scenario: Admin generates report
    Given I am logged in as an admin
    When I navigate to Reports
    And select "March 2026"
    Then I see a summary table with active users, signups, and churn
    And the report downloads as CSV

  Scenario: Report with no activity
    Given an account with no activity
    When I generate the February report
    Then I see all values at zero
    And the CSV has headers only
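The second scenario translates almost line for line into a test. A sketch, assuming Rails system tests; every route, label, and selector here is a hypothetical stand-in, not the real app's:

```ruby
# Hypothetical test mirroring the "Report with no activity" scenario.
require "application_system_test_case"

class MonthlyReportTest < ApplicationSystemTestCase
  test "report with no activity shows all values at zero" do
    sign_in users(:admin)                    # Given an account with no activity
    visit reports_url                        # When I generate the February report
    select "February 2026", from: "Month"
    click_on "Generate"
    assert_selector "td.metric", text: "0"   # Then I see all values at zero
  end
end
```

The Given/When/Then comments aren't decoration. They're the traceability: every assertion points back at a line of the spec.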
Every rule came from an actual review comment on an actual PR. The checklist lives in the agent's context — it reads it before writing a line of code.
The planning phase got longer.
The total cycle got dramatically shorter.
Unit tests pass. Controller tests pass. The agent reports success. You deploy and the page is broken.
Everything worked in isolation. Nothing worked in the browser.
One system test per feature. Headless Chrome. Happy path end-to-end.
Catches in seconds what manual QA catches in hours.
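The shape of that one test, assuming Rails' built-in system testing driven by Capybara and headless Chrome; class, route, and selector names are illustrative:

```ruby
# One happy-path system test per feature, running in headless Chrome.
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # Boot a real browser, not a stubbed request cycle.
  driven_by :selenium, using: :headless_chrome, screen_size: [1400, 900]
end

class MonthlyReportFlowTest < ApplicationSystemTestCase
  test "admin generates the March report end-to-end" do
    visit reports_url
    select "March 2026", from: "Month"
    click_on "Generate"
    assert_selector "table.summary"   # fails fast if the page never renders
  end
end
```

No edge cases here. The unit tests own those. This test exists to answer one question: does the feature actually work in a browser?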
These aren't sequential improvements. They're all running simultaneously, right now, on every feature.
My role: observe the system, identify what doesn't add value, remove it.
Each fix is encoded into the system so it never comes back. And each one reveals the next layer of waste that was previously invisible.
After you strip away the code writing, the ticket management, the triage, the monitoring, the boilerplate, the mechanical review — what's left?
What's left is not less work. It's higher-leverage work: the work that determines whether the product succeeds or fails.
The tools will keep getting better.
The models will keep getting smarter.
But the practice of systematically identifying and eliminating waste — standing on the factory floor, watching the work, refusing to accept motion without value — that's the skill that compounds.
Every tool you hand to the AI is a category of waste you never have to manage again.
Start handing over the tools.
This talk: thedetech.com/blog/eliminating-waste-in-the-sdlc/
More on AI-augmented development:
"The One-Person Engineering Team" · "Whispering to the Machine: Take Two"
thedetech.com/blog