Whispering to the Machine: Take Two

Eight months ago, I wrote a snapshot of what it felt like to collaborate with AI in software development. I talked about using Cursor, chatting with models, migrating a codebase from Bootstrap to Tailwind. I laid out five principles: plan meticulously, work iteratively, use the right tool for the job, embrace collaboration, and don’t be afraid to start fresh.

Those principles still hold. Every one of them. But the way I execute on them has changed so dramatically that the original piece reads like a dispatch from a different era. Eight months in this field might as well be a decade.

Here’s what the work actually looks like in February 2026.


6 AM: The DevOps Agent

My day starts with a slash command.

I built a custom command in Claude Code called /devops. When I run it, it kicks off a sequential audit of my entire production infrastructure — without me touching a dashboard, a browser, or a monitoring tool.

It starts with Google Cloud Platform. Read-only gcloud commands check the health of my VMs, the load balancer, whether any servers are rotating, whether Puma instances are getting killed, whether there have been gateway timeouts overnight. Then it moves to Sentry, scanning for new exception notifications that need attention. Then it cross-references everything it found with Jira — my backlog, the current sprint, active tickets — to see if any of these issues are already documented. Then it checks GitHub to see if there are open pull requests that might already address what it found.

If it finds something new — an issue that doesn’t have a Jira ticket or a PR in progress — it creates the ticket. We walk through a quick triage together: backlog or current sprint? Then it writes a full markdown report into my docs directory and gives me a summary in the terminal.
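For readers who haven’t built one: Claude Code supports project-level slash commands defined as plain markdown files under .claude/commands/. I haven’t shown my actual command, but a minimal sketch of what a /devops file might contain looks like this (the file name, step ordering, and gcloud invocation here are illustrative assumptions, not the real command):

```markdown
<!-- .claude/commands/devops.md (hypothetical sketch) -->
Run a read-only audit of production, in this order:

1. GCP health: use read-only gcloud commands, e.g.
   `gcloud compute instances list --format="table(name,status)"`,
   then check the load balancer and any overnight gateway timeouts.
2. Sentry: list new, unresolved exceptions that need attention.
3. Jira: cross-reference each finding against the backlog and sprint.
4. GitHub: check whether an open PR already addresses each finding.
5. For anything new, draft a Jira ticket and ask me: backlog or sprint?
6. Write a markdown report to the docs directory and summarize here.

Never run mutating commands; everything must be read-only.
```

The whole file is just instructions in prose; the agent translates each step into the actual tool calls.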

The whole thing takes a few minutes. By the time I’ve finished my coffee, I have a complete picture of the health of my production systems, cross-referenced against my project management tools, with new issues already documented and triaged.

That’s not autocomplete. That’s a DevOps engineer. Running in my terminal. Built from a slash command I wrote myself.


The Workspace

If you looked at my screen on a typical morning, here’s what you’d see:

A terminal application called Ghostty with six or seven tabs open. Each tab is a Claude Code instance focused on a different workstream — some devoted to issues the DevOps agent surfaced, some to features on the product roadmap, some to items in the current sprint. Each one has its own context, its own task, its own branch.

A native Mac terminal running separately with four tabs keeping my local development environment alive — the application server, background workers, CSS compilation, the usual.

An Obsidian window with my project’s docs directory open. This is where all the planning documents, feature specs, DevOps reports, and release notes live. It’s my reference library — and crucially, it’s also available to every Claude Code instance as context.

And Cursor, still installed, but demoted. I used to live in Cursor. It was the center of my workflow eight months ago. Now I use it mostly as a file viewer and diff tool. The occasional Claude Code session runs inside it, but the real work happens in the terminal. The shift from IDE-based chat to CLI-based agents turned out to be the most consequential tooling change I’ve made.


Agent Teams

A couple of weeks ago, I tried something I’d been hearing about: agent teams. I had a substantial feature that required changes to the backend, the frontend, and the test suite — the kind of thing that would normally be serial work, one piece at a time.

Instead, I worked with Claude Code in planning mode to decompose the feature into parallel workstreams. We spent the most time here — figuring out how to split the work so that agents could operate independently on the same feature branch without stepping on each other. The planning agent took on what it called “phase zero” — the foundational scaffolding that all the other agents would need. Once that was in place, it spawned three agents: one for the backend, one for the frontend, one for the test coverage.

I watched them work. The planning agent didn’t just delegate and disappear — it played conductor. If any of the worker agents needed something, they reported back to it. If the conductor needed a judgment call from me — permission to make an architectural choice, clarification on a requirement — it surfaced the question. Most of the time, the agents resolved things among themselves. I only got pulled in when it actually mattered. The feature that would have taken me a week got built in a fraction of that time.

But here’s what surprised me: in the two weeks since that experiment, I’ve noticed Claude Code starting to parallelize work on its own. Without me asking. Without me structuring the plan for parallel execution. It recognizes when tasks are independent — research across different parts of a codebase, web lookups, file analysis — and spins up parallel workstreams automatically.

I think about something Boris Cherny said: plan for the model six months from now, not the model you have. I have a strong feeling that agent orchestration — the thing I spent the most time planning — will increasingly become something the AI handles itself. The human’s job will move further up the abstraction stack.


Context Is the New Code

If there’s one thing the tools haven’t solved yet, it’s this: context management is still the human’s job.

I maintain a massive docs directory — project plans, feature specs, DevOps output, architectural decisions, release notes. Hundreds of markdown files organized by purpose. Every Claude Code instance can access it. But knowing which documents to surface for which task, at which moment — that’s still on me.

An agent working on a frontend feature doesn’t need last week’s DevOps incident report. An agent triaging a production issue doesn’t need the product roadmap. The right context at the right time for the right purpose — that’s the skill that separates effective orchestration from noise.
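One way to make that routing explicit is a simple map from task type to the slice of the docs directory an agent should load. The sketch below is my own illustration of the idea (the directory names are hypothetical; this is not a feature of any tool):

```python
from pathlib import Path

# Hypothetical routing table: which docs an agent gets for which task.
CONTEXT_MAP = {
    "frontend-feature": ["specs", "roadmap.md"],
    "production-triage": ["devops-reports", "runbooks"],
}

def context_for(task_type: str, docs_root: str = "docs") -> list[Path]:
    """Return the doc paths worth loading into an agent's context.

    Unknown task types get nothing rather than everything: an agent
    with no context asks questions; an agent with the wrong context
    confidently heads the wrong way.
    """
    root = Path(docs_root)
    return [root / rel for rel in CONTEXT_MAP.get(task_type, [])]

paths = context_for("production-triage")
```

Whether the table lives in code, in a planning doc, or in your head, the discipline is the same: scope the context deliberately instead of dumping the whole library on every agent.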

This is, I think, the most underappreciated part of working with AI agents. People focus on the prompting, the model selection, the tooling. But the real leverage comes from the information architecture you build around the agents. The docs directory, the planning documents, the CLAUDE.md files that establish conventions — that’s the infrastructure that makes everything else work.
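For readers who haven’t used one: CLAUDE.md is the conventions file Claude Code reads automatically at the start of every session. The contents below are an invented example of the kind of conventions such a file establishes, not my actual file:

```markdown
<!-- CLAUDE.md (illustrative example, not the author's real file) -->
# Project conventions

- Run the test suite before every commit; never commit red.
- Planning docs, specs, and DevOps reports live in the docs directory;
  check there for background before asking me.
- Small commits on a feature branch; never push to main directly.
- Infrastructure commands must be read-only unless I approve otherwise.
```

A few dozen lines like these, read at session start, replace a great deal of repeated prompting.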

The agents didn’t build that. I did. And maintaining it is a meaningful part of the job.


The Consolidation

Eight months ago, my toolkit was sprawling: Cursor for the IDE, multiple model providers, Google Gemini for some tasks, Claude for others, constant context-switching between interfaces.

Now it’s simple. Claude Code in the terminal, powered by Opus. That’s the daily driver.

That doesn’t mean I stopped looking. I actively experiment with every serious tool that ships — Google’s Antigravity, OpenAI’s Codex, the latest Gemini models. I give each one a real project, not just a toy example. Some are impressive. Some are fun for a feature or two. None of them have been compelling enough to change my daily habits. The workflow benefits of staying in one ecosystem — where the context, the conventions, the muscle memory all compound — outweigh the marginal gains of tool-hopping.

There’s a human story here too. Last summer, my partner asked what I wanted for Father’s Day. I told her — half-joking — that what I really wanted was permission to upgrade to a $100-a-month AI subscription. She wrote me a letter, tongue-in-cheek, granting it. I still waited another month or two before actually pulling the trigger. A hundred dollars a month for a coding assistant sounded absurd.

Now I can’t imagine working without it. I have it for my day job and for my personal projects. The ROI isn’t even close — it’s the single highest-leverage tool purchase I’ve ever made.


The Principles Held Up

Looking back at the 2025 piece, I find the five principles I outlined haven’t changed:

Plan meticulously — more true than ever. The planning phase now includes decomposing work for parallel agents, not just for my own serial execution.

Work iteratively — still the foundation. Small commits, automated quality gates, verification loops.

Use the right tool for the job — simplified. The “right tool” consolidated from a scattered toolkit to one powerful ecosystem.

Embrace collaboration — evolved from “chat with a model” to “orchestrate a team of agents.”

Don’t be afraid to start fresh — still the biggest unlock. When an approach isn’t working, kill it and regenerate. The cost of starting over is lower than ever.

The principles are load-bearing. What changed is the scale at which they operate.


Your Job Title Is Changing

Here’s the thing I’d say to any engineering leader who’s still on the fence about this shift:

The traditional engineering org chart — one person for frontend, one for backend, one for DevOps, one for security, one for product — is dissolving. Not because those roles don’t matter, but because each of those roles can now be handled remarkably well by an agent persona managed by a single orchestrator.

One person managing a collection of agents, each responsible for what used to be a separate job title. That’s not science fiction. That’s my Tuesday.

The title “software engineer” is going to mean something very different in two years than it does today. The engineers who recognize this and start building the orchestration muscle now — learning to decompose problems, manage context, build verification systems, and direct intent instead of writing implementation — will have an enormous head start.

The ones who wait will be catching up to a target that’s already moved.

Pick up the tools. Start now. The learning curve is real, but the landscape is moving faster than your comfort zone.

This is a sequel to Whispering to the Machine: A Snapshot of AI-Powered Development in 2025. I expect to write another update when the landscape shifts again. Based on the current pace, that might be next month.