The brownfield vs greenfield split is the real answer to "how do people offload all coding." They mostly don't, on legacy codebases. The "never write code" crowd is almost always working on new projects where the AI has full context from the start.
The economics flip on brownfield for exactly the reason you describe: the diagnosis and fix are tightly coupled. You already did the expensive cognitive work to understand the bug. The AI's planning overhead exceeds the writing overhead at that point.
Where the calculus does shift on brownfield is larger feature additions that can be spec'd in isolation. "Add this new endpoint that follows the same pattern as these three existing ones" works well because the AI can pattern-match against existing code. "Fix this subtle race condition in the session handler" almost never does, because the AI's diagnosis phase is unreliable enough that you'd rather just trace it yourself.
Yeah that pretty much aligns with my experience in regard to feature additions. It’s great at those due to the reasons you mentioned!
I've experienced the same issues with Claude Code. I think it's very important to sufficiently specify what you want to accomplish, both at the overall project level and for the immediate task you're trying to complete (e.g., a new feature or bug fix). There are some good frameworks for this:
- https://openspec.dev/
- https://github.github.com/spec-kit/
For most applications, it is certainly possible to never write code and still produce something substantial. But in my experience, you have to be really diligent about the specifications and unit/contract/e2e testing. Where I've run into trouble and had to dig into the code myself is when building software that uses new algorithms the model hasn't been trained on.
I can't speak to brownfield projects. All of my (consulting) projects start with an empty AWS account and an empty git repo.
My Claude/Codex sessions have temporary AWS credentials in environment variables; the AWS SDK and CLI pick those up automatically.
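A minimal sketch of what that setup looks like. The values are placeholders, not the author's actual configuration; in practice the credentials would come from something like `aws sts assume-role` and expire with the session.

```shell
# Placeholder credentials; real ones would be short-lived STS values.
export AWS_ACCESS_KEY_ID="ASIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="examplesecretkey"
export AWS_SESSION_TOKEN="exampletoken"

# Anything launched from this shell (the agent, the AWS CLI, any SDK)
# finds these via the default credential chain, with no config files:
#   aws sts get-caller-identity
```

Because the agent inherits the environment, every tool it shells out to authenticates the same way, and nothing long-lived is written to disk.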
My AWS account is bootstrapped with infrastructure as code.
They both do pretty well with troubleshooting, since they have the code plus my markdown summaries of the contract, diagrams, call transcripts, project plan, etc.
They can both look at live CloudWatch logs to find errors.
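For reference, this is the kind of CLI invocation an agent can run for that; the log group name is a made-up example, and the command is wrapped in `echo` here so the sketch is safe to run without live credentials.

```shell
# Hypothetical log group for a Lambda-backed service. `aws logs tail`
# streams recent events, and --filter-pattern narrows it to error lines,
# which an agent can read and reason about directly.
LOG_GROUP="/aws/lambda/my-service"
echo aws logs tail "$LOG_GROUP" --since 1h --filter-pattern ERROR
```

Dropping the `echo` (and adding `--follow`) turns it into a live tail that the agent can watch while reproducing a bug.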
> leads me to a point where I can see that the issue is a simple fix with a couple lines of code.
If you can see the problem, know how to fix it, and still ask spicy autocomplete to do it for you, that isn't "using a tool", it's cargo culting.
"Offloading all coding" is perhaps a misleading expression. Those who say they no longer write code are usually describing a change in the kind of work they do, not that they've stopped producing code entirely. They spend more time on technical specification, architectural decisions, weighing trade-offs, and catching when the model misinterprets intent, and less time actually typing code.
Your brownfield instinct is right, though. The productivity gap between "fixing it yourself" and "requirements → plan → review → deploy → verify" only narrows when the task is large enough to justify the overhead, or when you're running parallel agents. For a bug that needs only two lines of code, the cost of context switching alone can wipe out the ROI.
I agree with this completely. Since coding agents came along, I stay entirely at the architecture and requirements level and don't look at code at all. I have damn good markdown files, and I have the coding agents transcribe what they're doing and the decisions I made.
You're replying to what looks like a bot account.