That only works for certain types of simpler products (mostly one-man projects, things like web apps) - you're not going to be building a throwaway prototype, either by hand or using AI, of something more complex like your company's core operating systems, or an industrial control system.
When it comes to general software development for customers in the everyday world (phones, computers, web), I often write once for proof, iterate as product requirements become clearer/refined, and rewrite if necessary (code smell, initial pattern was inefficient for the final outcome).
On a large project, often I’ll touch something I wrote a year ago and realize I’ve evolved the pattern or learned something new in the language/system and I’ll do a little refactor while I’m in there. Even if it’s just code organization for readability.
I do this, too. And it makes me awful at generating "preliminary LOEs", because I can't tell how long something will take until I get in there and experiment a little.
Self-created or formalized methods work, but they have to have habits or practices in place that prevent disengagement and complacency.
With LLMs there is the problem of humans and automation bias, which affects almost all human endeavors.
Unfortunately that will only become more problematic as tools improve. The only successful strategy I have found is to stay engaged and skeptical, which is backed up by fields like human factors research.
NASA and the FAA are good sources for information if you want to develop your own.
Maybe I am more of a Leet coder than I think?
The primary reason is that what you are rapidly refactoring in these early prototypes/revisions is the meta structure and the contracts.
Before AI, the cost of putting tests in place from the beginning, or doing TDD, slowed your iteration speed dramatically.
In the early prototypes what you are figuring out is the actual shape of the problem, the best division of responsibilities, and how to fit them together to match the vision for how the code will be required to evolve.
Now with AI, you can let the AI build test harnesses at little velocity cost, but TDD is still not the general approach.
Like any framework, they all have costs, benefits, and places where they work and others where they don't.
Unless you take the time to figure out what your inputs and expected outputs are, I would agree with you about the schools of thought that target writing all the tests up front, even implementation-detail tests.
If you can focus on writing tests as inputs vs. outputs, especially during a spike, then I need to take prompt engineering classes from you.
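To make the inputs-vs-outputs point concrete, here is a minimal sketch; the module `billing` and function `parse_invoice` are hypothetical, not from the thread. The tests pin down an input and the observable output, without asserting anything about how the function gets there, so the implementation is free to churn during the spike.

```python
# Hypothetical sketch: behavior-focused tests that survive rapid refactoring.
import pytest

from billing import parse_invoice  # assumed module under active development


def test_parse_invoice_totals_line_items():
    raw = "widget,2,3.50\ngadget,1,10.00"
    result = parse_invoice(raw)
    # Assert only on observable behavior: the total and the item count.
    assert result.total == 17.00
    assert len(result.items) == 2


def test_parse_invoice_rejects_malformed_lines():
    # Quantity is not a number, so parsing should fail loudly.
    with pytest.raises(ValueError):
        parse_invoice("widget,two,3.50")
```

An implementation-detail test, by contrast, would assert that some internal helper was called or that an intermediate structure had a particular shape, which is exactly the stuff that keeps changing while the design is still moving.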
Yep, and I believe that one will be harder to overcome.
Nudging an LLM into the right direction of debugging is a very different skill from debugging a problem yourself, and the better the LLMs get, the harder it will be to consciously switch between these two modes.
So I look up at the token usage, see that it cost 47 cents, just `git reset --hard`, and try again with an improved prompt. If I had hand-written that code, it would have been much harder to throw away.
In my experience this is a bad workflow. "Build it crappy and fast" is how you wind up with crappy code in production, because your manager sees you have something working fast and thinks it is good enough.
The question is, will the ability of LLMs to whip out boilerplate code cause managers to be more willing to rebuild currently "working" code into something better, now that the problem is better understood than when the first pass was made? I could believe it, but it's not obvious to me that this is so.
With AI this loop is much easier. It is cheap to build even three parallel implementations of something, and maybe another where you let the system add whatever capability it thinks would be interesting. You can compare them and use that to build a much stronger "theory of the program": the requirements, where the separations of concerns are, and how to integrate with the larger system. Then having the AI build that, with close review of the output (which takes much less time if you know roughly what should be built), works really well.