Implementation plans, intermediate bot/human review to check for complexity, convention adherence, and actual task completion, then providing guidance, managing the context, and a ton of other things to cage and harness the agent.
Then what it produces almost passes the sniff test. Add further bot and human code review, and we've got something that passes muster.
The siren song of "just do it/fix it" is hard to avoid sometimes, especially as deadlines loom, but that way lies pain. Not a problem for a quick prototype or something throwaway (and OP is right that it working at all is nothing short of marvelous), but to produce output fit for long-term maintainable software a lot has to happen, and even then it's sometimes a crap shoot.
But why not do it by hand then? To me it still accelerates and opens up the possibility space tremendously.
Overall I'm bullish on agents improving past the currently necessary operator-driven handholding sooner rather than later. Right now we have the largest collection of developer-agent RL data ever, with all the labs sucking up that juicy dev data. I think that will improve today's tooling tremendously.
I have no doubt that agents will become meaningfully useful for some things at some point. It just hasn't really happened yet, aside from the really simple stuff perhaps.
The output is very problematic. It breaks itself all the time, makes the same mistakes multiple times, and I have to retread my steps. I’m going to have it write tests so it can better tell what it’s breaking.
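A minimal sketch of what I have in mind (pytest; the helper here is a made-up stand-in, not anything from my actual codebase): pin down current behavior in small regression tests so the agent gets a red test the moment it re-breaks something, instead of me catching it in review later.

    import pytest

    def clamp_speed(v, limit=1.0):
        # Stand-in for the kind of small helper an agent keeps regressing.
        if limit <= 0:
            raise ValueError("limit must be positive")
        return max(-limit, min(limit, v))

    def test_clamp_passes_through_within_limit():
        assert clamp_speed(0.4) == 0.4

    def test_clamp_caps_both_directions():
        assert clamp_speed(5.0) == 1.0
        assert clamp_speed(-5.0) == -1.0

    def test_clamp_rejects_nonpositive_limit():
        with pytest.raises(ValueError):
            clamp_speed(0.2, limit=0)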
But being able to say “take this GTK app and add a web server and browser-based mode” and have it just kinda do it with minimal manual debugging is something remarkable. I don’t fully understand it; it is a new capability. I do robotics and I wish we had this for PCB design and mechanical CAD, but those will take much longer to solve. Still, I am eager to point Claude at my hand-written Python robotics stack from my last major project [1] and have it clean up and document what was a years-long chaotic prototyping process with results I was reasonably happy with.
The current systems have flaws, but if you look at where LLMs were five years ago and at the potential value in fixing those flaws with agentic coding, it is easy to imagine they will be addressed. There will be higher-level flaws, and those will eventually be addressed too, and so on. Maybe not, but I’m quite curious to see where this goes, and what it means to be a human doing engineering at a time like this.
[1] https://github.com/sequoia-hope/acorn-precision-farming-rove...