On the other hand, when people who claim success with AI share their prompts, I see all the same misses and flaws that keep me from fully buying in. The person, though, seems to gloss over those errors and claim wild success. Their prompts never actually seem that different from the ones that fail for me.
It seems like “you’re not doing it correctly” is just a rationalization to protect the pro-AI person’s established opinion.
Nobody does it correctly, AI or not.
It’s about breaking the problem down into epics, tasks, and acceptance criteria that get reviewed. Then review the written code and adjust as needed.
Tests… a lot of tests.
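For instance, the acceptance criteria for a task can be pinned down as tests before any code is delegated. A minimal sketch in pytest, where `slugify` and the `slugger` module are hypothetical stand-ins for whatever the AI is asked to implement:

```python
# Hypothetical example: acceptance criteria for a "slugify" task, written as
# pytest tests *before* the AI produces the implementation. If the generated
# code passes, it meets the agreed criteria; if not, the failures tell you
# exactly what to feed back into the next prompt.
import pytest

from slugger import slugify  # hypothetical module the AI is asked to fill in


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_collapses_whitespace():
    assert slugify("  too   many  spaces ") == "too-many-spaces"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```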
All the other trivial things I can delegate out to it. I expect junior-level results when I give it sub-optimal guidance, but with careful or even exhaustive guidance I can get adequate to near-perfect results.
Another dimension that really matters here is the actual model used; not every model is the same.
Also, if the AI does something wrong, have it assess why things went wrong, revert to the previous checkpoint, and integrate that lesson into the plan.
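A minimal sketch of that assess/revert/replan loop, assuming git commits as checkpoints and pytest as the test runner; `ask_model` and `apply_changes` are hypothetical placeholders for whatever model and tooling you actually use:

```python
# Hypothetical sketch of an assess/revert/replan loop around an AI coding agent.
# ask_model() and apply_changes() are placeholders; checkpoints are plain git
# commits so a bad attempt can be thrown away cleanly.
import subprocess


def git(*args: str) -> str:
    """Run a git command and return its trimmed stdout."""
    out = subprocess.run(["git", *args], check=True, capture_output=True, text=True)
    return out.stdout.strip()


def tests_pass() -> bool:
    """Run the test suite; pytest is just an example runner."""
    return subprocess.run(["pytest", "-q"]).returncode == 0


def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to whatever model you use, return its reply."""
    raise NotImplementedError("wire this to your model of choice")


def apply_changes(patch: str) -> None:
    """Placeholder: apply the model's proposed edits to the working tree."""
    raise NotImplementedError("wire this to your editing tooling")


def attempt_task(plan: str, max_tries: int = 3) -> bool:
    for _ in range(max_tries):
        checkpoint = git("rev-parse", "HEAD")  # remember where we started
        apply_changes(ask_model(plan))
        if tests_pass():
            return True  # good attempt: keep it
        # Have the model assess the failure *before* discarding the work,
        # then revert to the checkpoint and fold the lesson into the plan.
        postmortem = ask_model(f"The tests failed. Assess why. The plan was:\n{plan}")
        git("reset", "--hard", checkpoint)
        plan += f"\n\nLesson from the last failed attempt:\n{postmortem}"
    return False
```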
You're driving; you are ultimately in control, so learn to drive. It's a tool: it can be adjusted, you can modify the output, you can revert, or you can just not use it. But if you do actually learn how to use it, you'll find it can speed up your process. It's not a cure-all, though; it's good in certain situations, just like a hammer.