i.e. continually gambling and praying the model spits something out that works instead of thinking.
But more seriously, in the ideal case, refining a prompt because an LLM misunderstood it due to ambiguity in your task description is actually the meaningful part of the work in software development. It is exactly about defining the edge cases and converting what it is you need from the task into language. Iterating on that is not gambling.
But of course, if you are not doing that, and are instead just trying to coax a "smarter" LLM with (hopefully soon-deprecated) "prompt engineering" tricks, then you are building yourself a skill that can become useless tomorrow.
If the outcome is indistinguishable whether the process was "thinking" or brute force, why would it matter how the outcome was achieved?
Your concept of thinking is the classic rhetoric: as soon as some "AI" manages to achieve something it previously couldn't, it's no longer AI and is just some xyz process. It happened with chess engines, with AlphaGo, and with LLMs. The implication being that human "thinking" is somehow unique, and only an AI that replicates it can be considered to be "thinking".