No, you have to talk to it like you would to an adult human being.
If one does that and still gets garbage results from SOTA LLMs, that to me is a strong indication they also can't communicate effectively with other human beings. It's literally the same skill. Such an individual is probably the kind of clueless person we all learn to isolate and navigate around, because contrary to their beliefs, they're not the center of the world, and we can't actually read their minds.
It's a perspective shift few seem to have considered: if one wants an expert software developer from their AI, one has to build an expert developer's context, using the terminology that expert developers actually use in the training data.
One can take this to an extreme, and it works: read the source code of an open source project and get an idea of both the developer and their coding style. Write prompts that mimic both the developer and their project, and you'll find the AI can now discuss that project in surprising detail. This works because the project is in the training data, and if it's popular, so are the tutorials and discussions surrounding it, so a foundation model ends up knowing quite a bit, if one knows how to construct the context from that information.
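To make that concrete, here's a minimal sketch of the technique in Python. Everything in it is illustrative: the helper name (`build_context`), the snippet cap, and the example project, file, and question are my assumptions, not any particular tool's API. The idea is just to seed the prompt with the project's own code and vocabulary before asking about it.

```python
from pathlib import Path

def build_context(project_name: str, source_files: list[Path], question: str) -> str:
    """Compose a prompt that borrows the project's own vocabulary and style."""
    snippets = "\n\n".join(
        # Cap each snippet so the combined prompt stays within the model's context window.
        f"# {path.name}\n{path.read_text()[:2000]}"
        for path in source_files
    )
    return (
        f"You are a maintainer of {project_name}. Answer using the style and "
        f"terminology of the codebase below.\n\n"
        f"{snippets}\n\n"
        f"Question: {question}"
    )

# Example: prime the context with a real file from a local checkout, then ask.
# The path and question are hypothetical; substitute your own project.
prompt = build_context(
    "requests",
    [Path("src/requests/sessions.py")],
    "How does Session.merge_environment_settings decide which proxies to use?",
)
```

Pasting a prompt like that into any chat model tends to surface the "it's in the training data" effect described above: a popular project's idioms are already well represented, so the snippets mostly serve to anchor the model on them.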
Hallucination makes this tricky, of course, but it can be minimized. That's also why we will all have to become aware of AI context management if we keep writing software that incorporates AIs. I suspect context management is what "prompt engineering" really meant all along. Communicating within engineering disciplines has always been difficult.
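As a rough illustration of what "context management" means in practice, here's one naive strategy: keep the conversation under a fixed token budget by evicting the oldest turns first. The 4-characters-per-token estimate is a crude assumption standing in for a real tokenizer, and the message shape is the common role/content dict, nothing more specific than that.

```python
def trim_history(messages: list[dict], budget_tokens: int = 8000) -> list[dict]:
    """Drop the oldest messages until the estimated token count fits the budget."""
    def estimate(msg: dict) -> int:
        # Rough heuristic: ~4 characters per token; a real system would tokenize.
        return len(msg["content"]) // 4

    kept: list[dict] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first so recent turns survive
        cost = estimate(msg)
        if total + cost > budget_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Real systems get more elaborate than this (summarizing evicted turns, pinning the system prompt), but the budget-and-evict loop is the core of it.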
Yeah, I'd rather just type things out myself and keep communicating with my fellow humans than spend my limited time on this earth appeasing a bullshit generator that's apparently going to make us all jobless Soon™