And if you phrase your question wrong, you risk triggering a Chain of Thought process that gets stuck in a loop, "analysing" some utterly irrelevant part of the problem (here: "why would anyone want to order tea?") and locking up important subsystems of other machines on the network in the process.
The computer-based drinks machine onboard the Heart of Gold, on the other hand… Trying to order tea there now sounds suspiciously like a bout of futile prompt engineering: goading an LLM into giving you tea, only to end up with something that is almost, but not quite, entirely unlike tea.