You're correct. For the downvoters, here is the explanation:
You know how sometimes an LLM completes a given text in a way you would consider "wrong"? Well, the LLM has no concept of "correct" or "wrong". It just has a very broad and deep model of the entirety of the text in its training data. Sometimes you might consider the completion "correct", sometimes "wrong", and sometimes I may consider it "correct" while you consider it "wrong", just as with many other kinds of statements. The reasoning about correctness happens outside the statement itself and can reasonably admit many interpretations.
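To make that concrete, here is a minimal sketch of what a completion actually is under the hood. It assumes the Hugging Face transformers library and the small "gpt2" checkpoint (my choices for illustration, not anything specific to ChatGPT): the model just assigns probabilities to possible next tokens, and nowhere in that process is there a notion of a continuation being "correct" or "wrong".

```python
# Minimal sketch: a language model only scores likely continuations.
# Assumes the Hugging Face `transformers` library and the "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token, given the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(i):>12s}  {p.item():.3f}")

# The model only reports which continuations are likely given its training
# data; judging whether a sampled continuation is "correct" happens in the
# head of whoever reads it.
```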
Would you ever consider that a plant has grown in a way that is "wrong"? Or that a broken reciprocating saw cut through plywood in the "wrong" way?
Well, feel free to make up a new word then and see if it catches on. I will keep calling it hallucination, because that effortlessly describes what is happening through analogy, a powerful tool by which we can make difficult concepts more accessible. I hope you realize that my use of quotes is another common literary device which indicates I know an LLM can't actually see.
In fact, I think it's incorrect to even call such responses "wrong". ChatGPT successfully put together a reasonable-sounding response to the prompt, which is exactly what it is designed to do.