That's categorically different from hallucination, which is an active misinterpretation or invention of sensory data.
In fact, I think it's incorrect to even call such responses "wrong". ChatGPT successfully put together a reasonable-sounding response to the prompt, which is exactly what it is designed to do.
You know how sometimes an LLM completes a given text in a way that you would consider "wrong"? Well, the LLM has no concept of "correct" or "wrong". It just has a very broad and deep model of the entirety of the text in its training data. Sometimes you might consider a completion "correct". Sometimes you might consider it "wrong". Sometimes I might consider a completion "correct" while you consider it "wrong", just as with many other kinds of statements. The judgment of correctness happens outside the statement itself, and different readers can reasonably reach different conclusions.
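To make that concrete, here's a toy sketch in plain Python. It is nothing like a real LLM's internals, just the same principle taken to an extreme: the "model" below is purely statistics over its training text, and the sampler draws a plausible continuation without any notion of truth. The training snippet and the `complete` helper are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" -- the model will know nothing beyond this text.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram model: for each word, count what tends to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def complete(prompt_word, length=5, seed=None):
    """Extend a prompt by repeatedly sampling a likely next word.
    Nothing here checks truth; it only reproduces patterns in the data."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(rng.choices(words, weights=counts, k=1)[0])
    return " ".join(out)

# Prints a plausible-sounding sentence stitched from the training text.
# Whether it is "correct" is a judgment the reader makes afterwards;
# the model only produced a likely continuation.
print(complete("the", seed=0))
```

Scale that idea up by many orders of magnitude and you get the same situation: the system outputs what is statistically plausible given its training data, and "correct" versus "wrong" is a verdict applied from outside.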
Would you ever consider that a plant has grown in a way that is "wrong"? Or that a broken reciprocating saw cut through plywood in the "wrong" way?