You're correct. For the downvoters, here is the explanation:
You know how an LLM sometimes completes a given text in a way you would consider "wrong"? Well, the LLM has no concept of "correct" or "wrong". It just has a very broad and deep statistical model of the text in its training data. Sometimes you might consider a completion "correct", sometimes "wrong", and sometimes I may consider a completion "correct" while you consider it "wrong", just as with many other kinds of statements. The judgment of correctness happens outside the statement itself, in the reader, and it can reasonably admit many interpretations.
Would you ever consider that a plant has grown in a way that is "wrong"? Or that a broken reciprocating saw cut through plywood in the "wrong" way?
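
To make that concrete, here's a minimal sketch in Python (assuming the Hugging Face transformers library and GPT-2, chosen only as an arbitrary illustrative model; the prompt is likewise just an example). It asks the model for its probability distribution over the next token. Notice that nothing in the output flags any candidate as "correct": there are only relative likelihoods learned from training text.

    # Minimal sketch: inspect an LLM's next-token distribution.
    # Assumes the Hugging Face `transformers` library and the GPT-2
    # checkpoint; both are stand-ins, not the only way to do this.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "2 + 2 ="
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # Logits over the vocabulary at every position; we only
        # care about the last position (the next token).
        logits = model(**inputs).logits

    # Softmax turns logits into probabilities. These are relative
    # likelihoods learned from training text; no "truth" bit exists.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")

Whether the most probable candidate is "right" for "2 + 2 =" is a judgment you apply after the fact. The model itself only reports what text tends to follow what text.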