It "sees" something it "believes" is true, how is hallucination not an accurate term?

Because it's neither "seeing" anything, nor is it "believing" anything.

In fact, I think it's incorrect to even call such responses "wrong". ChatGPT successfully put together a reasonable-sounding response to the prompt, which is exactly what it is designed to do.

You're correct. For the downvoters, here is the explanation:

You know how sometimes an LLM completes a given text in a way that you would consider "wrong"? Well, the LLM has no concept of "correct" or "wrong". It just has a very broad and deep model of the text in its training data. Sometimes you might consider the completion "correct", sometimes "wrong", and sometimes I may consider it "correct" while you consider it "wrong", just as with many other kinds of statements. The judgment of correctness happens outside the statement itself and can reasonably admit many interpretations.
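To make that concrete, here is a minimal toy sketch of next-token sampling. The context string, the candidate tokens, and their probabilities are all made up for illustration; no real model or API is being described. The point is only that the sampling loop scores how plausible a continuation is given the training text, and "true" versus "false" never appears anywhere in it.

```python
import random

# Hypothetical next-token distribution "learned" from training text: each
# candidate token gets a weight reflecting how plausibly it follows the
# context -- nothing in these numbers encodes whether the result is factual.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # a common continuation in the imagined training text
        "Sydney": 0.40,     # also common there, though factually wrong
        "Melbourne": 0.05,
    },
}

def complete(context: str, rng: random.Random) -> str:
    """Sample the next token purely by its learned probability."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(5):
        # Every run yields a plausible-sounding completion; a reader may call
        # a "Sydney" sample "wrong", but the sampler never consulted any facts.
        print("The capital of Australia is",
              complete("The capital of Australia is", rng))
```

Whether a sampled completion is "correct" is a judgment the reader brings to the output; the mechanism itself only ever asks "how likely is this continuation?"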

Would you ever consider that a plant has grown in a way that is "wrong"? Or that a broken reciprocating saw cut through plywood in the "wrong" way?

Well, feel free to make up a new word then and see if it catches on. I will keep calling it hallucination because that effortlessly describes what is happening through analogy, a powerful tool by which we can make difficult concepts more accessible. I hope you realize that my use of quotes is another common literary device, indicating that I know an LLM can't actually see.
