
I also think "hallucination" is an actively terrible term for AI output that isn't factually correct. It's misleading, at the very least.

I think it's incredibly accurate. Humans can misinterpret internal inputs as external inputs, and when we do, we hallucinate. The model is doing the same thing: an erroneous interpretation leading to nonsense output.

I don't think that's an accurate description of these errors at all, honestly. It's not a matter of ChatGPT "interpreting" anything. It's a matter of it assembling a response that is linguistically most likely, given its training.

That's categorically different than hallucination, which is an active misinterpretation/invention of sensory data.

Agreed. I find "confabulating" more appropriate.

I agree, and unfortunately I didn’t construct my original comment in a manner that makes that agreement clear!

It "sees" something it "believes" is true, how is hallucination not an accurate term?

Because it's neither "seeing" anything nor "believing" anything.

In fact, I think it's incorrect to even call such responses "wrong". ChatGPT successfully put together a reasonable-sounding response to the prompt, which is exactly what it is designed to do.

You're correct. For the downvoters, here is the explanation:

You know how sometimes an LLM completes a given text in a way that you would consider "wrong"? Well, the LLM has no concept of "correct" or "wrong". It just has a very broad and deep model of the entirety of the text in its training data. Sometimes you might consider a completion "correct", sometimes "wrong"; and sometimes I may consider a completion "correct" while you consider it "wrong", just as with many other kinds of statements. The logical reasoning happens outside the statement itself and can reasonably lead to many interpretations.

Would you ever consider that a plant has grown in a way that is "wrong"? Or that a broken reciprocating saw cut through plywood in the "wrong" way?
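To make that concrete, here's a minimal sketch in Python using the Hugging Face transformers library (the gpt2 model and the prompt are just illustrative choices, not anything specific to ChatGPT). It shows what a completion actually is: the model scores every possible next token by likelihood, and nowhere in that computation is there any check against truth.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice; any causal LM behaves the same way here.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of Australia is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token

    # Candidates are ranked purely by likelihood given the training data.
    # There is no notion of "correct" or "wrong" anywhere in this process.
    for score, token_id in zip(*torch.topk(logits, k=5)):
        print(repr(tokenizer.decode([int(token_id)])), float(score))

Whether the top-ranked continuation happens to be "Canberra" or "Sydney" depends entirely on which was more likely in the training text; the ranking mechanism is identical either way.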

Well, feel free to make up a new word then and see if it catches on. I will keep calling it hallucination because that effortlessly describes what is happening through analogy, a powerful tool by which we can make difficult concepts more accessible. I hope you realize that my use of quotes is another common literary tool, indicating that I know an LLM can't actually see.
