This is not something that's impossible for an LLM to do. There is no fundamental issue there. It is, however, very easy for an LLM to fail at it.
Humans get their (imperfect, mind you) meta-knowledge "for free" - they learn it as they learn the knowledge itself. LLM pre-training doesn't give models much of that, although it does give them some. Better training can give LLMs a better understanding of what the limits of their knowledge are.
The second part is acting on that meta-knowledge. You can push a human to act outside his knowledge - tell him to dismiss that "out of my depth" feeling and give his best answer anyway. The resulting answers would be plausible-sounding but often wrong - "hallucinations".
For an LLM, that's an unfortunate behavioral default. Many LLMs can sometimes recognize their own uncertainty, flawed as their meta-knowledge is - but fail to act on it. You can run "anti-hallucination" training to make them more willing to act on it. Conversely, careless training for benchmark performance can encourage hallucinations instead (see: o3).
Here's a primer on the hallucination problem, by OpenAI. It doesn't say anything groundbreaking, but it does sum up what's well known in the industry: https://openai.com/index/why-language-models-hallucinate/
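The gist of its argument is about incentives: a benchmark that grades only right/wrong treats "I don't know" the same as a wrong answer, so guessing always looks at least as good as abstaining; penalize confident wrong answers and abstaining starts to win. A toy version of that arithmetic (the confidence and penalty numbers are made up):

```python
# Toy expected-score arithmetic for "guess vs. abstain" under two grading schemes.
# The confidence p and the -2 penalty are arbitrary illustration values.
p = 0.3  # model's confidence that its guess would be correct

# Binary grading: correct = 1, wrong = 0, abstain ("I don't know") = 0.
guess_binary = p * 1 + (1 - p) * 0      # 0.3 -> guessing never loses to abstaining
abstain_binary = 0.0

# Grading that punishes confident errors: correct = 1, wrong = -2, abstain = 0.
guess_penalized = p * 1 + (1 - p) * -2  # -1.1 -> abstaining wins whenever p < 2/3
abstain_penalized = 0.0

print(guess_binary, abstain_binary, guess_penalized, abstain_penalized)
```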
OpenAI claims that hallucination isn't an inevitability because you can train a model to "abstain" rather than "guess" when giving an "answer". But what does that look like in practice?
My understanding is that an LLM's purpose is to predict the next token in a list of tokens. To prevent hallucination, does that mean it is assigning a certainty rating to the very next token it's predicting? How can a model know if its final answer will be correct if it doesn't know what the tokens that come after the current one are going to be?
Or is the idea to have the LLM generate its entire output, assign a certainty score to that, and then generate a new output saying "I don't know" if the certainty score isn't high enough?
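Concretely, I'm picturing something like the sketch below for that second option: draft the whole answer, score it with the model's own token probabilities, and bail out below a threshold. Just a toy, not anyone's real recipe - the model, the prompt, and the 0.5 cutoff are all arbitrary.

```python
# Toy sketch of "generate the whole answer, score it, abstain if the score is low".
# Model, prompt, and the 0.5 threshold are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any Hugging Face causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: Who wrote The Brothers Karamazov?\nA:"
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

# 1. Draft an answer greedily.
with torch.no_grad():
    full_ids = model.generate(prompt_ids, max_new_tokens=20, do_sample=False)
answer_ids = full_ids[0, prompt_ids.shape[1]:]

# 2. Score the draft: geometric mean of the probabilities the model itself
#    assigned to the answer tokens it just produced.
with torch.no_grad():
    log_probs = torch.log_softmax(model(full_ids).logits[0], dim=-1)
# The token at position i is predicted from the logits at position i - 1.
answer_log_probs = log_probs[prompt_ids.shape[1] - 1 : -1].gather(
    1, answer_ids.unsqueeze(1)
).squeeze(1)
confidence = answer_log_probs.mean().exp().item()

# 3. Abstain if the score is below the (arbitrary) threshold.
answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer if confidence >= 0.5 else "I don't know.")
```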
"Next token prediction" is often overstated - "pick the next token" is the exposed tip of a very large computational process.
And LLMs are very sharp at squeezing the context for every single bit of information available in it. Much less so at using it in the ways you want them to.
There's enough information at "no token emitted yet" for an LLM to start steering the output towards "here's the answer" or "I don't know the answer" or "I need to look up more information to give the answer" immediately. And if it fails to steer it right away? An LLM optimized for hallucination avoidance could still go "fuck consistency drive" and take a sharp pivot towards "no, I'm wrong" mid-sentence if it had to - for example, if you took control, forced a wrong answer by tampering with the tokens directly, then handed control back to the LLM.
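If you want to see that literally, here's a rough sketch of that tampering experiment with an off-the-shelf causal LM (the model name, question, and injected wrong answer are just placeholders). Everything after the injected prefix is the model's own continuation - the spot where a well-trained model gets the chance to pivot.

```python
# Rough sketch of the "force a wrong answer, then hand control back" probe.
# The model name, question, and injected wrong answer are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any Hugging Face causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: In what year did Apollo 11 land on the Moon?\nA:"
forced_wrong = " It landed in 1952,"  # tokens we inject by hand, not the model's choice

# Feed the prompt plus the tampered prefix, then let the model continue from there.
input_ids = tokenizer(prompt + forced_wrong, return_tensors="pt").input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=40, do_sample=False)

# Everything past the injected prefix is the model's own continuation -
# where a model trained to act on its uncertainty can pivot with "no, that's wrong".
continuation = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True)
print(continuation)
```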
Can you help correct where I'm going wrong?
Why is there no fundamental limitation that would prevent LLMs from matching human hallucination rates? I'd like to hear more about how you arrived at that conclusion.