In my use, hallucinations will need to be a lot lower before we get there: I already can't trust anything an LLM says, so I don't think I could even distinguish a poisoned fake truth from a "regular" hallucination.
I just asked ChatGPT 4o to explain irreducible control flow graphs to me, something I've known in the past but couldn't remember. It gave me a couple of great-looking definitions, with illustrative examples and counterexamples. I puzzled through one of the supposedly irreducible examples and eventually realized it wasn't irreducible. I pointed out the error, and it gave a more complex example, also incorrect. It finally got it right on the third try. If I had been trying to learn the material for the first time rather than remind myself of what I had once known, I would have been hopelessly lost. Skepticism about any response is still crucial.
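For anyone who wants to check this kind of example by hand, the standard test is T1/T2 reduction: repeatedly delete self-loops (T1) and fold any non-entry node with a single predecessor into that predecessor (T2); the graph is reducible iff it collapses to one node. Here's a minimal sketch in Python, with function and node names of my own choosing and all nodes assumed reachable from the entry; it's an illustration, not a polished implementation.

```python
from collections import defaultdict

def is_reducible(edges, entry):
    """Reducibility check via T1/T2 transformations.

    edges: dict {node: iterable of successor nodes}
    entry: the entry node of the CFG (assumed to reach every node)
    """
    succ, pred, nodes = defaultdict(set), defaultdict(set), {entry}
    for n, ss in edges.items():
        nodes.add(n)
        for m in ss:
            nodes.add(m)
            succ[n].add(m)
            pred[m].add(n)

    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            # T1: delete a self-loop on n.
            if n in succ[n]:
                succ[n].discard(n)
                pred[n].discard(n)
                changed = True
            # T2: fold a non-entry node with a unique predecessor into it.
            if n != entry and len(pred[n]) == 1:
                p = next(iter(pred[n]))
                succ[p].discard(n)
                for m in succ[n]:
                    pred[m].discard(n)
                    # An edge back to p becomes a self-loop; T1 removes it later.
                    succ[p].add(m if m != p else p)
                    pred[m].add(p)
                del succ[n], pred[n]
                nodes.discard(n)
                changed = True
    return len(nodes) == 1

# Classic irreducible shape: the b/c loop can be entered at either b or c,
# so it has no single loop header.
print(is_reducible({'a': {'b', 'c'}, 'b': {'c'}, 'c': {'b'}}, 'a'))  # False

# An ordinary while-loop shape is reducible: the loop is entered only at b.
print(is_reducible({'a': {'b'}, 'b': {'c', 'd'}, 'c': {'b'}, 'd': set()}, 'a'))  # True
```

The order in which T1 and T2 fire doesn't matter; the reduction is confluent, so any sequence that gets stuck before reaching a single node proves irreducibility.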
I do worry about model poisoning with fake truths, but I don't feel we are there yet.