"Easy". You make the model distinguish between information and references to information. Information may be fabricated (for example, a fictional book is mostly composed of lies) but references are assumed to be factual (a link does point to something and is related to something). Factual information is true only to the degree that it is conveyed exactly, so the model needs to be able to store and reproduce references verbatim.
Of course, "easy" is in quotes because none of this is easy. It's just easier than AGI.
How do you make an LLM understand that it must only give factual sources? Just some naive RL with positive reward on correct sources and negative reward on incorrect ones is not enough -- there are obscenely many more possible hallucinated sources, and the set of correct sources has insanely tiny measure.
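A back-of-the-envelope calculation makes the measure argument concrete. All the numbers below are illustrative assumptions, not measurements:

```python
import math

# Hypothetical numbers, chosen only to show the scale of the problem.
alphabet = 64            # rough size of the character set used in URLs/citations
ref_length = 40          # a typical reference string length
valid_sources = 10**8    # generously assume 100 million real, citable sources

possible_strings = alphabet ** ref_length
hit_probability = valid_sources / possible_strings

print(f"Reference-shaped strings:     ~10^{math.log10(possible_strings):.0f}")
print(f"P(random reference is real):  ~10^{math.log10(hit_probability):.0f}")

# A naive RL reward (+1 for a real source, -1 for a fake one) is
# therefore -1 on essentially every rollout: the reward is almost
# constant, and the gradient carries nearly no signal about where
# the valid set lives.
```

Sparse-reward problems of this shape are exactly where naive policy-gradient exploration fails: the model never stumbles onto a correct source by chance often enough to learn from it.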