Google points to phys.org, and phys.org is a republish of the MIT article.
I guess I'm trying to understand your comment. Is there a distinction you're making between LLM summaries and LLM-generated text, or are you saying that they aren't being transparent about the summaries being generated by LLMs (as opposed to what? human editors?).
Because at some point when I launched the app, it did say summaries might be inaccurate.
Looks like you found an example where the summaries aren't properly cited. My guess is that they will tighten this up, because I looked mostly at the first and second pages and most of those articles seemed to have citations in their summaries.
Like most people, I would want those everywhere to guard against potential hallucinations. No, the citations don't guarantee that there weren't any hallucinations, but if you read something that makes you go "huh" – the citations give you a low-friction opportunity to read more.
But another sibling commenter pointed out that phys.org and Google both point to the same thing. I agree, and this is exactly an issue I have with other aggregators like Ground.news.
They need to build some sort of graph that distills down duplicates. Like I don't need the article to say "30 sources" when 26 of them are just reprints of an AP/Reuters wire story. That shouldn't count as 30 sources.
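To illustrate the kind of de-duplication I mean (purely a sketch, nothing to do with how Kagi or Ground.news actually work): shingle the article text, treat any pair above a Jaccard-similarity threshold as the same story, and union-find them into clusters. All the names and the 0.6 threshold here are made up for the example.

```python
from itertools import combinations

def shingles(text, n=5):
    """Split text into overlapping n-word shingles for fuzzy comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def count_distinct_sources(articles, threshold=0.6):
    """Cluster near-duplicate articles (e.g. wire reprints) and count clusters.

    `articles` is a list of article body strings. Two articles whose shingle
    sets exceed `threshold` Jaccard similarity are treated as the same story.
    """
    parent = list(range(len(articles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    sets = [shingles(a) for a in articles]
    for i, j in combinations(range(len(articles)), 2):
        if jaccard(sets[i], sets[j]) >= threshold:
            union(i, j)

    return len({find(i) for i in range(len(articles))})

# Three reprints of the same wire story plus one independent piece -> 2 sources.
wire = ("The agency said on Tuesday that the probe had entered orbit "
        "around the moon after a six month journey.")
articles = [
    wire,
    wire + " (Reprinted.)",
    wire,
    "A completely different local report about the same mission, written "
    "independently with its own details and quotes.",
]
print(count_distinct_sources(articles))  # expected: 2
```

At real scale you'd presumably want something cheaper than pairwise comparison (MinHash/LSH or embeddings), but the point stands: 26 wire reprints should collapse into one node of the graph, not count as 26 sources.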
The main point of my original comment was that I wanted to understand what this is, how it works and whether I can trust the information on there, because it wasn't completely clear to me.
I'm not super up to date with AI stuff, but my working knowledge is that I should never trust the output of an LLM and should always verify it myself. So I was wondering whether this is just raw LLM output, or whether there is some human review process, or a mechanism related to the citation feature, that makes it output of a different, more trusted category.
I did catch the message on the loading screen now as well. I still think the individual articles could be a little clearer about the text being LLM-generated, but apart from that I think I understand somewhat better what this is now.
Either you mean that every time you read something interesting (“huh”) you should check it. But in that case, why bother reading the AI summary in the first place…
Or you mean that any time you read something that sounds wrong, you should check it. But in that case, everything false in the summaries that happens to sound true to you will be confirmed in your mind without you ever checking it.
...yes? If I go to a website called "_ News" (present company included), I expect to see either news stories aggregated by humans or news stories written and fact-checked by humans. That's why newspapers have fact-checking departments, but they're being replaced by something with almost none of the utility, and its proponents are framing the benefits of the old system as impossible or impractical.
Like, I was asking whether they were expecting the curation/summarization at Kagi News to be done by humans.
> And pretty much every single sentence cites the original article(s).
Yeah, but again, correct me if I'm wrong: I don't think asking an LLM to provide a source/citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all. Here are three sections (Historical background, Technical details, and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2