I have read it. It's nothing new on the subject, but it was just the most recent paper I had seen on HN, and the person was asking for the link.
The crux is that an LLM is not, and can never be, intelligent in the sense of AGI. It is easier to think of it as a way to store and retrieve knowledge.
Even if I did read it, I have no hope of telling whether it has made a fundamental mistake, because I don't have the subject matter expertise either.
(I imagine it has made a fundamental mistake anyway: for LLMs to be useful progress toward AGI, they don't have to be a feasible way to create AGI by themselves. Innovation very often involves stepping through technologies that end up being only a component of the final solution, or inspiration for it. This was always going to be a problem with trying to prove a negative.)
It was a paper posted on HN a few days ago, and someone asked for evidence for my statement. I supplied it.
Now if they actually read it and disagreed with what it was saying, I'd be more than happy to continue the conversation.
Dismissing it just because you don't understand it is a terrible thing to do to yourself. It's basically sabotaging your own intelligence.
Sometimes papers are garbage, but you can only make that call after you have read and understood them.
Use an LLM if you want.
The core piece as quoted from the abstract: "AGI predictions fail not from insufficient compute, but from fundamental misunderstanding of what intelligence demands structurally."
It then goes into detail about what that is and why LLMs don't fit it. There are plenty of other similar papers out there.