But...it is. It's an incredibly complex and impressive stochastic parrot - but that's basically what it is.
That doesn't mean it can't be useful. It absolutely can. There are some problems that will likely be greatly improved by throwing an LLM at them.
What I am saying is that people need to temper their expectations and not get caught up in tech fanaticism and anthropomorphize something that isn't there.
To think otherwise invites mysticism.
> anthropomorphize something that isn't there.
With the above as a counterexample, you simply don't know this.
I otherwise agree that LLMs in their current form are highly unlikely to give rise to AGI, for many reasons.
But as it stands, your argument lacks rigour and makes assumptions about matters that remain open subjects of experimental and scientific inquiry (the hard problem of consciousness, among others).
To close, the epistemic position we ought to take is one of uncertainty. We shouldn't be sure something is there, just as we shouldn't be sure something isn't there.
We don't yet know enough to say one way or the other. That's the point I want to emphasize. Stay open-minded until the relevant fields start making stronger claims.
You may find this leading theory on how our brains work interesting: https://en.m.wikipedia.org/wiki/Predictive_coding
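For what it's worth, here is a toy sketch of the core idea as I understand it (the function and variable names are mine, not from the article): a unit holds a belief about a hidden cause, predicts its input from that belief, and nudges the belief to shrink the prediction error.

```python
# Toy predictive-coding loop (illustrative names, not from the linked article):
# a unit holds a belief `mu` about a hidden cause, predicts its sensory input
# from that belief, and updates the belief to reduce the prediction error.

def predictive_coding_step(mu, sensory_input, lr=0.1):
    prediction = mu                      # simplest possible generative model: identity
    error = sensory_input - prediction   # prediction error signal
    return mu + lr * error               # move the belief to shrink the error

mu = 0.0
for _ in range(50):
    mu = predictive_coding_step(mu, sensory_input=1.0)

print(round(mu, 3))  # belief has converged toward the input, ~0.995
```

The full theory stacks many such error-minimizing units into a hierarchy; this is only the one-unit, linear version of the idea.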
And you are parroting the "LLMs are stochastic parrots" argument.
You are overly confident in your assessment that LLMs are not world models, more sure than the researchers in the relevant fields of neuroscience, cognitive science, and machine learning themselves.
This is an area of active study. Reflect on that. We don't yet know whether LLMs are modeling something more than the next token.
But you seem to know, based on some sensibility that LLMs with such a simple architecture can't be more than token predictors. Okay.