And I know not everyone thinks in a literal stream of words all the time (I do), but I would argue that those people's brains are just using a different kind of "token".
Prior to LLMs, there was never any suggestion that thoughts work like autocomplete, but now people are working backwards from that conclusion based on metaphorical parallels.
Predictive coding theory was formalized back around 2010 and traces it roots up to theories by Helmholtz from 1860.
Predictive coding theory postulates that our brains are very strong prediction machines, built from multiple layers of predictive machinery, with each layer predicting the activity of the layer below it.
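For concreteness, here is a toy sketch of that layered arrangement in the spirit of Rao & Ballard's formulation: each layer holds a latent state and tries to predict the layer below, and both states and weights get nudged to reduce prediction error. Every layer size, learning rate, and name here is an illustrative assumption, not anyone's actual model:

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [8, 6, 4]  # layer 0 = "sensory" input; higher layers = more abstract
    # W[i] maps layer i+1's state down to a prediction of layer i's activity
    W = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]

    def settle(x, steps=50, lr_state=0.1):
        """Infer latent states for one input by descending the total prediction error."""
        states = [x] + [np.zeros(s) for s in sizes[1:]]
        for _ in range(steps):
            # error at each level: actual activity minus the top-down prediction of it
            errs = [states[i] - W[i] @ states[i + 1] for i in range(len(W))]
            for i in range(1, len(states)):
                grad = -(W[i - 1].T @ errs[i - 1])  # push from the error it caused below
                if i < len(states) - 1:
                    grad += errs[i]                 # pull from its own mismatch with the layer above
                states[i] = states[i] - lr_state * grad
        return states

    def learn(x, lr_w=0.01):
        """One Hebbian-style weight update after settling on an input."""
        states = settle(x)
        for i in range(len(W)):
            err = states[i] - W[i] @ states[i + 1]
            W[i] += lr_w * np.outer(err, states[i + 1])

    for _ in range(200):
        learn(rng.normal(size=sizes[0]))  # toy random "sensory" samples

The point of the sketch is only the shape: every layer is doing prediction of the layer beneath it, and learning is driven entirely by prediction error.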
Roots of predictive coding theory extend back to the 1860s.
Natalia Bekhtereva was already writing about compact concept representations in the brain, akin to tokens.
Yes, you can draw interesting parallels between anything when you're motivated to do so. My point is that this isn't parsimonious reasoning; it's working backwards from a conclusion and searching for every opportunity to fit the available evidence into a narrative that supports it.
> Roots of predictive coding theory extend back to the 1860s.
This is just another example of metaphorical parallels overstating meaningful connections. Just because next-token-prediction and predictive coding have the word "predict" in common doesn't mean the two are at all related in any practical sense.
Fascinating framing. What would you consider evidence here?
Other posters have already noted other reasons for it, but I will note that by saying "similar to autocomplete, but obviously..." you suggest you recognize the shape and immediately dismiss it as not the same, because the shape you know in humans is much more evolved and can do more things. Ngl man, as arguments go, that sounds to me like a supercharged autocomplete that was allowed to develop over a number of years.
However, what it is doing is layered autocomplete on itself: one part tries to predict what another part will produce, and it trains itself on that prediction.
What emerges from these layered autocompletes is what we call thought.
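As a toy illustration of that "one part predicting another part" idea (everything below is an assumed, made-up setup in numpy, not a claim about how brains or LLMs actually work): a base next-character predictor A is fit on a text stream, and a second predictor B is trained not on the text but on A's outputs, learning to anticipate what A will say one step before A says it.

    import numpy as np

    rng = np.random.default_rng(0)
    text = "the quick brown fox jumps over the lazy dog " * 50
    vocab = sorted(set(text))
    ix = {c: i for i, c in enumerate(vocab)}
    V = len(vocab)

    # Predictor A: bigram counts -> next-char distribution given the current char.
    A = np.ones((V, V))  # add-one smoothing
    for a, b in zip(text, text[1:]):
        A[ix[a], ix[b]] += 1
    A_dist = A / A.sum(axis=1, keepdims=True)

    # Predictor B: given the current char a, learn what A will predict *after*
    # it sees the next char b -- B is trained on A's outputs, not on the data.
    B = np.ones((V, V)) / V
    lr = 0.05
    for a, b in zip(text, text[1:]):
        target = A_dist[ix[b]]                # A's prediction one step ahead
        B[ix[a]] += lr * (target - B[ix[a]])  # nudge B's row toward A's output

    # B now anticipates A: its row for 'f' converges toward A's row for 'o',
    # because 'f' is always followed by 'o' in this toy corpus.
    print(np.round(B[ix['f']], 2))
    print(np.round(A_dist[ix['o']], 2))

Each added level predicts the output of the level below it, which is all "layered autocomplete" needs to mean here.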