It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to other systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.
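To make that internal/external split concrete, here's a minimal toy sketch in Python. Everything in it is hypothetical: answer_from_weights() stands in for parametric "internal" knowledge baked into the weights, and web_search() stands in for an "external" lookup at inference time; neither is any real library's API.

    def answer_from_weights(question: str) -> tuple[str, float]:
        """Pretend parametric knowledge: returns (answer, confidence)."""
        memorized = {
            "capital of france": ("Paris", 0.98),
        }
        return memorized.get(question.lower(), ("unknown", 0.0))

    def web_search(question: str) -> str:
        """Stand-in for an external tool call (search engine, API, database)."""
        return f"[external result for: {question}]"

    def answer(question: str, threshold: float = 0.5) -> str:
        # Internal retrieval: the compressed training data in the weights.
        guess, confidence = answer_from_weights(question)
        if confidence >= threshold:
            return guess
        # External retrieval: anything fetched at inference time.
        return web_search(question)

    print(answer("capital of France"))        # -> Paris (internal)
    print(answer("today's weather in Oslo"))  # -> [external result ...] (external)

Either way, both paths are retrieval; only the storage location differs.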
Given that "thinking" still hasn't been defined rigorously, I don't understand how people are so confident in claiming LLMs don't think.
Isn't that still "not thinking"?
LLMs be like "The dumb humans can't even see the dots"[1]
[1]https://compote.slate.com/images/bdbaa19e-2c8f-435e-95ca-a93...
How about non-determinism (i.e. hallucinations)? Ask a human ANY question 3 times and they will give you the same answer, every time, unless you prod them or rephrase the question. Sure, the answer might be wrong all 3 times, but at least you have consistency. Then again, maybe that's a disadvantage for humans!
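For what it's worth, that run-to-run variation mostly comes from how tokens are sampled rather than from the model itself: greedy (argmax) decoding gives the same output for the same prompt every time, while temperature sampling deliberately randomizes the pick. A toy sketch, with a made-up next-token distribution rather than output from any real model:

    import math
    import random

    # Invented next-token scores for the prompt "The capital of France is".
    # A real model would produce these; the numbers here are for illustration only.
    logits = {"Paris": 4.0, "Lyon": 1.5, "a": 0.5}

    def sample_next_token(temperature: float) -> str:
        if temperature == 0.0:
            # Greedy decoding: always pick the most likely token -> deterministic.
            return max(logits, key=logits.get)
        # Temperature sampling: softmax over scaled logits, then draw randomly.
        scaled = {tok: math.exp(v / temperature) for tok, v in logits.items()}
        total = sum(scaled.values())
        weights = [w / total for w in scaled.values()]
        return random.choices(list(scaled), weights=weights)[0]

    print([sample_next_token(0.0) for _ in range(3)])  # same answer 3 times
    print([sample_next_token(1.5) for _ in range(3)])  # may differ run to run

Whether a confidently sampled wrong answer counts as a hallucination or just as noise is a separate question; the variability itself is a decoding choice.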