It's not, and that's pretty evident to anyone who has actually used SotA LLMs for more than 5 minutes.
---
LLM: The answer is A.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is B.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is A.
Me: Time to short NVDA.
LLM: As an AI language model without real-time market data or the ability to predict future stock movements, I can't advise on whether it's an appropriate time to short NVIDIA or any other stock.
---
When you look at things like https://arxiv.org/abs/2408.06195 you notice that the number of tokens needed to solve trivial tasks is somewhat ridiculous: on the order of 300k tokens for a simple grade-school problem. That is roughly three hours at a rate of 30 tokens/s. You could fill 400 pages of a book with that many tokens.
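A quick back-of-envelope sketch of those figures; the 30 tokens/s generation rate and the tokens-per-page estimate are illustrative assumptions, not numbers taken from the paper:

    # Rough check of the token budget quoted above.
    # The generation rate and tokens-per-page figure are assumptions,
    # not values from the linked paper.
    tokens = 300_000        # tokens spent on one grade-school problem
    rate = 30               # assumed generation speed, tokens per second
    tokens_per_page = 750   # ~500 words/page at ~1.5 tokens per word

    hours = tokens / rate / 3600
    pages = tokens / tokens_per_page

    print(f"{hours:.1f} hours of generation")  # ~2.8 hours
    print(f"{pages:.0f} book pages of text")   # ~400 pages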
Super-human breadth of knowledge is certainly real (all of Wikipedia, in multiple languages, available instantly).
Consider, however, an important distinction. A young child is exactly the wrong way to think about these machines and their outputs. The implicit assumption in that analogy is that there is some human-like progression toward greater capability; that is not so.
Also note that "chain of reasoning," around 2019 or so, was exactly the emergent behavior that convinced many scientists there was more going on than just a "stochastic response" machine. Some leading LLMs do have the ability to solve multi-step puzzles, against the expectations of many.
My "gut feeling" is that human intelligence is multi-layered and not understood; very flexible and connected in unexpected ways to others and the living world. These machines are not human brains at all. General Artificial Intelligence is not defined, and many have reasons to spin the topic in public media. Let's use good science skills while forming public opinion on these powerful and highly-hyped machines.
Why do you believe that's not an emergent phenomenon arising from its vast training corpus?