Intelligence likely doesn’t require that much data, and it may be more a matter of evolutionary chance. After all, human intelligence is largely (if not exclusively) the result of natural selection acting on random mutations, over a generation count that’s likely smaller than the number of training iterations of LLMs. We haven’t yet found an effective way to develop a digital equivalent artificially, and the way we currently train neural networks might actually be a dead end here.
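To make the generation-count comparison concrete, here is a back-of-envelope sketch. All figures are rough illustrative assumptions (the span of hominin evolution, an average generation time, and an optimizer-step count for a large training run), not measurements from any specific model or source:

```python
# Back-of-envelope: evolutionary "iterations" vs. LLM training iterations.
# Every number below is an assumed, order-of-magnitude placeholder.

YEARS_OF_HOMININ_EVOLUTION = 6_000_000  # assumed: roughly since the chimp-human split
YEARS_PER_GENERATION = 25               # assumed average human generation time
LLM_TRAINING_STEPS = 1_000_000          # assumed optimizer-step count for a large run

generations = YEARS_OF_HOMININ_EVOLUTION // YEARS_PER_GENERATION
print(f"hominin generations: ~{generations:,}")        # ~240,000
print(f"LLM training steps:  ~{LLM_TRAINING_STEPS:,}")
print(f"fewer generations than training steps: {generations < LLM_TRAINING_STEPS}")
```

Under these assumptions the generation count comes out an order of magnitude below the step count, though the comparison shifts a lot depending on whether you count optimizer steps, tokens seen, or per-example updates.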
That gives us no information about the computational complexity of running that algorithm, or about what exactly it does. Only that it's small.
LLMs don't get that algorithm, so they have to discover certain things the hard way.
Humans ship with all the priors evolution has managed to cram into them. LLMs have to rediscover all of those priors from scratch, just by looking at an awful lot of data.
The fact that they can pull it off to this extent was a very surprising finding.