The point I am trying to make is that the benefits of AI-related tech are likely to be quite pervasive, and we should be looking at what corporations are actually doing. Sort of what this poem says:
> For while the tired waves, vainly breaking / Seem here no painful inch to gain, / Far back through creeks and inlets making, / Comes silent, flooding in, the main.
The potential of a sufficiently intelligent agent, probably something very close to a really good AGI, albeit still not an ASI, could be measured in billions upon billions of mostly immediate return on investment. LLMs are already well within the definition of hard AI, and there are already strong signs they could be some kind of "soft AGI".
If by chance you're the first to reach ASI, all bets are off: you've just won everything on the table.
Hence, you have this technology, the LLM; then most of the experts in the field (in the world, blah blah) say "if you throw more data into it, it becomes more intelligent"; then you "just" assemble an AI team and start training bigger, better LLMs, ASAP, AFAP.
More or less this is the reasoning behind the investments, not counting the typical pyramid schemes of investment in hyped new stuff.
Juicero, tulips....
> then you "just" assemble an AI team, and start training bigger, better LLMs, ASAP, AFAP.
There's a limit to LLMs, and we may have reached it. Both physical: there is not enough capacity in the world to train bigger models. And data-related: once you've gobbled up most of the internet, movies, and visual arts, there's an upper limit on how much better these models can become.
Oh sure, yes. For Nvidia.
Gold rush, shovels...
The association with higher AI goals is merely a mixture of pure marketing and LLM company executives getting high on their own supply.