Are you saying you disagree that a new architectural leap is needed, and that more compute for training is enough? Or are you saying a new architectural leap is needed, and that this new architecture (or architectures) will only be trainable with insane amounts of compute?
If the latter, I don't understand how you could know that about an innovation that hasn't been made yet.
In other words, it is MORE likely that OpenAI/Google/Microsoft/Grok/Anthropic gets us closer to AGI than a startup we haven't heard of yet, simply because BigTech has cornered the market and has a de facto monopoly on compute itself. Even if you had raised $10 billion in VC funding, you literally cannot buy GPUs, because there is not enough manufacturing capacity in the world to fill your order. Investors know this, so capital flows to BigTech rather than to VC funds. Which creates a cycle: BigTech gets bigger and squeezes out VC money for startups.
At best, those startups can sell their IP to BigTech, who will then commercialize it.