If you're wondering how they'll know it's happening, the USA has had DARPA monitoring stuff like this since before OpenAI existed.
While one in particular is speedracing into irrelevance, it isn't particularly representative of the rest of the developed world (and hasn't been in a very long time, TBH).
Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in, and regardless of whether you think there's "no identified use-case" for AI. Like a steamroller vs a rubber chicken. But probably Google's AI rather than OpenAI's, I think Gemini 3 is going to be a much bigger upgrade, and Google doesn't have cashflow problems. And if any single country out there is actually preparing for this, I haven't heard about it.
Accusing me of being off-topic is really pushing it: you want to bet on governments' incompetence in dealing with AI, and I don't (on the basis that there are unarguably still many functional democracies out there); on the other hand, the thread you started about the state of Europe's AI industry had nothing to do with that.
> Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in
Nobody knows what the future of AI is going to look like. At present, LLMs/"GenAI" are still very much a costly solution in need of a problem to solve / a market to serve¹. And saying that the USA is somehow uniquely positioned there sounds uninformed at best: there is no moat; all of this development is happening in the open, with AI labs and universities around the world reproducing the research, sometimes at a fraction of the cost.
> And if any single country out there is actually preparing for this, I haven't heard about it.
What is "this", effectively? The new flavour Gemini of the month (and its marginal gains on cooked-up benchmarks)? Or the imminent collapse of our society brought by a mysterious deus ex machina-esque AGI we keep hearing about but not seeing? Since we are entitled to our opinions, still, mine is that LLMs are a mere local maxima towards any useful form of AI, barely more noteworthy (and practical) than Markov chains before it. Anything besides LLMs is moot (and probably a good topic to speculate about over the impending AI winter).
¹: https://www.anthropic.com/news/the-anthropic-economic-index
Is there a source for this other than "trust me bro"? DARPA isn't a spy agency, it's a research organization.
> governments won't "look ahead", they'll just panic when AGI is happening
Assuming the companies tell them, or that there are shadowy deep-cover DARPA agents planted at the highest levels of their workforce.
Please don't cross into personal attack, no matter how wrong another commenter is or you feel they are.
Maybe you can post a link in case anyone else is as clumsy with search engines as I am? After all, you can google it just as fast as you claim I can.
Seriously, our government just announced it's slashing half a billion dollars in vaccine research because "vaccines are deadly and ineffective", it fired a chief statistician because the president didn't like the numbers he calculated, and it ordered the destruction of two expensive satellites because they can observe politically inconvenient climate change. THOSE are the people you're trusting to keep an eye on the pace of development inside private, secretive AGI companies?