* or governments fail to look far enough ahead, due to a bunch of small-minded short-sighted greedy petty fools.

Seriously, our government just announced it's slashing half a billion dollars in vaccine research because "vaccines are deadly and ineffective", and it fired a chief statistician because the president didn't like the numbers he calculated, and it ordered the destruction of two expensive satellites because they can observe politically inconvenient climate change. THOSE are the people you are trusting to keep an eye on the pace of development inside of private, secretive AGI companies?


That's just it: governments won't "look ahead", they'll just panic when AGI is happening.

If you're wondering how they'll know it's happening, the USA has had DARPA monitoring stuff like this since before OpenAI existed.

> governments

While one in particular is speedracing into irrelevance, it isn't particularly representative of the rest of the developed world (and hasn't been in a very long time, TBH).

"irrelevance" yeah sure, I'm sure Europe's AI industry is going to kick into high gear any day now. Mistral 2026 is going to be lit. Maybe Sir Demis will defect Deepmind to the UK.
That's not what I was going for (I was more hinting at isolationist, anti-science, economically self-harming and freedoms-eroding policies), but if you take solace in believing this is all worth it because of "AI" (and in denial about the fact that none of those companies are turning a profit from it, and that there is no identified use-case to turn the tables down the line), I'm sincerely happy for you and glad it helps you cope with all the insanity!
I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.

Like how I can say that the future of the USA's AI is probably going to obliterate your local job market regardless of which country you're in, and regardless of whether you think there's "no identified use-case" for AI. Like a steamroller vs a rubber chicken. But probably Google's AI rather than OpenAI's; I think Gemini 3 is going to be a much bigger upgrade, and Google doesn't have cashflow problems. And if any single country out there is actually preparing for this, I haven't heard about it.

> I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.

Accusing me of being off-topic is really pushing it: you want to bet on governments' incompetence in dealing with AI, and I don't (on the basis that there are unarguably still many functional democracies out there). On the other hand, the thread you started about the state of Europe's AI industry had nothing to do with that.

> Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in

Nobody knows what the future of AI is going to look like. At present, LLMs/"GenAI" are still very much a costly solution in need of a problem to solve/a market to serve¹. And saying that the USA is somehow uniquely positioned there sounds uninformed at best: there is no moat, all of this development is happening in the open, with AI labs and universities around the world reproducing this research, sometimes for a fraction of the cost.

> And if any single country out there is actually preparing for this, I haven't heard about it.

What is "this", effectively? The new flavour Gemini of the month (and its marginal gains on cooked-up benchmarks)? Or the imminent collapse of our society brought by a mysterious deus ex machina-esque AGI we keep hearing about but not seeing? Since we are entitled to our opinions, still, mine is that LLMs are a mere local maxima towards any useful form of AI, barely more noteworthy (and practical) than Markov chains before it. Anything besides LLMs is moot (and probably a good topic to speculate about over the impending AI winter).

¹: https://www.anthropic.com/news/the-anthropic-economic-index

> the USA has had DARPA monitoring stuff like this since before OpenAI existed

Is there a source for this other than "trust me bro"? DARPA isn't a spy agency; it's a research organization.

> governments won't "look ahead", they'll just panic when AGI is happening

Assuming the companies tell them, or that there are shadowy deep-cover DARPA agents planted at the highest levels of their workforce.

[flagged]

> it sounds like you're triggered or something

Please don't cross into personal attack, no matter how wrong another commenter is or you feel they are.

I googled it, and I can't find support for the claim that DARPA is monitoring internal progress of AI research companies.

Maybe you can post a link in case anyone else is as clumsy with search engines as I am? After all, you can google it just as fast as you claim I can.
