
The whole article seems like a strawman.

I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs. And most people who were worried 2 years ago are much less worried.

And a better scenario is: aliens with an IQ of 300 are coming, and they will all be controlled by the [US|Russian|Israeli|Hamas|Al-Qaeda|Chinese] government.

Edit: To be clear, I was referring to people I personally know. Sure, lots of people out there are terrified of lots of things - religious fanaticism, fluoride in the water, AI apocalypse.

And "huge economic disruption" is not "AI taking over humanity". I'm interpreting the article's take on AI doing damage as one where the AI is in control, and no human can stop it. Currently, for each LLM out there, there are humans controlling it.


There's a group of people who have reinvented religion because they're afraid of an AI torturing them for eternity if they don't work on AI hard enough. It's very silly but there are many people who actually believe this is a risk: https://en.wikipedia.org/wiki/Roko's_basilisk
I was referring to people I personally know.

The existence of crazy/anxious people in the world is well established, and not in dispute.

I really don't think that very many people are concerned about AI because of Roko's Basilisk; that's more of a meme.
The AI alignment folks seem much more anxious about an AI version of Pascal's Wager they cooked up.
Sure some of them maybe, but given that many concerned people think the chance of extinction is 10% or higher, it's not really low probability enough to be considered a Pascal's Wager.
I didn't know that 10% was a common threshold. Thank you for the insight.
You are cherry picking the single most absurd event in a history of over 20 years of public discussion of AI catastrophic risk.

Only about 0.0003 of all public discussion of AI catastrophic risk over those 20 years has invoked or referred to Roko's basilisk in any way.

I don't know of anyone worried about AI who is worried mainly because of the basilisk.

Next you'll mention Pascal's Mugging, which likewise is the main worry of exactly zero of the sane people worried about AI -- and (despite the representations of at least one past comment on this site) was never even a component of any argument for the dangerousness of continued "progress" in AI.

So you agree that there is more than "a single person worr[ied] about AIs taking over humanity."

I was specifically pointing out how absurd the most ridiculous people in that category are

Not even them, simulations of them. They're not even real! Why the hell should I care if some far-future AI tortures simulations of me? Go ahead, spin up a hundred, hell spin up a trillion!
If you have not heard of one person worried about AIs taking over humanity, you're really not paying attention.

Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned in the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".

Not one person?

Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

https://en.wikipedia.org/wiki/Statement_on_AI_Risk

It's amusing that this is not a summary - it's the entire statement. Please trust these tech leaders, who may or may not have business interests in AI, that it can become evil or whatever, so that regulatory capture becomes easier, instead of pointing out the dozens of other issues with how AI can be (and already is being) used negatively in our current environment.
Bengio is a professor and Hinton quit Google so that he could make warnings like this.

And this is just to highlight that there are clearly many familiar people expressing "worry about AIs taking over humanity" as per GP.

There are much more in-depth explanations from many of these people.

What's actually amusing is skeptics complaining about the $-incentives of people warning about dangers, as opposed to the trillion-dollar AI industry (Google, Meta, Nvidia, Microsoft, all of VC) trying to bring it about. Honestly, the $ is so lopsided in the other direction. Reminds me of climate change, all the "those people are just in the renewable energy industry lobby"...

But the trillion-dollar industry also signed this statement - that's the point: high-ranking researchers and executives from these companies signed the letter. Individually these people may have valid concerns, and I'm not saying all of them have a financial self-interest, but the companies themselves would not support a statement that would strangle their efforts. What would strangle their efforts is dealing with the other societal effects AI is causing, if not directly then by supercharging bad (human) actors.
I may be cynical but I’ve seen a lot of AI hype artists warn about the danger as a form of guerrilla marketing. Who knows though.
You really think this worries Sam Altman?

I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.

Most of the worries I've heard from Geoff (and admittedly it was in 1-2 interviews) are related to how AI will impact the economic workforce, that the change may be so disruptive as to completely change our way of living, and that we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:

> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.

> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.

Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."

Finally, that statement was in 2023. It's been 2 years. While AI has become much better in many ways, it has mostly just become better in the same ways. I wonder how worried those people are now.

To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.

> You really think this worries Sam Altman?

Yes.

"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening towards this is inevitable, and that given the wildly diverging likely outcomes, the best thing to do is just hope the emerging intelligence will come up with the alignment itself.

On Hinton: "I actually think the risk is more than 50%, of the existential threat."

https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...

I know Sam says stuff, but I don't think he actually is worried about it. It's to his benefit to say things like this, as he gets to be involved in setting the rules that will ultimately benefit him.

As for Hinton:

> He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."

I'm not claiming he's not worried about it. I'm providing context on how he came up with his percentage.

I think it's plausible he is lying about believing that in order to get more money from investors
The whole idea that AI can take over humanity is pretty much nonsense. As in the Terminator films: AGI is somehow developed, then decides to get rid of humanity. This scenario isn't realistic.

It's far more likely we'll develop some lousy AI and then put it in charge of something critical. Either national infrastructure like electricity or nuclear weapons. That lousy AI then produces some lousy outcome, like deciding the only way to stabilize the grid is to disable all electrical production.

The biggest threat to humanity is our own decisions.

> The biggest threat to humanity is our own decisions

The vast majority of us don't have any input on those decisions, unfortunately

Believe me, if I had even a modicum of influence to leverage anywhere, I would be fighting tooth and nail against AI as much as I can

Instead I just growl impotently in comment threads online like this one and hope this new technology finds some kind of equilibrium before it fucks us all over

But it's really tough to feel helpless watching this massive capital machine just grinding over society, knowing how much it is fucking things up

Maybe, MAYBE, AI turns into a wonderful technology that ushers in a post-scarcity utopia, but I can't help but feel we're in for a few years, maybe even a few decades, of extreme pain before that comes about

I'm almost 40. I don't particularly want to live through the back half of my life in the pain that is coming

I am worried about AIs' taking over humanity.

In fact I think it is likely to happen absent some drastic curtailing of the freedoms of the AI labs, e.g., a government-enforced ban on all training of very large models and a ban on publication and discussion of algorithmic improvements.

> I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs.

We all live in our bubbles. In my bubble, people find it more interesting to talk about the bigger picture than about their job.

Maybe you haven't met Rationalists personally, but they are numerous and powerful members of the Bay Area tech scene.

"Many of the A.I. world’s biggest names — including Shane Legg, a co-founder of Google’s DeepMind; Anthropic’s chief executive, Dario Amodei; and Paul Christiano, a former OpenAI researcher who now leads safety work at the U.S. Center for A.I. Standards and Innovation — have been influenced by Rationalist philosophy. Elon Musk, who runs his own A.I. company, said many of the community’s ideas aligned with his own.

"Mr. Musk met his former partner, the pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk."

https://www.nytimes.com/2025/08/04/technology/rationalists-a...

This seems like an argument from incredulity, and also a static-world fallacy.

People are not thinking about the trend in AI. How good were AIs three years ago? How good are they now? How good will they be in another three years?

Kelsey Piper's analogy is good. How much smarter is a teacher than a room full of kindergartners? Who gets whom to do their bidding?
