
Not one person?

Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

https://en.wikipedia.org/wiki/Statement_on_AI_Risk


It's amusing that this is not a summary; it's the entire statement. Please trust these tech leaders, who may or may not have business interests in AI, that it can become evil or whatever, so that regulatory capture becomes easier, instead of pointing out the dozens of other issues with how AI can be (and already is being) used negatively in our current environment.
Bengio is a professor and Hinton quit Google so that he could make warnings like this.

And this is just to highlight that there are clearly many well-known people expressing "worry about AIs taking over humanity", as per GP.

There are far more in-depth explanations from many of these people.

What's actually amusing is skeptics complaining about the financial incentives of people warning about dangers, as opposed to the trillion-dollar AI industry (Google, Meta, Nvidia, Microsoft, all of VC) trying to bring it about. Honestly, the money is lopsided in the other direction. It reminds me of climate change and all the "those people are just in the renewable energy lobby" talk...

But the trillion-dollar industry also signed this statement; that's the point: high-ranking researchers and executives from these companies signed the letter. Individually these people may have valid concerns, and I'm not saying all of them are acting out of financial self-interest, but the companies themselves would not support a statement that would strangle their efforts. What would strangle their efforts is dealing with the other societal effects AI is causing, if not directly then by supercharging bad (human) actors.
I may be cynical but I’ve seen a lot of AI hype artists warn about the danger as a form of guerrilla marketing. Who knows though.
You really think this worries Sam Altman?

I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.

Most of the worries I've heard from Geoff (and admittedly it was in 1-2 interviews) relate to how AI will impact the workforce: the change may be so disruptive as to completely change our way of living, and we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:

> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.

> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.

Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."

Finally, that statement was from 2023. It's been two years. While AI has become much better in many ways, it has mostly only become better in the same ways. I wonder how worried those people are now.

To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.

> You really think this worries Sam Altman?

Yes.

"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening toward this is inevitable, and that, given the wildly diverging likely outcomes, the best thing to do is just hope the emerging intelligence will work out the alignment itself.

On Hinton: "I actually think the risk is more than 50%, of the existential threat."

https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...

I know Sam says stuff, but I don't think he actually is worried about it. It's to his benefit to say things like this, as he gets to be involved in setting the rules that will ultimately benefit him.

As for Hinton:

> He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."

I'm not claiming he's not worried about it. I'm providing context on how he came up with his percentage.

I think it's plausible he is lying about believing that in order to get more money from investors.
