Long before AI causes mass harm without human involvement, humans will find hundreds of ways to make it cause harm, and harm at scale. I do think the technology itself is part of the risk, though, because of the flaws, the scale, etc. inherent to its current iterations. However, maybe those are still the fault of humans for not giving it the proper limits, warnings, etc. to mitigate those things.
That said, it can potentially be used for good in the right hands (accessibility tools, etc.), though I'm certainly more of a doomer at this point in time.
A small group of people risking the lives of billions without their consent is morally repugnant.
Geoff Hinton /left his job/ at Google because he sees the risks as real, so I think that's a tough case to make.
If it were a perfect society when people discovered deep neural networks, there would be no bad actors to blame for posing the question of whether AGI is an existential problem, and what to do about it. Unlike bad groups of people, the question won't go away. And in the real world, it is entirely possible to consider multiple issues at once, without any of them coming at the expense of the others.
He did not leave his job over the bias and harm the existing AI models at Google were already doing.
He did not quit his job when Google fired their AI ethics champion rather than change their behaviour.
Has he had anything to say about the harm Google's algorithms do?
Their message is being amplified and distorted by forces that are. That they're both comfortably wealthy from creating the problems they now preach solutions to is also no small matter.
Perhaps the simplest explanation for why researchers are saying there is a danger is that they genuinely believe it.
The incentive here is that they want to live, and they want their loved ones to live.
While putting an AI in charge of weapons is the plot of at least three Hollywood films[0], it has also been (GOFAI is still AI) behind at least two real-life near misses that came close to triggering global thermonuclear war[1].
[0] Terminator (and the copycat called X-Men: Days of Future Past); Colossus: The Forbin Project; WarGames
[1] Stanislav Petrov incident, 26 September 1983; Thule Site J incident, 5 October 1960 (AKA "we forgot to tell it that the moon doesn't have an IFF transponder and that's fine")
The other is a hypothetical future scenario that has literally no A->B->C line drawn to it, pushed by people whose ability to feed themselves depends on attention farming.
So no. Let's not focus on the bullshit one, and let's focus on the one that is hurting people in the now.
https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintel...
> For example, it might program a virus that will infect every computer in the world, causing them to fill their empty memory with partial copies of the superintelligence, which when networked together become full copies of the superintelligence. Now the superintelligence controls every computer in the world, including the ones that target nuclear weapons. At this point it can force humans to bargain with it, and part of that bargain might be enough resources to establish its own industrial base, and then we’re in humans vs. lions territory again.
I believe there's a rift between doomers and eye-rollers because this kind of leap sounds either like Hollywood-hacker sci-fi or plausible-and-obvious. The notion that software can re-deploy an improved version of itself without human intervention is just outside the realm of possibility to me (or somehow blackmailing or persuading a human to act alongside it?? Is that AI anymore, or is that just a schizophrenic who thinks the computer is talking to him?)
These are all taken as a given because the entire concept is just Old Testament God but with glowy parts. This is an essential part of the dogma, which is why there's never any sort of justification for it. The super-smart computer is just assumed to be magic.
It's plausible and obvious to them because "a super-intelligence can make anyone do anything", can reprogram any computer to its will, and can handwave away literally any technical deficiency.
Compiler optimizations are a counterexample of sorts; they could be called automated self-improvement. But they do the same thing with fewer resources, whereas what I'm looking for is a reason to believe an optimization could go the other way: more capabilities with the same resources.
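To make that distinction concrete, here is a toy sketch of the compiler-style direction (my own illustration, not something from the thread): a constant-folding pass rewrites a program into an equivalent one that does less work at runtime. Nothing about it gives the program abilities it didn't already have.

    import ast, operator

    # Which binary operators the folder is willing to evaluate ahead of time.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    class ConstantFolder(ast.NodeTransformer):
        """Replace constant arithmetic with its result: same meaning, less runtime work."""
        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold children first, bottom-up
            if (isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)
                    and type(node.op) in OPS):
                value = OPS[type(node.op)](node.left.value, node.right.value)
                return ast.copy_location(ast.Constant(value=value), node)
            return node

    tree = ConstantFolder().visit(ast.parse("seconds = days * (60 * 60 * 24)"))
    print(ast.unparse(ast.fix_missing_locations(tree)))  # seconds = days * 86400

That is the direction compilers actually go; the comment above is asking for evidence of the reverse.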
"A God is a mind that is much more intelligent than any human."
"Go programs had gone from “dumber than children” to “God” in eighteen years, and “from never won a professional game” to “Godlike play” in six months."
"God has an advantage that a human fighting a pack of lions doesn’t – the entire context of human civilization and technology, there for it to manipulate socially or technologically."
"A God might be able to analyze human psychology deeply enough to understand the hopes and fears of everyone it negotiates with."
"Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical God is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways."
(Oh, god, there's my personal bugbear: "bigger brain size correlates with greater intelligence". You know what else correlates with bigger brain size? Hint: I'm 6'5"; how much smarter am I than someone 5'1"?)
Are you suggesting human intelligence is unique and the upper limit for intelligence? I mean we can argue about the difficulty and timeline for achieving human level artificial intelligence, but it seems unlikely that it's impossible.
> literally no A->B->C line drawn to it
Spend a couple of hours educating yourself about the issue, and then please make a convincing counterargument:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
https://astralcodexten.substack.com/p/why-i-am-not-as-much-o...
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
There is a good real-world example of how aggressively optimising for a goal can make the world unliveable for some species: humans destroying the environment for our economic growth. But at least there we do care about it and are trying to change it, because, among other reasons, we need the environment to survive ourselves. An AI wouldn't necessarily need that, so it would treat us more like we treat bacteria or something, only keeping us around if necessary or if we don't get in the way (and we will get in the way).
It might sound silly, but there is some solid reasoning behind it.
Sure, the probability of AI-triggered extinction is lowish, but the consequences are essentially infinite, so we are justified in focusing entirely on the AI-extinction threat. (But I guess it's not infinite enough to justify rounding up all the AI researchers and shooting them?)
It's rhetorical sleight of hand.
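For what it's worth, the arithmetic behind the move being criticized is easy to spell out (a toy sketch of my own, assuming the loss is modeled as infinite, the way the argument treats it):

    def expected_loss(p_extinction, loss):
        # Standard expected-value bookkeeping: probability times cost.
        return p_extinction * loss

    for p in (1e-2, 1e-6, 1e-12):
        print(p, expected_loss(p, float("inf")))
    # Prints inf for every p: once the cost is "essentially infinite", the
    # probability estimate stops doing any work, which is the sleight of hand.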
It seems to me they're inconsistent ways of understanding what is happening, since concern about misuse also implies regulation. But I see a lot of people on HN who say both things and give the impression that they're agreeing with each other.
MS Azure Cloud is. Anthropic (a very alignment/safety-centered company) is backed by Google and their cloud. Any API tokens sold are just cloud credits with markup.
So to me the doomerism does come from those entities acting as fronts for the major players that would very much like to build a regulatory trench.
Setting aside the plausibility of this degree of coordination, I don't even see the alignment of interests. Why would Google or Azure care whether they sell GPUs to a quasi-monopoly or to a bunch of smaller competitors? In fact, isn't the standard economic argument that monopolies underproduce, so competition should, if anything, mean more GPU sales? Meanwhile, promoting doomerism seems pretty risky for the GPU business - they have to promote it just enough to get hype and regulatory capture, but not enough that their ability to buy and use GPUs is actually restricted. Seems like a risky bet for a corporate exec to make, unless they can really control the doomers with puppet-like precision...
what gets published by academia is based entirely on who gets funded
my conspiracy theory is a stretch, granted, but to clarify:
It behooves the cloud providers for their customers to believe that their latest $1/hour/user upcharge is revolutionary (so much so that ethicists are shouting: please, stop, you know not what you do!)
OpenAI and Anthropic need the public to trust them and only them as providers of "safe" AI (not like the other, open-source AI that might turn into an especially persuasive Holocaust denier any minute) - so from the regulatory angle they want to play up the theoretical dangers of AI while assuring the public the technology is in good hands.
As for the academics, well, it's not like anyone gets funding for writing boring papers about how nothing is going to happen and everything is fine. No one has to be puppeteered; they just need to look around and see which hype train might direct some funding their way.
Why aren't we talking more about it?
I read the article you linked to, both parts. I wonder how much of the psychotic breaks in the rationalist community come from 1) people with tendencies toward mental illness gravitating toward rationalism, or 2) rationalism being a way to avoid built-in biases in human thought, when those biases are important to keeping us sane at an individual level. (If you fully grasp the idea that everyone might die, and have an emotional reaction proportionally larger than the one you'd have for the death of just one person you know, it can be devastating.) I think we are bad at thinking about big numbers and risks, because being very good at evaluating risks is actually not great for short-term, individual survival -- even if it's good for survival as a species.
I know personally the whole AI/AGI thing has got me really down. It's a struggle to reconcile that with how little stock a lot of people seem to put in the idea of AGI ending up in control of humanity at some point. I totally agree that everything on your list is a real issue -- but even if we completely solve all those issues, how do we not end up with a society where most important decisions are eventually made by AGI? My assumptions are 1) that we eventually make AGI which is simply superior to humans at making decisions and planning, and 2) that there will be significant pressure from capitalism and competition among governments to use AGI over people once that's the case. Similar to how automation has almost always won out over hand-production so far for manufactured goods.
That's more the scenario Paul Christiano worries about than the one Yudkowsky does, and it seems more likely to me. But I still think a lot of our mental heuristics about what we should worry about break down when it comes to creating something that out-guns us in terms of brainpower, and I think Yudkowsky makes a lot of good points about how we tend to shy away from facing reality when reality could be dreadful. It's really easy to have a mental block about inventing something that makes humanity go extinct; if it's possible to do that, there's no outside force that will swoop in and stop us like a parent stopping a child from falling into a river. If this is a real danger, we have to identify it in advance and take action to prevent it, even while there are a bunch of other problems to deal with, even while a lot of people don't believe in it, and even while there's a lot of money to be made in the meantime, each bit of which takes us closer to a bad outcome. Reality isn't necessarily fair: we could be really screwed, and have all the problems you mentioned in addition to the risk of AGI killing us all (either right away, or by taking over and gradually using up all the resources we need to live, like we've done to so many species).
https://aiascendant.substack.com/p/extropias-children-chapte... (and I still recommend all 7 parts)
It's hard for me to respond to the rest of your comments, because I simply don't agree with the framing. To me it looks like a big trap where people read a lot of words but don't connect those words to reality, which requires action.
If you only manipulate words (e.g. "intelligence" and "AGI" are big ones), then you're prone to believing in and reflecting illusions.
---
You didn't ask for advice, but you say you're not feeling good. I would draw an analogy to a friend who in 2017 was preoccupied with Trump possibly causing a nuclear war with North Korea. There were a lot of memes in the news about this.
Now certainly I couldn't say at the time that it was impossible. But looking back 6 years, I'm glad that I completely ignored it and just lived my life. And that isn't to say it's gone, and not real -- certainly something like that could happen in the future, given the volatile political situation. But it's simply out of my control. (And that's what I told my friend -- are you going to DO anything about it? He was upset that I wasn't upset.)
Regardless of what you believe, I think if you just stop reading memes, and do other stuff instead, you won't regret it in 6 years.
I agree with the Substack in that rationalists can be curiously irrational when it comes to examining where they get their own beliefs. (Or maybe they examine them in a "logical" way but completely miss the obvious, common-sense fact -- like MacAskill writing an entire book about moral actions while his life's work was funded by a criminal, a close associate whom he vouched for!)
Like EVERYONE ELSE, they are getting beliefs from their reptile brain ("world will definitely end"), and then they are CONFABULATING logical arguments on top to justify what the reptile brain wants.
And you can't really change your reptile brain's mind by "thinking about it" -- it responds more to actions and experiences than the neocortex. It definitely responds to social status and having friends, which can lead to cult-like behavior.
I'd say that humans have a "circuit" where they tend to believe the "average" of what people around them believe. So if you read a lot of "rationalist" stuff, you're going to believe it, even if it's not grounded in experience, and there are gaping holes in the logic.
---
The big trick I see in a lot of "rationalist" writing is the same one used in the Bostrom book.
1. You state your premises in a page or so. Most of the premises are reasonable, but maybe one is vague or slightly wrong. You admit doubt. You come across as reasonable.
2. You spend 300 pages extrapolating wildly from those premises, creating entire worlds. You invent really fun ideas, like talking about a trillion trillion people, a trillion years into the future, across galaxies, and what they want.
You use one of those slightly vague premises, or maybe a play on a word like "intelligence", to make these extrapolations seem like logical deductions.
You overthink it. Other people who like overthinking like what you wrote.
You lack the sense of proportion that people who have to act must acquire -- people with skin in the game.
Following this methodology is a good way to lead to self-contained thoughts, divorced from reality. The thoughts feed on themselves both individually and socially. If you're wrong, and you frequently are, you can just draw attention to some new fun ideas.
---
Anyway, I'm glad you have followed the project. Speaking from experience, I would suggest, concretely, replacing the rumination time with work on an open source project. If not coding, it could be testing or documentation, etc. Having a sense of achievement may make past negative thoughts about far-off things seem silly.
They are grabbing headlines and moving the conversation away from the real issues, which are how AI is used in education, health care, law enforcement, securities and housing markets, the government, the military, and more.
https://www.hackerneue.com/item?id=36100525