It's amusing that people still complain about AI ethics and debiasing (aka "alignment") being an early focus of EAs, long after it has become an increasingly relevant research field, with controversy regularly making the tech news. If anything, that AI focus counts as a success story for effective altruism, much like the similar case of pandemic preparedness.

One thing I've wondered is when Effective Altruists are going to stop calling AI ethics neglected. In my view, the field is now mainstream, yet people keep applying the label. I personally think Effective Altruists should focus less on AI ethics now that it has gained mainstream attention, and more on other, currently neglected topics.
By neglected, they mean that, instead of dealing with the real ethics of AI in practice and in the future, they want others to entertain their fantasies about a scary overlord general AI, as if it's just around the corner and coming to get them. They're upset that people aren't as spooked by science fiction as they are.
I think it's telling that the people most critical of AI alignment as a cause for concern so rarely engage substantively with the very reasonable case for taking it seriously.
I read the GP as trying to draw a distinction between “real ethics of AI in practice” and “fantasies about scary overlord general AI.”

The former sounds like it would be full of reasonable cases for taking it seriously.

I know you can't appreciate now how poorly this comment will age, but I hope you at least remember to reflect back on it sometime.
I'll have all the time in the world to reflect on it when the singularity happens, as I'm tortured infinitely by the Basilisk I called science fiction.
> One thing I've wondered is when Effective Altruists are going to stop calling AI ethics neglected.

AI ethics isn’t neglected, because AI ethics doesn’t matter. AI alignment matters. If a self-bootstrapping, generally intelligent AI emerges, the only important question about it is “Is this aligned with human values?” If the answer is no, we’re all going to die, because we’re made of atoms at the bottom of a gravity well, and it’s going to use all the atoms here before getting out. The ethics of AI is completely irrelevant in comparison.

I tend to use the terms "AI ethics", "AI safety", and "AI alignment" interchangeably. This may not be technically correct.
It’s not; they’re pretty much completely unrelated fields. AI ethics focuses very little on the issues AI alignment worries about, which tend to be much bigger and more general problems.

The fact that you’re grouping AI alignment with AI ethics is kind of an indication of the problem; most people have heard so little about AI alignment that they assume it’s the same thing as AI ethics.

> they’re pretty much completely unrelated fields

Completely unrelated? Even you must accept that they are related through AI.

But, I now see that AI ethics and AI alignment are different. Thank you. However, I think my larger point still stands, as I was thinking mostly about efforts under the banners of "AI safety" and "AI alignment" when I wrote my comments here. I do not believe these efforts are "neglected" in the sense 80,000 Hours uses the term.

A minuscule fraction of global GDP going into preventing misaligned AI would always look like neglect if you thought the end of the world was at stake.
> AI ethics as a field is now mainstream

Let me guess: you work in tech and live in a city? I personally believe that if a superintelligent AI were created, it would have an effect on society comparable to the advent of nuclear weapons. So I respectfully disagree that it's not neglected; I don't think the average American thinks about how close we are to human-level intelligence or what that would mean for society.

When I say AI safety is not neglected, I do not mean it is unimportant. I mean it is not neglected in the sense that increasing the amount of money and people directed at the problem is unlikely to help much. That's a common definition among effective altruists. (And it's part of why 80,000 Hours says that nuclear security, the example you gave, is not particularly neglected.) I meant that AI safety is mainstream in the sense that it appears in popular journalism and has a large following, nothing more. I did not mean that most people are aware of the problem.

Also: I'm not a professional programmer, though programming has been one of my job responsibilities in the past, but never all of my job. And I live in a rural area at the moment. Not that any of these are relevant.

By all available evidence it's horribly neglected at Big Tech firms; we keep seeing examples of AI researchers there not being taken seriously despite the quality of their work.
> we keep seeing examples of AI researchers there not being taken seriously despite the quality of their work

This phenomenon is not unique to AI safety. It is often driven by management incentives.

I agree that AI safety is not a solved problem in practice, but that doesn't mean it's neglected. AI safety gets a lot of attention, and it is important, but it's not "neglected" in the sense that too few people work on it or that there isn't enough money in it [0]. I think the marginal impact of a new person in AI safety is roughly zero, all else equal. AI safety folks would probably do best to shift their priorities away from "ivory tower" issues toward the practical issues you bring up.

[0] EAs typically use people or money to measure neglectedness. https://80000hours.org/articles/problem-framework/#how-to-as...
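
To spell out the arithmetic behind that framework (a rough sketch in my own notation, not 80,000 Hours' exact formulation): under their diminishing-returns assumption, the good an extra person can do scales with the problem's scale and solvability, divided by the resources already devoted to it:

\[ \text{marginal impact} \;\propto\; \frac{\text{scale} \times \text{solvability}}{\text{resources already invested}} \]

So as a field attracts more money and people, the denominator grows and the case for sending one more person there weakens, even if the problem itself remains enormous. That's the sense in which I'm calling AI safety no longer neglected.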

What do you think are currently neglected topics?
To give just one example, I think Effective Altruists focus far too little on meta-science. Much of what they want to do depends on meta-science, and in science it's quite difficult to fund that sort of work, so it seems odd to me that it's not considered a top priority on 80,000 Hours.

I'm sure there are other areas, but I haven't made an effort to list them. Global priorities work is pretty hard.

Also, apparently 80,000 Hours now considers AI safety only "somewhat neglected", so perhaps the EAs behind 80,000 Hours agree with me more than I thought. https://80000hours.org/problem-profiles/positively-shaping-a...

- Buying rainforest land and paying locals to guard it.
- Greening energy production in developing countries. Their electrical grids are already unstable, so even improving energy just when the sun is shining could help.
- Research into oceanic biological collapse.
- Finding and destroying illegal fishing operations. (Heck, make an AI boat do it, and you'll drum up even more support for AI alignment.)
The first point is exactly what I donated to for a few years based on an EA recommendation 10 years ago!
AI alignment research is still extremely neglected. There’s a handful of researchers looking at it, and that’s about it. There’s plenty of coverage and criticism of AI, but it tends to be very different from the kinds of things EAs worry about.

The only place I know of alignment actually being covered in normal media is Vox’s Future Perfect (and by Matt Yglesias, who used to work there), and that’s because EAs literally pay them to cover it.
