The former sounds like it would be full of reasonable cases for taking it seriously.
AI ethics isn’t neglected, but it also doesn’t much matter. AI alignment matters. If a self-bootstrapping, generally intelligent AI emerges, the only important question about it is “Is this aligned with human values?” If the answer is no, we’re all going to die, because we’re made of atoms at the bottom of a gravity well and it’s going to use all the atoms there before getting out. The ethics of AI is completely irrelevant in comparison.
The fact that you’re grouping AI alignment with AI ethics is itself an indication of the problem: most people have heard so little about AI alignment that they assume it’s the same thing as AI ethics.
Completely unrelated? Even you must accept that they are related through AI.
But I now see that AI ethics and AI alignment are different. Thank you. However, I think my larger point still stands, as I was thinking mostly about efforts under the banners of "AI safety" and "AI alignment" when I wrote my comments here. I do not believe these efforts are "neglected" in the sense 80,000 Hours uses.
Let me guess: you work in tech, and in a city? I personally believe that if a superintelligent AI were created, it would have the same effect on society as the advent of nuclear weapons. So I respectfully disagree that it's not neglected; I don't think the average American thinks about how close we are to human-level intelligence or what that would mean for society.
Also: I'm not a professional programmer, though programming has been one of my job responsibilities in the past; it has never been all of my job. And I live in a rural area at the moment. Not that any of this is relevant.
This phenomenon is not unique to AI safety. It is often driven by management incentives.
I agree that AI safety is not a solved problem in practice, but that doesn't mean it's neglected. AI safety gets a lot of attention, and it is important, but it's not "neglected" in the sense that too few people work on it or that there isn't enough money in it [0]. I think the marginal impact of a new person in AI safety is roughly zero, all else equal. AI safety folks would probably do best to shift their priorities away from "ivory tower" issues toward the practical issues you bring up.
[0] EAs typically use people or money to measure neglectedness. https://80000hours.org/articles/problem-framework/#how-to-as...
I'm sure there are other areas, but I haven't put effort into listing them. Global priorities work is pretty hard.
Also, apparently 80,000 Hours now considers AI safety only "somewhat neglected", so perhaps the EAs behind 80,000 Hours agree with me more than I thought. https://80000hours.org/problem-profiles/positively-shaping-a...
The only place I know of alignment actually being covered in mainstream media is Vox’s Future Perfect (and by Matt Yglesias, who used to work there), and that’s because EAs literally pay them to cover it.