
No. Even then. You may know assholes. User accounts may be compromised. Users may have different tolerances for gore than you do.

Not gotchas, I’m not arguing for the sake of it, but these are pretty common situations.

I always urge people to volunteer as mods for a bit.

At least you may see a different way to approach things, or else you might be able to better articulate the reasons the rule can't be followed.


Would not a less draconian solution then be to hide the link, requiring the user to click through a [This link has been hidden due to linking to [potential malware/sexually explicit content/graphically violent content/audio of a loud Brazilian orgasm/an image that has nothing to do with goats/etc]. Type "I understand" here ________ to reveal the link.]?

You get the benefits of striving to warn users, without the downsides of it being abusive, or seen as abusive.
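To make the idea concrete, here's a minimal sketch of how that interstitial could work server-side. Everything here is hypothetical for illustration: the category names, the `render_link`/`reveal_link` helpers, and the acknowledgement phrase are not any real platform's API.

```python
# Minimal sketch of the click-through warning idea (hypothetical helpers).

WARNING_TEMPLATE = (
    'This link has been hidden due to linking to {category}. '
    'Type "I understand" below to reveal the link.'
)

def render_link(url: str, category: str | None) -> dict:
    """Decide what the page shows for a submitted link."""
    if category is None:
        # Nothing flagged: show the link as usual.
        return {"type": "plain_link", "url": url}
    return {
        "type": "hidden_link",
        "warning": WARNING_TEMPLATE.format(category=category),
        # The real URL is only handed out by reveal_link() once the user
        # has typed the acknowledgement.
    }

def reveal_link(url: str, typed_acknowledgement: str) -> str | None:
    """Release the hidden URL only if the user typed the exact phrase."""
    if typed_acknowledgement.strip().lower() == "i understand":
        return url
    return None

# Example: render_link("https://example.test/goats", "graphically violent content")
```

The hard part, of course, is deciding which category (if any) applies to a given link in the first place.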

It’s not a bad option, and there may be some research that suggests this will reduce friction between mod teams and users.

If I were to build this… well, first I would have to ensure no link shorteners are allowed, then I would need a list of known tropes and memes, and a way to add to that list over time.

This should get me about 30% of the way there. Next, even if I ignore adversaries, I would still have to contend with links which have never been seen before.

So for these links, someone would have to be the sacrificial lamb and click through to see what's on the other side. Ideally this would be someone on the mod team, but there can never be enough mods to handle the volume.
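A rough sketch of what that first pass might look like, assuming a hypothetical `screen_link()` helper; the domain lists are made-up examples, not a real blocklist:

```python
# First-pass link screen: block URL shorteners, label links already on a
# maintained list, and queue anything never seen before for human review.

from urllib.parse import urlparse

URL_SHORTENERS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl"}

# Known tropes/memes, extended by the mod team over time (illustrative only).
KNOWN_LABELS = {
    "example-goat-site.test": "an image that has nothing to do with goats",
    "example-malware.test": "potential malware",
}

def screen_link(url: str) -> tuple[str, str | None]:
    """Return (verdict, label); verdict is "reject", "label", or "needs_review"."""
    host = (urlparse(url).hostname or "").removeprefix("www.")

    if host in URL_SHORTENERS:
        # Shorteners hide the destination, so the screen can't do its job.
        return ("reject", None)
    if host in KNOWN_LABELS:
        return ("label", KNOWN_LABELS[host])
    # Never-seen-before link: someone (ideally a mod) has to look at it.
    return ("needs_review", None)
```

Note that everything ending up in "needs_review" is exactly the sacrificial-lamb problem above: code can only tell you that a human has to look.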

I guess we’re at the mod coverage problem - take volunteer mods; it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.

That is IF you get reports. People click through to malware but aren’t aware of it, so they don’t report. Or they encounter the goats and just quit the site without bothering to report.

I’m actually pulling my punches here, because many issues, e.g. adversarial behavior, just nullify any action you take. People could decide to say that you are applying the label incorrectly, and that the label itself is censorship.

This also assumes that you can get engineering resources applied - and it’s amazing if you can get their attention. All the grizzled T&S folk I know develop very good mediation and diplomatic skills just to survive.

That’s why I really do urge people to get into mod teams, so that the work gets understood by normal people. The internet is banging into the hard limits of our older free speech ideas, and people are constantly taking advantage of blind spots amongst the citizenry.

> I guess we’re at the mod coverage problem - take volunteer mods; it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.

When I consider my colleagues who work in the same department, they have very different preferences about their work hours (one colleague would even love to work from 11 pm to 7 am - and then go to sleep - if he were allowed to). If you ensure that you have both larks and night owls among your volunteer moderation team, this problem should be mitigated.

Then this comes back to the size of the network. HN, for example, is small enough that we have just a few moderators here and it works.

But once the network grows large, it requires a lot of moderators, and you start running into problems of maintaining moderation quality across large groups of people.

This is a difficult and unsolved problem.

I admit that ensuring consistent moderation quality is a harder problem than moderation coverage (the sleep pattern ;-) problem).

Nevertheless, I do believe that at least partial solutions to this problem exist, and a lot of problems concerning moderation quality are, in my opinion, actually self-inflicted by the companies:

I see the central issue as being that companies have deeply inconsistent goals about what they do and don't want on their websites. Also, even when there is some consistency, they commonly don't clearly communicate these boundaries to the users (often for "political" or reputation reasons).

Keeping this in mind, I claim that all of the following strategies can work (though each one will also infuriate at least one specific group of users, whom you will thus indirectly pressure to leave your platform), and each has been used successfully by various platforms:

1. Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even when "one side may be clearly right"). This will, of course, infuriate users who are on the "free speech" side. Also, people who hold the "currently politically accepted" stance on the controversial topic will be angry that they are not allowed to post their "right" opinion on a topic that is a central part of their life.

2. Only allow arguments for one side of some controversial topics ("taking a stance"): this will infuriate people who are in the other camp or on the free speech side. Also, consider that for a lot of highly controversial topics, which side is "right" can change every few years "when the political wind changes direction". The infuriated users likely won't come back.

3. Mostly allow free speech, but strongly moderate comments where people post severe insults. This needs moderators who are highly trusted by the users. Very commonly, moderators are more tolerant towards insults from one side than from the other (or consider comments that are insulting, but within their Overton window, to be acceptable). As a platform, you have to give such moderators clear warnings, or even get rid of them.

While this (if done correctly) will pacify many people who are on the "free speech" side, be aware that option 3 likely leads to a platform with "more heated" and "controversial" discussions, which people who are more on the "sensitive" and "nice" side likely won't like. Also, advertisers are often not fond of an environment with "heated" and "controversial" discussions (even if the users of the platform actually like these).
