I'll be right back - I need to tell my local pub and parks authority that they didn't think through the basic consequences of their actions when they made a place people could meet random strangers. I'm sure they just haven't thought through what a moral failing offering public spaces is.

Edit to respond to OP's edit: The quote you're replying to includes "and prevent it", which made us assume you were directly implying that prevention of bad consequences was your point. Yes, people should clearly think through consequences. OP wasn't implying otherwise, and specifically structured their statement to include "the worst possible thing" and "prevent it".

Does the local pub allow minors? I’m confused
Many of them do around me. Does that matter to the point I was making? I'm not claiming there's literally no rules or laws around public spaces that have to be abided by.
And beyond that, pubs (or, well, bars) in the US can be held accountable if they let someone get _too drunk_.
So car manufacturers need to stop making cars because someone might drive drunk? Kitchen knife manufacturers need to somehow prevent their products being used as weapons? This line of thinking is appropriate to a degree, but not really all that useful. Many products can be abused, and there's no feasible way to prevent that abuse.
Seems to me there is an obvious qualitative difference between a company that manufactures cars or kitchen knives and a company that creates a service that uses random matchmaking to repeatedly introduce a serial pedophile to unsuspecting underage victims.

Sure, people _can_ abuse any service, but this one wasn't just ripe for abuse, it was essentially perfectly designed to enable it. Moreover, in this case there appear to have been numerous feasible ways to head off that particular avenue of abuse.

This is the same ludicrously weak argument that is constantly and erroneously applied to guns… neither a car nor a kitchen knife are primarily (much less exclusively) instruments of harm, guns are. Well, turns out, so are random matchmaking services that link adult users with children and let them view each other, and that appears kind of obvious in hindsight.

Ford might manufacture a car driven by a serial drunk driver. Perhaps they need to install breathalyzers in all their cars, by default.

I'm not really sure how much more ripe for abuse Omegle was compared to, say, Discord. Pretty much any video chat service can be abused to send or receive illegal content, and to abuse and manipulate other people. These are risks inherent to anything enabling communication. Short of a panopticon where all communications are manually approved by a human moderator, there's no sure way to prevent abuse (and even then human moderators are fallible).

There ought to be some reasonable attempts to mitigate abuse, like reporting functionality. But beyond that I don't see much more Omegle could have reasonably done.

Again the pointless and frankly silly comparison to cars: they're categorically unrelated product classes. Ford's cars don't intentionally, as a feature of the vehicle, randomly put you into head-on collision situations with strangers, which is essentially what Omegle did by design. The comparison to a vehicle manufacturer just isn't fruitful in any sense.

Same with the equally pointless comparison to Discord. Omegle wasn't merely a video chat service: it made random matches that the user could narrow by identifying their own interests. An adult male user identifying as deeply interested in things only children would care about could readily, easily, and obviously weaponize the platform. Omegle absolutely could have (and should have) used the many obvious means available for profiling and identifying such incongruous users, which, sure, would include human moderators.

There’s an enormous ethical difference between doing nothing whatsoever to prevent abuse and perfectly preventing abuse, and you seem to think they had no obligation to prevent any because they couldn’t prevent all. They don’t exist any more (thankfully) because lawyers started to (correctly) point out that that isn’t how either ethics or tort law works.

Omegle didn't intentionally put people on a collision course with abusers either. If Omegle was intentionally facilitating abuse as you put it, then so is IRC and effectively any other public communications mechanism: because anyone could be an abuser.

Even just preventing 1% of abuse would probably have been beyond the capabilities of this site. You write that they should flag adult men listing interest in topics associated with children, but how are they supposed to identify the gender and age of users? People under 18 are prohibited from the site, yet that clearly failed. Human moderation can't monitor even a fraction of one percent of sessions. The "many obvious" ways of preventing abuse were in fact attempted [1]:

> Omegle implemented a "monitored" video chat, to monitor misbehavior and protect people under the age of 18 from potentially harmful content, including nudity or sexual content. However, the monitoring is not very effective, and users can often skirt around bans.

Sure, Omegle "randomly put you into head on collision situations with others", but so does every other public communications medium: IRC, Discord, Xbox Live, pretty much anywhere you can meet random people on the Internet fits into this category.

1. https://en.m.wikipedia.org/wiki/Omegle

What ways could've headed off abuse that wouldn't have defeated the entire purpose of the service - matching random people with each other?
Age verifying every user and not allowing children on the service at all, or only matching children with children, as an obvious first. Alternatively, maybe just randomly sampling video feeds and running them through an ML classifier to see if, I dunno, “adult male penis” was a high probability on one side and “extremely uncomfortable looking child” was a high probability on the other?
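For what it's worth, the sampling idea described above is mechanically simple. Here's a toy Python sketch of the pipeline; `classify_frame` is a stand-in stub for a real trained vision model (all names here are invented for illustration), so only the sample-then-flag flow is shown:

```python
import random

FLAG_THRESHOLD = 0.9  # hypothetical confidence cutoff for human review

def sample_frames(frames, k, seed=0):
    """Randomly sample up to k frames from a session's frame list."""
    rng = random.Random(seed)
    return rng.sample(frames, min(k, len(frames)))

def should_flag(frames, classify_frame, k=5):
    """Flag a session for human review if any sampled frame scores above
    the threshold for a prohibited category. classify_frame maps a frame
    to a dict of {category: probability}."""
    for frame in sample_frames(frames, k):
        scores = classify_frame(frame)
        if any(p >= FLAG_THRESHOLD for p in scores.values()):
            return True
    return False

# Usage with a stub classifier that marks frame 3 as high risk.
# With k=10 on 10 frames every frame is sampled, so frame 3 is seen.
stub = lambda f: {"prohibited": 0.95} if f == 3 else {"prohibited": 0.1}
print(should_flag(list(range(10)), stub, k=10))  # True
```

Of course, the hard part isn't this plumbing; it's building a classifier accurate enough that the review queue isn't swamped with false positives.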

In hindsight the entire purpose of the service was a bad idea, absent minimal efforts to avoid its trivial weaponization by users with an obvious motive for using the means and opportunity the service was providing by design.

So one of your solutions to a service aimed at randomly matching anonymous people is to get rid of anonymity?

I asked you how to solve the problem without defeating the purpose of Omegle. Your solution is the equivalent of someone asking how to solve world hunger and you responding with "Just feed them. Duh."

> So car manufacturers need to stop making cars because someone might drive drunk?

In the US, at least, bars can be found liable for patrons drunk driving. That's probably a closer analogy to Omegle than a car manufacturer, since its patrons hang out at the "establishment," engaging in potentially risky behavior. That's not a comment on the validity of the lawsuit, but the situation isn't as simple as you make out.

https://www.washingtoninjurylaw.com/can-bartenders-be-liable...

> The drunk driver can sue the bar or bartender for allowing them to become intoxicated to a dangerous level. Individuals may file a lawsuit, but this does not always mean it will hold up in court. Dram shop laws in most states make clear definitions to avoid false liability. Washington is one of those states.

It looks like it's more the case that the drunk driver tries to sue the bar, not that the government is pursuing the bar. Furthermore, some States specifically forbid attempts to hold the bar liable for drunk drivers.

And even if it were, it's vastly easier for a bar to monitor the conduct of patrons than a web service. The scale of the latter is too great to make non-automated moderation feasible.

By that standard I cannot see how any reasonable person could justify building anything at all; most of us, not being evil bastards, have imaginations which will simply fail to suggest such uses.
I think the people that created smartphones and social media had the best of intentions, but the resulting effects on mental health are profound.

At the same time, it is hard to imagine someone letting go of implementing an idea because of vague negative future effects that are not real in the present. And there is a lot of money incentivizing betting on lots of new ideas to see what takes off.

So it's like, if we uncover the next transformative technology that we know little about the future effects of, we just have to eat the cost of proliferating it everywhere before countermeasures can be figured out, if they can be created at all?

Sometimes I think the ease of virality in software could be a Great Filter. If not something farfetched like human extinction, then the Great Filter of human isolation, or of lasting intergenerational conflict, or something else that's profound but not totally catastrophic. Not only is new tech too tempting to spontaneously put down, but it's nearly impossible to know when to put it down. I think maybe if information overload and the like was hypothesized about like AI is starting to be today, we would still not be able to leave social media uninvented, because nobody had tried it and witnessed it fail yet. But the Great Filter comes in when maybe you can only witness certain failures once.

No one is expecting anyone to be an oracle, to be honest.

> By that standard I cannot see how any reasonable person could justify building anything at all;

This is hyperbolic, don't you think? If I were to write a new, let's say, operating system - why would this thinking block me?

There's a ton of things you can create that really won't pose a moral question. Especially when we have prior art to look at.

Won’t your new operating system have security issues that other operating systems don’t? How do you propose accounting for the additional harm you’re bringing into the world?
It’s nothing that we haven’t dealt with?

Did I ever state that everything we do has to be harm free?

He's not saying don't build anything. He's saying think about the potential for misuse (and implying, take reasonable steps to prevent it). This seems completely sensible to me.
He is saying don’t build anything.

I am aware I cannot think of every bad way a service could be misused. I am also aware that there is always a way to misuse a service. Therefore, in order to prevent a service I build from being misused, I must not build anything.

Obviously that conclusion is wrong, so one of the premises must be wrong. Specifically, liability should be on the abuser not on the service.

And, perhaps, for people to think through the basic implications of their comments.
how far do we take this? would we have motor vehicles if we knew how many people would be killed by/with them every year?
Motor vehicles are a great example. They are HEAVILY regulated and have a gigantic focus on safety.
They aren’t nearly as safe as they could be, and manufacturers have had to be dragged kicking and screaming to implement even basic safety features like seatbelts: some manufacturers even refused on the basis that adding safety features implied their products were unsafe!

Airplanes are probably the example you want - heavily regulated and very focused on safety. Not motor vehicles. After all, if they really were focused on safety, they’d have banned giant trucks and SUVs for personal use a long, long time ago.

I think it holds up, but I get your point. Still, if 18-year-old me designed and built a new car in my parents' garage, I'm not going to be able to send it out into the world and make millions of copies of it for everyone in the world who wants one.
That's a pointed question as the infrastructure to support individual cars continues to eat the cities in the US and the climate continues to change from the CO2 released into the air, actually.

Had we known what the challenges would be, would we have made them so easy to own and operate? You need a license to fly a plane solo, and it takes years to get one.

And removing the option for unsupervised outdoor play for kids.
I'm not an evil bastard. I have no idea what evil bastards will want to do with anything I create. I literally don't think like this and can't predict their behaviour.

Why is it my responsibility to do this?

Won’t these pedophiles just move to Roblox or start trading kids’ phone numbers or something? Cut off one head and three more emerge. Can’t the answer to people breaking the laws be law enforcement?
They will.

... and Roblox has a financial vested interest in finding them, outing them to authorities, and keeping them off the site, so it'll be worth it to them to do so and to keep the site humming.

Omegle's owner decided to shut it down because it wasn't worth it to them to do that work. That's all. Price not worth paying.

No, they were finding them, outing them, and reporting them, as he says in the post. They were doing that work.

They're being sued by a victim for not preventing the victim from being manipulated by an abuser. The fact that the abuser was caught and convicted with Omegle's help is apparently irrelevant.

It's hard to see how Omegle could have done this differently; no crime occurred until the victim was abused, and once the abuse happened any subsequent action is irrelevant because the victim was still abused.

Roblox will have more ability to hire lawyers to win the victim's court case. That's the only difference.

Fending off legal challenges is also part of the work.

If it's not worth it to him it's not worth it. I too remember the feeling of "this startup is going to kill me and I'm only 30."

> Won’t these pedophiles just move to Roblox

Many there already, unfortunate to report.

The era of tech people thinking of themselves as Prometheus bringing fire, without a care as to how the mortals use it, is over.

... or rather, perhaps the era of being chained to a rock for a bird to peck at our livers has begun.

