Edit to respond to OP's edit: the quote you're replying to includes "And prevent it", which made us assume you were directly implying that prevention of bad consequences was your point. Yes, people should clearly think through consequences; OP wasn't implying otherwise, and specifically structured their statement to include "the worst possible thing" and "prevent it".
Sure, people _can_ abuse any service, but this seems like a service that wasn’t just ripe for abuse; it was essentially perfectly designed to enable it. Moreover, in this case there absolutely appear to have been numerous feasible ways to head off that particular avenue of abuse.
This is the same ludicrously weak argument that is constantly and erroneously applied to guns… neither a car nor a kitchen knife is primarily (much less exclusively) an instrument of harm; guns are. Well, it turns out, so is a random matchmaking service that links adult users with children and lets them view each other, and that seems rather obvious in hindsight.
I'm not really sure how much more ripe for abuse Omegle was compared to, say, Discord. Pretty much any video chat service can be abused to send or receive illegal content, and to abuse and manipulate other people. These are risks inherent to anything enabling communication. Short of a panopticon where all communications are manually approved by a human moderator, there's no sure way to prevent abuse (and even then human moderators are fallible).
There ought to be some reasonable attempts to mitigate abuse, like reporting functionality. But beyond that I don't see much more Omegle could reasonably have done.
Same with the equally pointless comparison to Discord… Omegle wasn’t merely a video chat service: it made random matches that users could narrow by identifying their own interests. An adult male user identifying as deeply interested in things only children care about could readily, easily, and obviously weaponize the platform, and Omegle absolutely could have (and should have) used the many obvious means available for profiling and identifying such incongruous users, which (sure) would include human moderators.
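To be clear, I have no idea what Omegle's backend looked like, but even a crude version of the incongruity check I'm describing is a few lines of code. A minimal sketch, assuming hypothetical self-reported ages and interest tags (the tag list, threshold, and function are all invented for illustration, not anything Omegle actually ran):

```python
# Hypothetical sketch only: flag self-reported adults whose declared
# interests skew heavily toward child-oriented topics, for human review.
CHILD_ORIENTED_TAGS = {"minecraft", "roblox", "pokemon cards", "fortnite"}

def flag_for_review(age: int, interests: set[str], threshold: float = 0.5) -> bool:
    """Return True if the profile looks incongruous enough to queue for a moderator."""
    if age < 18 or not interests:
        return False  # minors and empty profiles would be handled elsewhere
    # Fraction of the user's declared interests that are child-oriented.
    overlap = len(interests & CHILD_ORIENTED_TAGS) / len(interests)
    return overlap >= threshold

# flag_for_review(34, {"minecraft", "roblox"}) -> True: queue for human review
```

None of this is foolproof (users lie about their age, tags vary endlessly), but the point is that a first-pass filter feeding human moderators was cheap and obvious, not impossible.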
There’s an enormous ethical difference between doing nothing whatsoever to prevent abuse and perfectly preventing abuse, and you seem to think they had no obligation to prevent any because they couldn’t prevent all… they don’t exist anymore (thankfully) because lawyers started to (correctly) point out that that isn’t how either ethics or tort law works.
Even just preventing 1% of abuse would probably have been beyond the capabilities of this site. You write that they should flag adult men listing interest in topics associated with children. How are they supposed to identify the gender and age of users? People under 18 are prohibited from the site, yet that restriction clearly failed. Human moderation can't monitor even a fraction of one percent of the traffic. The "many obvious" ways of preventing abuse were in fact attempted [1]:
> Omegle implemented a "monitored" video chat, to monitor misbehavior and protect people under the age of 18 from potentially harmful content, including nudity or sexual content. However, the monitoring is not very effective, and users can often skirt around bans.
Sure, Omegle "randomly put you into head on collision situations with others", but so does every other public communications platform: IRC, Discord, Xbox Live; pretty much anywhere you can meet random people on the Internet fits into this category.
In hindsight, the entire purpose of the service was a bad idea absent minimal efforts to prevent its trivial weaponization by users with an obvious motive to exploit the means and opportunity the service provided by design.
I asked you how to solve the problem without defeating the purpose of Omegle. Your solution is the equivalent of someone asking how to solve world hunger and you responding with "Just feed them. Duh."
In the US, at least, bars can be found liable for patrons drunk driving. That's probably a closer analogy to Omegle than a car manufacturer, since its patrons hang out at the "establishment," engaging in potentially risky behavior. That's not a comment on the validity of the lawsuit, but the situation isn't as simple as you make out.
> The drunk driver can sue the bar or bartender for allowing them to become intoxicated to a dangerous level. Individuals may file a lawsuit, but this does not always mean it will hold up in court. Dram shop laws in most states make clear definitions to avoid false liability. Washington is one of those states.
It looks like it's more that the drunk driver tries to sue the bar, not that the government pursues the bar. Furthermore, some states specifically forbid attempts to hold the bar liable for drunk drivers.
And even if it were, it's vastly easier for a bar to monitor the conduct of its patrons than for a web service to monitor its users. The scale of the latter is too great for non-automated moderation to be feasible.
At the same time, it is hard to imagine someone abandoning an idea because of vague negative future effects that aren't yet real in the present. And there is a lot of money incentivizing bets on lots of new ideas to see what takes off.
So it's like, if we uncover the next transformative technology that we know little about the future effects of, we just have to eat the cost of proliferating it everywhere before countermeasures can be figured out, if they can be created at all?
Sometimes I think the ease of virality in software could be a Great Filter. If not something far-fetched like human extinction, then the Great Filter of human isolation, or of lasting intergenerational conflict, or something else that's profound but not totally catastrophic. Not only is new tech too tempting to spontaneously put down, it's nearly impossible to know when to put it down. I think even if information overload and the like had been hypothesized about the way AI is starting to be today, we would still not have been able to leave social media uninvented, because nobody had tried it and witnessed it fail yet. But the Great Filter comes in when certain failures can perhaps only be witnessed once.
> By that standard I cannot see how any reasonable person could justify building anything at all;
This is hyperbolic, don't you think? If I were to write a new, let's say, operating system, why would this thinking block me?
There's a ton of things you can create that really won't pose a moral question, especially when we have prior art to look at.
I am aware I cannot think of every bad way a service could be misused. I am also aware that there is always a way to misuse a service. Therefore, in order to prevent a service I build from being misused, I must not build anything.
Obviously that conclusion is wrong, so one of the premises must be wrong. Specifically, liability should be on the abuser, not on the service.
Airplanes are probably the example you want: heavily regulated and very focused on safety. Not motor vehicles. After all, if motor vehicle regulators really were focused on safety, they’d have banned giant trucks and SUVs for personal use a long, long time ago.
Had we known what the challenges would be, would we have made them so easy to own and operate? You need a license to fly a plane solo, and it takes years to get one.
Why is it my responsibility to do this?
... and Roblox has a vested financial interest in finding them, reporting them to the authorities, and keeping them off the site, so it'll be worth it to them to do so and to keep the site humming.
Omegle's owner decided to shut it down because it wasn't worth it to them to do that work. That's all. Price not worth paying.
They're being sued by a victim for not preventing the victim from being manipulated by an abuser. The fact that the abuser was caught and convicted with Omegle's help is apparently irrelevant.
It's hard to see how Omegle could have done this differently; no crime occurred until the victim was abused, and once the abuse happened, any subsequent action was irrelevant because the victim had already been abused.
Roblox will have more ability to hire lawyers to prevail in a victim's court case. That's the only difference.
If it's not worth it to him, it's not worth it. I too remember the feeling of "this startup is going to kill me and I'm only 30."
Many are there already, sad to report.
... or rather, perhaps the era of being chained to a rock for a bird to peck at our livers has begun.