That unsubstantiated supposition is doing a lot of heavy lifting, and that's a dangerous and unproductive way to frame the argument.
I'll give a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with "supposing it helps students concentrate and be quieter in the classroom, why not?". See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as "supposing" with a made-up positive outcome, we make it sound non-threatening and reasonable.
Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.
First understand whether the idea has the potential to do the thing; only then (with considerably more context) consider whether it's worth implementing.
The demonstrable harms include assisting suicide. There is no way to ethically continue the measurement, because continuing the measurements in their current form will, with certainty, result in further deaths.
And working to set a threshold for what we would consider acceptable? No thanks.
If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.
What do you choose to do?
There is, at this time, no way to determine how the number of suicides it would contribute to compares to the number it would prevent.
Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.
Unreasonable. Unacceptable.
Because a human, especially a confused and depressed human being, is a complex thing. Much more complex than a stable, healthy human.
Words that would encourage a healthy person can break a depressed person further. Statistically positive words can deepen wounds and push people closer to the edge.
The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.
Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.
Also, it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really "if LLMs lead some people to suicide but save a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people?"
A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as we would on a human, but the technology can be used for good or bad.
If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.
>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]
This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that result from its architecture and are bound to happen.
Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.
That's a fair bit more valuable than "raising a flag" makes it sound.
ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their life. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.
A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.
To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.
TBH kids tend to be edgy for a bit when puberty hits. The emo generation had a ton of girls cutting themselves for attention, for example.
It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.
"Deeper societal problems" is a typical get-out clause for all harmful technology.
It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.
It's not an either-we-ban-guns-or-we-help-mentally-ill-people choice. Why not both? Deeper societal problems are not represented by a neat dividing line between cause and symptom; they are cyclical.
The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.
ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
Just saying "but humans also" is wholly irrational in this context.
This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".
If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.