Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?

> Supposing that the advice it provides does more good than harm

That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.

Let me give a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made-up positive outcome, we make it sound non-threatening and reasonable.

Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.

First understand whether the idea even has the potential to do the thing; only then (with considerably more context) consider whether it’s worth implementing.

In my previous post up the thread I said that we should measure whether in fact it does more good than harm or not. That's the context of my comment; I'm not saying we should just take it for granted without looking.

> we should measure whether in fact it does more good than harm or not

The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will with certainty result in further deaths.

Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused/assisted with” would mean comparing incommensurable quantities.

And working to set a threshold for what we would consider acceptable? No thanks

Real-life trolley problem!

If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.

What do you choose to do?

....but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.

There is, at this time, no way to determine how the number of suicides it would contribute to compares with the number it would prevent.

You mean lab test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, or where we are at the moment.

Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.

Unreasonable. Unacceptable.

The key difference between your example and the comment you are replying to is that the commenter is not "defending the decision" via a logical implication. Obviously the implication can be voided by showing the assumption false.

I think you missed the thread here.

> Supposing that the advice it provides does more good than harm, why?

Because a human, especially a confused and depressed human being, is a complex thing. Much more complex than a stable, healthy human.

Words that encourage a healthy person can break a depressed person further. Statistically positive words can deepen wounds and push people closer to the edge.

The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.

Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.

You could make the same arguments to say that humans should never talk to suicidal people. And that really sounds counterproductive.

Also, it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really: "if LLMs lead some people to suicide but save a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people?"

That sounds like a pretty risky and irresponsible sort of study to conduct. It would also likely be extremely complicated to actually get a reliable result, given that people with suicidal ideation are not monolithic. You'd need to do a significant amount of human counselling with each study participant to be able to classify and control all of the variations - at which point you would be verging on professional negligence for not then actually treating them in those counselling sessions.

I agree with your concerns, but I think you're overestimating the value of a human intervening in these scenarios.

A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.

As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as if we would on a human, but the technology can be used for good or bad.

If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.

There are really good psychologists out there who can do much more. It's a little luck and a little good fit, but it can happen.

>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]

This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that result from their architecture and are bound to happen.
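
For what it's worth, the usual explanation for the letter-counting failure is tokenization: the model sees subword chunks, not individual characters. A minimal sketch of that, assuming the tiktoken library and one of its standard encodings (purely illustrative, not a claim about any specific model):

    import tiktoken

    # Look at how a word is actually presented to the model: as opaque
    # subword tokens rather than a sequence of letters it could count.
    enc = tiktoken.get_encoding("cl100k_base")
    token_ids = enc.encode("strawberry")
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
    print(pieces)  # a few multi-character chunks; "how many r's?" must be
                   # inferred, not read off character by character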

I should add that the people responding to calls on suicide helplines are often just volunteers rather than psychologists.

Of the people I have known who called the helplines, the results were either dismally useless or the callers were arrested, involuntarily committed, subjected to inhumane conditions, and then hit with massive medical bills. Of those, some got “help” and some still killed themselves anyway.

And they know not to give advice like ChatGPT gave. They wouldn't even entertain that kind of discussion.
> The best they can do is raise a flag

Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.

That's a fair bit more valuable than your framing of it as merely raising a flag.

Yeah... I have been in a locked psychiatric ward many times before, and never in my life did I come out better. They only address the physical part there for a few days and kick you out until next time. Or do you think people should be physically restrained for a long time without any actual help?

> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.

ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their lives. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.

But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?

A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.

To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.

> A child thinking about suicide is clearly a sign that there are far greater problems in their life

TBH kids tend to be edgy for a bit when puberty hits. The emo generation had a ton of girls cutting themselves for attention for example.

> But that's not the issue.

It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.

"Deeper societal problems" is a typical get-out clause for all harmful technology.

It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.

It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Why not both? Deeper societal problems are not represented by a neat dividing line between cause and symptom; they are cyclical.

The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.

ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.

Just saying "but humans also" is wholly irrational in this context.

This is not so much "more good than harm", like a counsellor who isn't very good.

This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".

If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.

First, do no harm.

That relates more to purposefully harming some people to save other people. Doing something that has the potential to harm a person but statistically has a greater likelihood of helping them is something doctors do all the time. They will even use methods that are guaranteed to do harm to the patient, as long as they have a sufficient chance to also bring a major benefit to the same patient.

An example being surgery: you cut into the patient to remove the tumor.

The Hippocratic oath originated from Hippocratic medicine forbidding surgery, which is why surgeons are still not referred to as "doctor" today.

Do no harm, or no intentional harm?

When evaluating good vs. harm for drugs or other treatments, the risk of lethal side effects must be very small for the treatment to be approved. In this case it is also difficult to get reliable data on how much good and harm is done.
