
A word generator with no intelligence or understanding based on the contents of the internet should not be allowed near suicidal teens, nor should it attempt to offer advice of any kind.

This is basic common sense.

Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.


Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?
> Supposing that the advice it provides does more good than harm

That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.

I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made up positive outcome, we make it sound non-threatening and reasonable.

Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.

First understand whether the idea has the potential to do the thing; only then (with considerably more context) consider whether it's worth implementing.

In my previous post up the thread I said that we should measure whether in fact it does more good than harm or not. That's the context of my comment, I'm not saying we should just take it for granted without looking.
> we should measure whether in fact it does more good than harm or not

The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will with certainty result in further deaths.

Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused/assisted with” would be incommensurable.

And working to set a threshold for what we would consider acceptable? No thanks

You mean lab test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, or where we are atm.

Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.

Unreasonable. Unacceptable.

The key difference between your example and the comment you are replying to is that the commenter is not "defending the decision" via a logical implication. Obviously the implication can be voided by showing the assumption to be false.
I think you missed the thread here
> Supposing that the advice it provides does more good than harm, why?

Because a human, especially a confused and depressed human being, is a complex thing. Much more complex than a stable, healthy human.

Words encouraging a healthy person can break a depressed person further. Statistically positive words can deepen wounds, and push people more to the edge.

The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.

Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.

You could make the same arguments to say that humans should never talk to suicidal people. And that really sounds counterproductive

Also it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really "if LLMs lead some people to suicide but saved a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people"

That sounds like a pretty risky and irresponsible sort of study to conduct. It would also likely be extremely complicated to actually get a reliable result, given that people with suicidal ideations are not monolithic. You'd need to do a significant amount of human counselling with each study participant to be able to classify and control all of the variations - at which point you would be verging on professional negligence for not then actually treating them in those counselling sessions.
I agree with your concerns, but I think you're overestimating the value of a human intervening in these scenarios.

A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.

As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as if we would on a human, but the technology can be used for good or bad.

If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.

There are really good psychologists out there who can do much more. It's a little luck and a little good fit, but it can happen.

>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]

This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that result from their architecture and are bound to happen.

I should add that the persons responding to calls on suicide help lines are often just volunteers rather than psychologists.
Of the people I have known to call the helplines, the results have been either dismally useless, or those people were arrested, involuntarily committed, subjected to inhumane conditions, and then hit with massive medical bills. Of those, some got “help” and some still killed themselves anyway.
And they know not to give advice like ChatGPT gave. They wouldn't even be entertaining that kind of discussion.
> The best they can do is raise a flag

Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.

That's a fair bit more valuable than you make it sound when you describe it as just raising a flag.

Yeah... I have been in a locked psychiatric ward many times before, and never in my life have I come out better. They only address the physical part there for a few days and kick you out until next time. Or do you think people should be physically restrained for a long time without any actual help?
> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.

ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their life. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.

But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?

A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.

To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.

This is not so much "more good than harm", like a counsellor who isn't very good.

This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".

If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.

First, do no harm.
That relates more to purposefully harming some people to save other people. Doing something that has the potential to harm a person but statistically has a greater likelihood of helping them is something doctors do all the time. They will even use methods that are guaranteed to do harm to the patient, as long as they have a sufficient chance to also bring a major benefit to the same patient.
An example being: surgery. You cut into the patient to remove the tumor.
The Hippocratic oath originated from Hippocratic medicine forbidding surgery, which is why surgeons are still not referred to as "doctor" today.
Do no harm or no intentional harm?
When evaluating good vs harm for drugs or other treatments, the risk of lethal side effects must be very small for the treatment to be approved. In this case it is also difficult to get reliable data on how much good and harm is done.
Let's look at the problem from the perspective of regular people. YMMV, but in the countries I know most about, Poland and Norway (albeit a little less so for Norway), it's not about ChatGPT vs Therapist. It's about ChatGPT vs nothing.

I know people who earn above-average income and still spend a significant portion (north of 20%) of their income on therapy/meds. And many don't, because mental health isn't that important to them. Or rather - they're not aware of how helpful it can be to attend therapy. Or they just can't afford the luxury (that I claim it is) of private mental health treatment.

ADHD diagnosis took 2.5y from start to getting meds, in Norway.

Many kids grow up before their wait time in the queue for a pediatric psychologist is over.

It's not ChatGPT vs shrink. It's ChatGPT vs nothing or your uncle who tells you depression and ADHD are made up and you kids these days have it all too easy.

As someone who lives in America, and is prescribed meds for ADHD; 2.5 years from asking for help to receiving medication _feels_ right to me in this case. The medications have a pretty negative side effect profile in my experience, and so all options should be weighed before prescribing ADHD-specific medication, imo
you know ChatGPT can't prescribe Adderall right?
Yet, if you ask the word generator to generate words in the form of advice, like any machine or code, it will do exactly what you tell it to do. The fact that people are asking implies a lack of common sense, by your definition.

Sertraline can increase suicidal thoughts in teens. Should anti-depressants not be allowed near suicidal/depressed teens?

> Should anti-depressants not be allowed near suicidal/depressed teens?

Well certainly not without careful monitoring and medical advice, no of course not!

I'll gladly diss LLMs in a whole bunch of ways, but "common sense"? No.

By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.

Not that this makes the "common sense" definitions useful for all questions. One of the worse things about LLMs, in my opinion, is that they're mostly a pile of "common sense".

Now this part:

> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.

I agree with you on…

…with the exception of one single word: it's quite cliquish to put scare quotes around the "Open" part in a discussion about them publishing research.

More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.

While I agree with most of your comment, I'd like to dispute the story about GPT-2.

Yes, they did claim that they wouldn't release GPT-2 due to unforeseen risks, but...

a. they did end up releasing it,

b. they explicitly stated that they wouldn't release GPT-3[1] for marketing/financial reasons, and

c. it being dangerous didn't stop them from offering the service for a profit.

I think the quotes around "open" are well deserved.

[1] Edit: it was GPT-4, not GPT-3.

> they did end up releasing it,

After studying it extensively with real-world feedback. From everything I've seen, the statement wasn't "will never release", it was vaguer than that.

> they explicitly stated that they wouldn't release GPT-3 for marketing/financial reasons

Not seen this, can you give a link?

> it being dangerous didn't stop them from offering the service for a profit.

Please do be cynical about how honest they were being — I mean, look at the whole of Big Tech right now — but the story they gave was self-consistent:

[Paraphrased!] (a) "We do research" (they do), "This research costs a lot of money" (it does), and (b) "As software devs, we all know what 'agile' is and how that keeps product aligned with stakeholder interest." (they do) "And the world is our stakeholder, so we need to release updates for the world to give us feedback." (???)

That last bit may be wishful thinking, I don't want to give the false impression that I think they can do no wrong (I've been let down by such optimism a few other times), but it is my impression of what they were claiming.

> Not seen this, can you give a link?

I was confusing GPT-3 with GPT-4. Here's the quote from the paper (emphasis mine) [1]:

> Given both THE COMPETITIVE LANDSCAPE and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

[1] https://cdn.openai.com/papers/gpt-4.pdf

Thanks, 4 is much less surprising than 3.
Bleach should also not be allowed near suicidal teens.

But how do you tell before it matters?

Plastic bags shouldn't be allowed near suicidal teens. Scarves shouldn't be. Underwear is also a strangulation hazard for the truly desperate. Anything long sleeved even. Knives of any kind, including butter. Cars, obviously.

Bleach is the least of your problems.

We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.

One problem with treatment modalities is that they ignore material conditions and treat everything as dysfunction. Lots of people are looking for a way out not because of some kind of physiological clinical depression, but because they've driven themselves into a social & economic dead-end and they don't see how they can improve. More suicidal people than not, would cease to be suicidal if you handed them $180,000 in concentrated cash, and a pardon for their crimes, and a cute neighbor complimenting them, which successfully neutralizes a majority of socioeconomic problems.

We deal with suicidal ideation in some brutal ways, ignoring the material consequences. I can't recommend suicide hotlines, for example, because it's come out that a lot of them, concerned with liability, call the cops, who come in and bust the door down, pistol whip the patient, and send them to jail, where they spend 72 hours and have some charges tacked on for resisting arrest (at this point they lose their job). Why not just drone strike them?

What is "concentrated cash"? Do you have to dilute it down to standard issue bills before spending it? Someone hands you 5 lbs of gold, and have to barter with people to use it?

"He didn't need the money. He wasn't sure he didn't need the gold." (an Isaac Asimov short story)

> More suicidal people than not, would cease to be suicidal if ...

I'm going to need to see a citation on this one.

> We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.

It appears to be the only way.

The one dude that used the money to build a self-murder machine and then televised it would ruin it for everyone though. :s

The reality is most systems are designed to cover asses more than meet needs, because systems get abused a lot - by many different definitions, including being used as scapegoats by bad actors.

Yeah, if we know they’re suicidal, it’s legitimately grippy socks time I guess?

But there is zero actually effective way to do that as an online platform. And plenty of ways that would cause more harm (statistically).

My comment was more ‘how the hell would you know in a way anyone could actually do anything reasonable, anyway?’.

People spam ‘Reddit cares’ as a harassment technique, claiming people are suicidal all the time. How much should the LLM try to guess? If they use a lot of ‘depressed’ words? What does that even mean?

What happens if someone reports a user is suicidal, and we don’t do anything? Are we now on the hook if they succeed - or fail and sue us?

Do we just make a button that says ‘I’m intending to self harm’ that locks them out of the system?

Why are we imprisoning suicidal people? That will surely add incentive to have someone raise their hand and ask for help: taking their freedoms away...
Why do we put people in a controlled environment where their available actions are heavily restricted and anything they could use to hurt themselves is taken away? When they are a known risk of hurting themselves or others?

What else do you propose?

> A word generator with no intelligence or understanding

I will take this seriously when you propose a test that can distinguish between that and something with actual "intelligence or understanding"

Sure, ask it to write an interesting novel or a symphony, and present it to humans without editing. The majority of literate humans will easily tell the difference between that and human output. And it’s not allowed to be too derivative.

When AI gets there (and I’m confident it will, though not confident LLMs will), I think that’s convincing evidence of intelligence and creativity.

I accept that test other than the "too derivative" part, which is an avenue for subjective bias. AI has passed that test for art already: https://www.astralcodexten.com/p/ai-art-turing-test As for a novel, that is currently beyond LLMs' capabilities due to context windows, but I wouldn't be surprised if they could do short stories that pass this Turing test right now.
For art I'd want to see a consistent body of work with meaning from an AI, not heavily derivative copies of other styles and subject matter. But I'd agree AI art is further along, perhaps just because it is easier for us to fill in the gaps and attribute meaning where there is none.
> with no intelligence

Damn I thought we'd got over that stochastic parrot nonsense finally...

Replace 'word generator with no intelligence or understanding based on the contents of the internet' with 'for-profit health care system'.

In retrospect, from experience, I'd take the LLM.

A 'not-for-profit healthcare system' surely has to be a better goal/solution than an LLM.
Lemme get right on vibecoding that! Maybe three days, max, before I'll have an MVP. When can I expect your cheque funding my non-profit? It'll have a quadrillion dollar valuation by the end of the month, and you'll want to get in on the ground floor, so better act fast!
Seems like an argument against allowing the profit motive into important life-changing decisions, like whether or not you'd commit suicide, or your medical treatment.
