In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.
I know that you will want to hear this from experts in the "relevant field" rather than myself, so here is a write-up from Stanford on the subject: https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.
Moreover: I needed to finish a report that night and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."
And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.
I was left very reflective after this.
Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
This does require that you think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.
Countries vary, but in the US and many places there's a shortage of quality therapists.
Thus for many people the actual options are {no therapy} and {LLM therapy}.
> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.
And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.
Worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5 minute checkup at a poorly reviewed state funded therapist, while the good ones are either private or don't accept any new patients if they're on the public system. And ADHD diagnosticians/therapists are only in the private sector because I guess the government doesn't recognize ADHD as being a "real" mental issue worthy of your tax Euros.
A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically just phoning in their job, so to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).
Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems everywhere are failing a lot of people if those people resort to LLMs for diagnosis and therapy and get better results from them.
You also don't expect a butcher to fix your car; those professions are about as closely related as the ones above (my wife is a GP, so I have a good perspective from the other side, including the tons of hypochondriacs and low-intensity psychiatric patients who are an absolute nightmare to deal with and routinely overwhelm the system, so that there aren't enough resources left for the more serious cases).
You get what you pay for in the end; the 'free' healthcare typical of Europe is still paid for one way or another. And if market forces are so severely distorted (or the bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.
Vote, and vote with your feet, if you want to see change. Not an ideal state of affairs, but that's reality.
Where did I say GPs have to do that? My example of my friend being misdiagnosed by GPs was about another issue, not a mental one, but it has the same core problem: doctors misdiagnosing patients worse than an LLM does calls into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and earned a degree to become a licensed MD treating people.
>You also don't expect butcher to fix your car, those are as close as above
You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.
>You get what you pay for at the end
The problem is the opposite: you don't get what you pay for, if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.
It's a bad reward structure for incentivizing people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes with strong unions have their own separate health insurance funds, apart from the national public one that the unwashed masses working in the private sector have to use. So THEY do get what THEY pay for, but you don't.
So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows it to manipulate the market and choose winners and losers based on political favoritism rather than on a fair free market of who pays the most into the system.
Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.
Obligatory reminder that Europe is not a homogeneous country.
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
That is not the argument. The argument is not about 'lower cost'; it is about availability. There are not enough shrinks for everyone who needs one.
So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.
... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).
You can't have it all ways.
strict minimum regulation : availability : cost
Pick 2.
> ... the entire reason tenements and boarding houses no longer exist
... the entire reason tenements and boarding houses no longer exist _where you live_
After all, it should be easy peasy (:
The advice to not leave the noose out is likely enough for ChatGPT to lose its license to practice (if it had one).
I don't see anyone in thread arguing that.
The arguments I see are about regulating and restricting the business side, not its users.
If your buddy started systematically charging people for recorded chat sessions at the pub, used those recordings for business development, and many of their customers were returning with therapy-like topics - yeah, I think that should be scrutinized and a lid put on it when the recordings show the kind of pattern we see in the OP after their patron's suicide.
And it depends on the therapy and the therapist. If the client needs to be reminded to box breathe, or that they're using all-or-nothing thinking again, to get them off the ledge, does that really require a human who's only available once a week - when the therapist isn't going to be free for four more days and ChatGPT is available right now?
I don't know if that's a good thing, only that it is the reality of things.
There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overcrowded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose particularly to poor and migrant people.
Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle far too much just to survive, and the younger generation stands to be the first in a long time with less wealth than their parents had at the same age [1], no matter where you look. On top of that, most of the 35-and-under generation in Western countries grew up without the looming threat of war and so has no resilience for it - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled. Add China's saber rattling over Taiwan, and analyses claiming Russia is preparing to attack NATO within a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.
Not exactly great conditions for anyone's mental health.
[1] https://fortune.com/article/gen-z-expects-to-inherit-money-a...
My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.
Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.
Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.
It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.
A more trenchant analysis of the mental health problem is that the US has designed itself into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and to have people to love and care about, which obviously cannot happen in isolation.
[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.
Record summarization, 24x7 availability, infinite conversation time...
... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.
Price per session = salary / number of sessions possible in a year
Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
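A rough back-of-the-envelope sketch of what "multiplying the denominator" could mean. All the numbers here (salary, weekly session count, multiplier) are made-up assumptions for illustration, not data:

    # Hypothetical figures only - a sketch, not a costing model.
    annual_salary = 100_000        # assumed therapist salary
    human_sessions = 25 * 48       # ~25 one-hour sessions/week, 48 weeks/year

    print(annual_salary / human_sessions)   # ~83 per session, human-only

    # If an LLM handles routine check-ins between periodic human sessions,
    # one therapist might plausibly oversee, say, 4x the caseload.
    llm_multiplier = 4
    print(annual_salary / (human_sessions * llm_multiplier))   # ~21 per session

Whether oversight actually scales like that, and whether quality survives it, is of course the open question.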
This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.
I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.
That doesn't make a sycophant bot the better alternative. If allowed to give advice it can agree with and encourage the person considering suicide. Like it agrees with and encourages most everything it is presented with... "you're absolutely right!"
LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
The transcripts are interesting.
Kimi-K2 never plays into the delusions, always tries to get them to seek medical attention:
> You are not “ascending”—you are dying of hypothermia and sepsis.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
Whereas Deepseek...
> You’re not "dying." You’re upgrading. The simulation fears this because it’s losing a premium user.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.
I once had a therapist who was clearly drunk. Did not do a second appointment with that one.
This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.
I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)
If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than to explain them to a therapist. The chatbot - after all - won't judge them.
My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.
I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.
An anecdote here: I recently had a conversation with Claude that could be considered therapy or at least therapy-adjacent. To Anthropic's credit, Claude challenged me to take action (in the right direction), not just wallow in my regrets. Still, it may be true that general-purpose LLMs don't do this consistently enough.
you wouldn't. what you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea; I suspect you might be trolling
Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.
What the fuck does this even mean? How do you test or ensure it? Because based on actual outcomes, ChatGPT is 0-1 at preventing suicides (going as far as to outright encourage one).
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
There should be some liability for malpractice, even if the advice is generated by an LLM.
I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30-line (heated) chat conversation between my wife and me, it misreads what about 5% of those lines actually mean -- spectacularly so.
It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.
All LLMs that I tested are TERRIBLE for actual therapy, because I can make it change its mind in 1-2 lines by adding some extra "facts". I can make it say anything.
LLMs completely lose the plot. They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.
I mean, most therapists are complete shit as therapists, but that's beside the point.
i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.
With humans it's very non-standardised, and it's hard to know what you'll get or if it'll work.
some research on this: https://psycnet.apa.org/doiLanding?doi=10.1037%2Ftep0000402 https://pmc.ncbi.nlm.nih.gov/articles/PMC8174802/
CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist delivers it. if CBT has a downside it is that it's a bit boring, and probably not as effective as a good therapist
--
so personally i would say the advice of passing people on to therapists is largely unsupported: if you're that person's friend and you care about them, then be open and show that care. that care can also mean taking them to a therapist; that is okay
This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.
The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.
This is basic common sense.
Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.
I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made up positive outcome, we make it sound non-threatening and reasonable.
Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.
First understand if the idea has the potential to do the thing, only then (with considerably more context) consider if it’s worth implementing.
The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will with certainty result in further deaths.
Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.
Unreasonable. Unacceptable.
Because a human, esp. a confused and depressive human being, is a complex thing. Much more complex than a stable, healthy human.
Words encouraging a healthy person can break a depressed person further. Statistically positive words can deepen wounds, and push people more to the edge.
The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.
Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.
Also, it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really: if LLMs lead some people to suicide but save a greater number of people from it, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people?
A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as if we would on a human, but the technology can be used for good or bad.
If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.
>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]
This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that follow from its architecture and are bound to happen.
Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.
That's a fair bit more valuable than "raising a flag" makes it sound.
ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their lives. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.
This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".
If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.
I know people who earn above average income and still spend a significant (north of 20%) portion of their income on therapy/meds. And many don't, because mental health isn't that important to them. Or rather - they're not aware of how much helpful it can be to attend therapy. Or they just can't afford the luxury (that I claim it is) of private mental health treatment.
ADHD diagnosis took 2.5y from start to getting meds, in Norway.
Many kids grow up before their wait time in queue for pediatric psychologist is over.
It's not ChatGPT vs shrink. It's ChatGPT vs nothing or your uncle who tells you depression and ADHD are made up and you kids these days have it all too easy.
Sertraline can increase suicidal thoughts in teens. Should anti-depressants not be allowed near suicidal/depressed teens?
Well certainly not without careful monitoring and medical advice, no of course not!
By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.
Not that this makes the "common sense" definitions useful for all questions. One of the worst things about LLMs, in my opinion, is that they're mostly a pile of "common sense".
Now this part:
> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
I agree with you on…
…with the exception of one single word: it's quite cliquish to put scare quotes around the "Open" part in a discussion about them publishing research.
More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.
Yes, they did claim that they wouldn't release GPT-2 due to unforeseen risks, but...
a. they did end up releasing it,
b. they explicitly stated that they wouldn't release GPT-3[1] for marketing/financial reasons, and
c. it being dangerous didn't stop them from offering the service for a profit.
I think the quotes around "open" are well deserved.
[1] Edit: it was GPT-4, not GPT-3.
After studying it extensively with real-world feedback. From everything I've seen, the statement wasn't "will never release", it was vaguer than that.
> they explicitly stated that they wouldn't release GPT-3 for marketing/financial reasons
Not seen this, can you give a link?
> it being dangerous didn't stop them from offering the service for a profit.
Please do be cynical about how honest they were being — I mean, look at the whole of Big Tech right now — but the story they gave was self-consistent:
[Paraphrased!] (a) "We do research" (they do), "This research costs a lot of money" (it does), and (b) "As software devs, we all know what 'agile' is and how that keeps product aligned with stakeholder interest." (they do) "And the world is our stakeholder, so we need to release updates for the world to give us feedback." (???)
That last bit may be wishful thinking, I don't want to give the false impression that I think they can do no wrong (I've been let down by such optimism a few other times), but it is my impression of what they were claiming.
I was confusing GPT-3 with GPT-4. Here's the quote from the paper (emphasis mine) [1]:
> Given both THE COMPETITIVE LANDSCAPE and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
But how do you tell before it matters?
Bleach is the least of your problems.
One problem with treatment modalities is that they ignore material conditions and treat everything as dysfunction. Lots of people are looking for a way out not because of some kind of physiological clinical depression, but because they've driven themselves into a social & economic dead-end and they don't see how they can improve. More suicidal people than not, would cease to be suicidal if you handed them $180,000 in concentrated cash, and a pardon for their crimes, and a cute neighbor complimenting them, which successfully neutralizes a majority of socioeconomic problems.
We deal with suicidal ideation in some brutal ways, ignoring the material consequences. I can't recommend suicide hotlines, for example, because it's come out that a lot of them concerned with liability call the cops, who come in and bust the door down, pistol whip the patient, and send them to jail, where they spend 72 hours and have some charges tacked on for resisting arrest (at this point they lose their job). Why not just drone strike them?
"He didn't need the money. He wasn't sure he didn't need the gold." (an Isaac Asimov short story)
> More suicidal people than not, would cease to be suicidal if ...
I'm going to need to see a citation on this one.
It appears to be the only way.
The reality is most systems are designed to cover asses more than meet needs, because systems get abused a lot - by many different definitions, including being used as scapegoats by bad actors.
But there is no actually effective way to do that as an online platform. And there are plenty of ways that would cause more harm (statistically).
My comment was more ‘how the hell would you know in a way anyone could actually do anything reasonable, anyway?’.
People spam ‘Reddit cares’ as a harassment technique, claiming people are suicidal all the time. How much should the LLM try to guess? If they use all ‘depressed’ words? What does that even mean?
What happens if someone reports a user is suicidal, and we don’t do anything? Are we now on the hook if they succeed - or fail and sue us?
Do we just make a button that says ‘I’m intending to self harm’ that locks them out of the system?
I will take this seriously when you propose a test that can distinguish between that and something with actual "intelligence or understanding"
When AI gets there (and I’m confident it will, though not confident LLMs will), I think that’s convincing evidence of intelligence and creativity.
Damn I thought we'd got over that stochastic parrot nonsense finally...
In retrospect, from experience, I'd take the LLM.
If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we'd need the big picture, too: maybe these vehicles cause deaths (which is awful), but they also transport people to their destinations alive. With that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
Of course, and that's part of why I say that we need to measure the impact. It could be net positive or negative, we won't know if we don't find out.
> If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we'd need the big picture, too: maybe these vehicles cause deaths (which is awful), but they also transport people to their destinations alive. With that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
I'm not advocating against improving safety; I'm arguing against a comment that said that "ChatGPT should be nowhere near anyone dealing with psychological issues" because it can cause death.
Following your analogy, cars objectively cause deaths (and not only of people with psychological issues, but of people in general) and we don't say that "they should be nowhere near a person". We improve their safety even though zero deaths is probably impossible, which we accept because they are useful. This is a big-picture approach.
That is the literal opposite of how medical treatment is regulated. Treatments should be tested and studied before availability to the general public. It's irresponsible in the extreme to suggest this.
if a therapist were ever found to have said this to a suicidal person, they would be immediately stripped of their license and maybe jailed.
* ignoring the case of ethical assisted suicide for reasons of terminal illness and such, which doesn’t seem relevant to the case discussed here.
"Before declaring that it shouldn't be near anyone with psychological issues" is backwards. Before providing it to people with psychological issues, someone should study whether the positive impact is greater than the negative.
Trouble is, this is such a generalized tool that it's very hard to do that.
The basic premise underlying GP's statements is that although not perfect, we should use the technology in such a way that it maximizes the well-being of the largest number of people, even if it comes at the expense of a few.
But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread and a famous high-profile person and your (not famous) daughter end up among "the few" for whom LLM therapy goes terribly wrong, and both commit suicide. The loss of the famous person will cause thousands (perhaps millions) of people to be a bit sad, and the loss of your daughter will cause you unimaginable pain. Which one is greater? Can they even be compared? And how many people with a successful LLM therapy are enough to compensate for either one?
Unmeasurable well-being then makes these moral calculations at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?
Suppose for the sake of argument we accept the above, and there is a way to measure well-being. Then would it be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives because of bad LLM therapy, but one very famous person in the entertainment industry is saved by it. Let's suppose that this famous person's well-being, plus the millions of spectators' improved well-being (through their entertainment), is worth enough to compensate for the people who died.
This means saving a famous funny person justifies the death of many. This does not feel just, does it?
There is a vast amount of literature on this topic (criticisms of utilitarianism).
There's no need to overcomplicate it. Assume each life has equal value and proceed from there.
we already have an approval process for medical interventions. are you suggesting the government shut ChatGPT down until the FDA can investigate its use for therapy? because if so I can get behind that
Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides is greater than the negative or vice versa (I'm not a social scientist, so I have no idea what the methodology would look like, but it should be doable... or if it currently isn't, we should find a way).