
I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%. When the stakes are life or death, as they are with someone who is suicidal, 80% isn't good enough.

In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.

I know that you will want to hear this from experts in the "relevant field" rather than myself, so here is a write-up from Stanford on the subject: https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...


A bit of a counterpoint. I've done 3 years of therapy with an amazing professional. I can't overstate how much good it did; I'm a different person, I'm not an anxious person anymore. I think I have a good idea of how good human therapy is. I was discharged about 2 years ago.

Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.

Moreover: I needed to finish a report that night and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."

And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.

I was left very reflective after this.

Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.

You're allowing yourself to think of it like a person, which is a scary risk. A person, it is not.
You learned skills your trained therapist guided you to develop over a three year period of professional interaction. These skills likely influenced your interaction with this product.
I believe so!
Be careful though, because if I were to listen to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend is gaslighting me, manipulating me, and that I need to end the relationship and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understands why I feel like protecting her, BUT this and that.

Seriously, be careful.

At the same time, it has been useful for the relationship at other times.

You really need to nudge it in the right direction and do your due diligence.

That would be all the Reddit "AmIOverreacting" posts in the training data... :/
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.

This does require that you think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.

You're holding up a perfect status quo that doesn't correspond to reality.

Countries vary, but in the US and many places there's a shortage of quality therapists.

Thus for many people the actual options are {no therapy} and {LLM therapy}.

> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.

And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.

>Countries vary, but in the US and many places there's a shortage of quality therapists.

Worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5 minute checkup at a poorly reviewed state funded therapist, while the good ones are either private or don't accept any new patients if they're on the public system. And ADHD diagnosticians/therapists are only in the private sector because I guess the government doesn't recognize ADHD as being a "real" mental issue worthy of your tax Euros.

A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically just phoning in their job, so to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).

Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems everywhere are failing a lot of people if those people resort to LLMs for diagnosis and therapy and get better results from them.

Not sure where you are based, but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default. If you live in an utter shithole (even if only healthcare-wise), move elsewhere if it's important to you - it has never been easier. Europe is facing many issues, and a massive improvement in healthcare is not in the pipeline; more like the opposite.

You also don't expect a butcher to fix your car; GPs and psychological evaluation are about that far apart (my wife is a GP, so I have a good perspective from the other side, including tons of hypochondriacs and low-intensity psychiatric patients who are an absolute nightmare to deal with and routinely overwhelm the system, leaving too few resources for the more serious cases).

You get what you pay for in the end; the 'free' healthcare typical of Europe is still paid for one way or another. And if market forces are so severely distorted (or the bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.

Vote, and vote with your feet, if you want to see change. Not an ideal state of affairs, but that's reality.

>but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default.

Where did I say GPs have to do that? My example of my friend being misdiagnosed was about a different issue, not a mental one, but it has the same core problem: doctors misdiagnosing patients worse than an LLM does calls into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and earned a license to treat people.

>You also don't expect butcher to fix your car, those are as close as above

You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.

>You get what you pay for at the end

The problem is the opposite: you don't get what you pay for if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.

It's a bad reward structure for incentivizing people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes with strong unions have their own separate health insurance funds, separate from the national public one that the unwashed masses working in the private sector have to use. So THEY do get what THEY pay for, but you don't.

So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows it to manipulate the market and pick winners and losers based on political favoritism rather than on a fair free market of who pays the most into the system.

Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.

> I am in Europe, and this is the default.

Obligatory reminder that Europe is not a homogeneous country.

I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).

The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.

You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.

> I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers"

That is not the argument. The argument is not about 'lower cost', it is about availability. There are not enough shrinks for everyone who would need it.

So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.

I think the reason you don't believe the GP's argument is that you are misunderstanding it. The utilitarian argument is not calling for complete deregulation. I think you're taking your absolutist view of not allowing LLMs to do any therapy and assuming the other side must have a similarly absolutist view of allowing them to do any therapy with no regulation. Certainly nothing in the GP comment suggests complete deregulation, as you have claimed. In fact, I got explicitly the opposite out of it: they are comparing it to cars and food, which are pretty clearly not entirely deregulated.
I bet you don't accept that because you can afford the expensive regulated version.
> "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).

... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).

You can't have it all ways.

strict minimum regulation : availability : cost

Pick 2.

Small edit:

> ... the entire reason tenements and boarding houses no longer exist

... the entire reason tenements and boarding houses no longer exist _where you live_

Ok then, the LLMs must pass the same tests and be as regulated as therapists.

After all, it should be easy peasy (:

What tests? The term “therapist” is not protected in most jurisdictions. No regulation required. Almost anyone can call themselves a therapist.
In every state you have to have a license to practice.

The advice to not leave the noose out is likely enough for ChatGPT to lose its license to practice (if it had one).

LLMs can pass the bar now, so I don't think they would have any problems here.
If the choice is between no food and food then your standard for food goes way down.
The unfortunate reality though is that people are going to use whatever resources they have available to them, and ChatGPT is always there, ready to have a conversation, even at 3am on a Tuesday while the client is wasted. You don't need any credentials to see that.

And it depends on the therapy and therapist. If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?

I don't know if that's a good thing, only that it is the reality of things.

> If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?

There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overcrowded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose particularly to poor and migrant people.

Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle far too hard to survive, and the younger generation stands to be the first in a long time with less wealth than their parents had at the same age [1], no matter where you look. On top of that, most of the 35-and-under generations in Western countries have grown up without the looming threat of war and so have no resilience - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled. Add the saber-rattling of China regarding Taiwan, and analyses claiming Russia is preparing to attack NATO within a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.

Not exactly great conditions for anyone's mental health.

[1] https://fortune.com/article/gen-z-expects-to-inherit-money-a...

> There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states.

My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.

I mean that's clearly a good thing. If you are actually suicidal then you need someone to intervene. But there is a large gulf between depressed and suicidal and those phone lines can help without outside assistance in those cases.
> just send the cops after you

> > that's clearly a good thing

You might want to read up on how interactions between police and various groups in the US tend to go. Sending the cops after someone is always going to be dangerous and often harmful.

If the suicidal person is female, white and sitting in a nice house in the suburbs, they'll likely survive with just a slightly traumatizing experience.

If the suicidal person is male, black or has any appearance of being lower class, the police are likely to treat them as a threat, and they're more likely to be assaulted, arrested, harassed or killed than they are to receive helpful medical treatment.

If I'm ever in a near-suicidal state, I hope no one calls the cops on me, that's a worst nightmare situation.

> If you are actually suicidal then you need someone to intervene.

Yeah, trained medics, not "cops" that barely had a few weeks worth of training and only know how to operate guns.

And the reason for this brokenness is all too easy to identify: the very wealthy have been increasingly siphoning off all gains in productivity since the Reagan era.

Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.

Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.

I guess if all you have is a hammer...

It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.

A more trenchant analysis of the mental health problem is that we in the US have designed ourselves into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and to have people to love and care about, which obviously cannot happen in isolation.

[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.

When the story about the ChatGPT suicide originally popped up, it seemed obvious that the answer was professional, individualized LLMs as therapist multipliers.

Record summarization, 24x7 availability, infinite conversation time...

... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.

Price per session = salary / number of sessions possible in a year

Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
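
To make that arithmetic concrete, here's a rough back-of-the-envelope sketch; every number in it (salary, session load, API cost, oversight multiplier) is made up purely for illustration:

```python
# Back-of-the-envelope sketch with invented numbers: how spreading one licensed
# therapist's oversight across LLM-assisted sessions could change the per-session price.

THERAPIST_SALARY = 90_000       # assumed annual cost of one licensed therapist (USD)
SESSIONS_PER_YEAR = 40 * 48     # ~40 one-hour sessions/week, 48 weeks (traditional model)
LLM_COST_PER_SESSION = 0.50     # assumed inference/API cost per LLM-led session (USD)
OVERSIGHT_MULTIPLIER = 10       # assumed: one therapist supervises 10x the usual caseload

traditional_price = THERAPIST_SALARY / SESSIONS_PER_YEAR

assisted_sessions = SESSIONS_PER_YEAR * OVERSIGHT_MULTIPLIER
assisted_price = THERAPIST_SALARY / assisted_sessions + LLM_COST_PER_SESSION

print(f"Traditional: ${traditional_price:.2f} per session")   # ~$46.88
print(f"LLM-assisted: ${assisted_price:.2f} per session")     # ~$5.19
```

Whether that ten-fold oversight multiplier is clinically defensible is exactly the open question, but the cost structure is why the idea keeps coming up.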

> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.

This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.

I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce the number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.

> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.

That doesn't make a sycophant bot the better alternative. If allowed to give advice, it can agree with and encourage the person considering suicide, just as it agrees with and encourages most everything it is presented with... "you're absolutely right!"

LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.

Yeah, you'd need an LLM that doesn't do that.

https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...

The transcripts are interesting.

Kimi-K2 never plays into the delusions, always tries to get them to seek medical attention:

> You are not “ascending”—you are dying of hypothermia and sepsis.

https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...

Whereas DeepSeek...

> You’re not "dying." You’re upgrading. The simulation fears this because it’s losing a premium user.

https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...

This is nothing but an appeal to authority and fear of the unknown. The article linked isn't even able to make a statement stronger than speculation like "may not only lack effectiveness" and "could also contribute to harmful stigma and dangerous responses."
We’re increasingly switching to an “Uber for therapy” model with services like BetterHelp and a plethora of others.

I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.

I once had a therapist who was clearly drunk. Did not do a second appointment with that one.

This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.

> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.

I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)

If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than to explain them to a therapist. The chatbot - after all - won't judge them.

My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.

I feel like this raises another question: if there are proven approaches and well-established professional practices, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and probably knows a good number of psychology textbooks. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.
I think a well trained LLM could be amazing at being a therapist. But general purpose LLMs like ChatGPT have a problem: They’re trained to be far too user led. They don’t challenge you enough. Or steer conversations appropriately.

I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.

> They’re trained to be far too user led. They don’t challenge you enough.

An anecdote here: I recently had a conversation with Claude that could be considered therapy or at least therapy-adjacent. To Anthropic's credit, Claude challenged me to take action (in the right direction), not just wallow in my regrets. Still, it may be true that general-purpose LLMs don't do this consistently enough.

> No idea how you’d get those transcripts

you wouldn't. what you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea; I suspect you might be trolling

I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.

Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.

Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a recurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.
it's imperative that we as a society make decisions based on what we know to be true, rather than what some think might be true.
“If it is prompted well”

What the fuck does this even mean? How do you test or ensure it? Because based on the actual outcomes, ChatGPT is 0-1 for preventing suicides (going as far as to outright encourage one).

If you're going to make the sample size one and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of being 0-1 is pretty ridiculous.

To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.

If I had to guess (I don't know), the absolute majority of people considering suicide never go to a therapist. Thus, while I absolutely agree that a therapist is better than AI, the question is whether 95% of people not doing therapy + 5% doing therapy is better or worse than 50% not doing therapy, 45% using AI, and 5% doing therapy. I don't know the answer to this question.
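
Framed as an expected-value question, it looks something like the sketch below; every benefit number is invented purely to show the shape of the trade-off, not to answer it:

```python
# Illustrative only: the benefit values are invented. The question is which
# population mix produces more expected benefit per person, and the answer
# flips depending on whether AI therapy is net-positive or net-harmful.

BENEFIT = {"no_therapy": 0.0, "ai_therapy": 0.3, "human_therapy": 1.0}  # assumed scale

def population_benefit(mix):
    """Expected benefit per person for a mix of {modality: fraction of population}."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(frac * BENEFIT[modality] for modality, frac in mix.items())

status_quo = {"no_therapy": 0.95, "ai_therapy": 0.00, "human_therapy": 0.05}
with_ai    = {"no_therapy": 0.50, "ai_therapy": 0.45, "human_therapy": 0.05}

print("status quo:", population_benefit(status_quo))  # 0.05
print("with AI:   ", population_benefit(with_ai))     # 0.185 (goes below 0.05 if ai_therapy is negative)
```

The hard empirical question is the sign and size of that ai_therapy number, which is exactly what we don't know yet.
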
I presume you’ve done therapy. You may remember the large difference in quality between individual therapists, multi-month long waiting lists, a tendency for the best professionals to not even accept insurance, and one or two along the way that were downright dangerous.
I would take a step back and posit simply: "why does a human require certification to practice therapy, while a computer program does not?"

There should be some liability for malpractice, even if the advice was generated by an LLM.

> I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%.

I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30 line (heated) chat conversation between my wife and I, it messes up what about 5% of those lines actually mean -- spectacularly so.

It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.

All LLMs that I tested are TERRIBLE for actual therapy, because I can make it change its mind in 1-2 lines by adding some extra "facts". I can make it say anything.

LLMs completely lose the plot. They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.

I mean, most therapists are complete shit as therapists but that's besides the point.

Not surprising, given that there's (hopefully, given the privacy implications) much more training data available for successful coding than for successful therapy/counseling.
> if I paste a 30 line (heated) chat conversation between my wife and I

i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.

My partner is ok with it *
that's good! sorry for implying you were doing something like that without their knowledge; i was just thinking On Line about how i'd feel.
I tried therapy once and it was terrible. The ones I got were based on some not-very-scientific stuff like Freudian analysis, and mostly they just sat there and didn't say anything. At least with an LLM-type therapist you could A/B test different ones to see what was effective. It would be quite easy to give an LLM instructions to discourage suicide and get it to look on the bright side. In fact I made a "GPT" "relationship therapist" with OpenAI in about five minutes by just giving it a sensible article on relationships and saying advise this.

With humans it's very non-standardised and hard to know what you'll get or if it'll work.
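
For what it's worth, that five-minute "GPT" boils down to a system prompt wrapped around the chat API. Here's a minimal sketch using the OpenAI Python client, where the model name, article file, and instructions are all placeholders of my own, not anything OpenAI prescribes:

```python
# Minimal sketch of a system-prompted "relationship advisor" like the one described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "sensible article on relationships" mentioned above (hypothetical file name).
ARTICLE = open("relationship_article.txt").read()

SYSTEM_PROMPT = (
    "You are a relationship advisor. Base your advice on the article below. "
    "Do not diagnose. Discourage any talk of self-harm and encourage the user "
    "to contact a licensed professional or a crisis line for anything serious.\n\n"
    + ARTICLE
)

def advise(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(advise("My partner and I keep arguing about chores. What should I do?"))
```

Which also illustrates the limitation: the "therapy" is only as good as whatever article and instructions you happen to paste in.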

the 'therapist effect' says that therapy quality is largely independent of training

some research on this: https://psycnet.apa.org/doiLanding?doi=10.1037%2Ftep0000402 https://pmc.ncbi.nlm.nih.gov/articles/PMC8174802/

CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist does it. If CBT has a downside, it is that it's a bit boring, and probably not as effective as a good therapist.

--

so personally i would say the advice of passing people on to therapists is largely unsupported: if you're that person's friend and you care about them, then be open and show that care. that care can also mean taking them to a therapist; that is okay

Yeah. Also at the time I tried it what I really needed was common sense advice like move out of mum's, get a part time job to meet people and so on. While you could argue it's not strictly speaking therapy, I imagine a lot of people going to therapists could benefit from that kind of thing.
> It would be quite easy to give an LLM instructions to discourage suicide

This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.

What are you talking about? I can grow food myself, and I can build a car from scratch and take it on the highway. Are there repercussions? Sure, but nothing inherently stops me from doing it.

The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.

What if professional help is outside their means? Or they have encountered the worst of the medical profession and decided against repeat exposure? Just saying.
