Anyone with half a brain complaining about hypothetical future privacy violations on some random platform just makes me spit milk out my nose. What privacy?! Privacy no longer exists, and worrying that your chat logs are gonna get sent to the authorities seems to me like worrying that the cops are gonna give you a parking ticket after your car blew up because you let the mechanic put a bomb in the engine.
Just not a very good argument.
To play devil's advocate for a second, what if someone who's mentally ill uses a local LLM for therapy and doesn't get the help they need, even if that help would have to come against their will? And they commit suicide or kill someone because the LLM said it's the right thing to do…
Is being dead better, or is having complete privacy better? Or does it depend?
I use local LLMs too, but it's disingenuous to act like they solve the _real_ problem here: mentally ill people trying to use an LLM for therapy. It can end catastrophically.
> Is being dead better, or is having complete privacy better? Or does it depend?
I know you're being provocative, but this feels like a false dichotomy. Mental health professionals are pro-privacy AND subject to mandatory reporting laws that rely on their best judgement. Do we trust LLMs to report a suicidal person who has been driven there by the LLM itself?
LLMs can't truly be controlled, and they can't be reliably designed not to encourage mentally ill people to kill themselves.
> Mentally ill people trying to use an LLM for therapy
Yes, indeed, this is one of the core problems. I have experimented with this myself and the results were highly discouraging. Others who don't have the same level of discernment about LLM usage may mistake the confident tone of the output for the advice of a well-trained therapist.
1. Profound tone-deafness about appropriate contexts for privacy messaging
2. Intentional targeting of users who want to avoid safety interventions
3. A fundamental misunderstanding of your ethical obligations as an AI provider
None of these interpretations reflect well on AgentSea's judgment or values.