
Given that this is in response to a ChatGPT user who killed his mother and then himself, I'm not sure that positioning your product as being more secure than ChatGPT is wise, because your marketing here suggests one of three things:

1. Profound tone-deafness about appropriate contexts for privacy messaging

2. Intentional targeting of users who want to avoid safety interventions

3. A fundamental misunderstanding of your ethical obligations as an AI provider

None of these interpretations reflect well on AgentSea's judgment or values.


I disagree. The fact that crimes committed by a mentally ill person are going to be used as justification for surveillance of the wider population of users is a strong ethical reason to advocate for more security.

Yeah, it'd be terrible if all our emails, DNS queries, purchase histories, messages, Facebook posts, Google searches, in-store purchases, and driving and GPS info were being tracked, cataloged, and sold to anyone who wants it! Why, people would never stand for such surveillance!

Anyone with half a brain complaining about hypothetical future privacy violations on some random platform just makes me spit milk out my nose. What privacy?! Privacy no longer exists, and worrying that your chat logs are gonna get sent to the authorities seems to me like worrying that the cops are gonna give you a parking ticket after your car blew up because you let the mechanic put a bomb in the engine.

Things suck, therefore it doesn't matter if things suck even more.

Just not a very good argument.

Or maybe I just want to be able to talk to an LLM without worrying about whether it's going to report me to the authorities.

That's a good point; privacy is important.

To play devil's advocate for a second: what if someone who's mentally ill uses a local LLM for therapy and doesn't get the help they need? Even if that help would be against their will? And they commit suicide or kill someone because the LLM said it's the right thing to do…

Is being dead better, or is having complete privacy better? Or does it depend?

I use local LLMs too, but it's disingenuous to act like they solve the _real_ problem here: mentally ill people trying to use an LLM for therapy. It can end catastrophically.

I don't want to deal with prompt injection attacks leading to being swatted. That's where all this reporting to the authorities is leading and it's not looking fun.

> Is being dead better, or is having complete privacy better? Or does it depend?

I know you're being provocative, but this feels like a false dichotomy. Mental health professionals are pro-privacy AND are subject to mandatory reporting laws based on their best judgement. Do we trust LLMs to report a suicidal person who has been driven there by the LLM itself?

LLMs can't truly be controlled, and they can't be designed so that they never encourage mentally ill people to kill themselves.

> Mentally ill people trying to use an LLM for therapy

Yes, indeed, this is one of the core problems. I have experimented with this myself and the results were highly discouraging. Others who don't have the same level of discernment about LLM usage may mistake the confidence of the output for that of a well-trained therapist.

I too think there should be no rules or attempts to de-risk any situation; just let us die.

Are you in America? Do you also support banning guns?

