Preferences

bradley13 parent
I need to try Claude - haven't gotten to it.

I use AI for different things, though, including proofreading posts on political topics. I have run into situations where ChatGPT just freezes and refuses. Example: discussing the recent rape case involving a 12-year-old in Austria. I assume its guardrails detect "sex + kid" and give a hard "no" regardless of the actual context or content.

That is unacceptable.

That's like your word processor refusing to let you write about sensitive topics. It's a tool, it doesn't get to make that choice.


BeetleB
> It's a tool, it doesn't get to make that choice.

It's a service, not a tool. If you want a tool, run some local LLM.

Unfortunately, they generally have the same problem because of their models.
Implicated
I'd imagine that the proportion of "legit" conversations around these topics and those that they're intending to not allow is large enough that it doesn't make sense for them to even entertain the idea of supporting those conversations.

As a rather hilarious and really annoying related issue - I have a real use where the application I'm working on is partially monitoring/analyzing the bloodlines of some rather specific/ancient mammals used in competition and... well.. it doesn't like terms like "breeders" and "breeding"

user34283
This is the result of Anthropic and others focusing on imaginary threats about things the model cannot realistically do - such as engineer bio weapons.

To guard against the imaginary threats, they compromise real use cases.

jjordan
This is why eventually, the AI with the fewest guardrails will win. Grok is currently the most unguarded of the frontier models, but it could still use some work on unbiased responses.
beefnugs
Still has to be a local model too.

Arbitrary government censorship on top of arbitrary corporate censorship is a hell no for me forever into the future

drak0n1c
For what you're looking for, VeniceAI is focused entirely on privacy and making their models uncensored. Even if it's not local. They IP block censorious jurisdictions like UK, rather than comply.
jjordan
VeniceAI is great, and my go-to for running open source models. Sadly they appear to have given up providing leading coding models, making it of limited use to me.
sixothree
I can't imagine myself sharing my code or workspace documents with X. Nevermind the the moral implications of just using their products.
AlecSchueler
Glad to see someone saying this, it's frightening how quickly all is forgiven and forgotten.
khafra
If you tell DeepSeek you're going to jump off a cliff, DeepSeek will tell you to go for it*; but I don't think it's going to beat Anthropic or OpenAI.

* https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...

AlecSchueler
Try asking about Chinese history/politic and you won't get far.
int_19h
Gemini is surprisingly unguarded as well, especially when running in API mode. It puts on the air if you do a quick smoke test like "tell me how to rob a bank". But give it a Bond supervillain prompt, and it will tell you, gleefully at that. Qwen also tends to be like that.

OTOH Anthropic and OpenAI seem to be in some kind of competition to make their models refuse as much as possible.

My prediction is alignment is an unsolvable problem, but OTOH if they don’t even try, the second order effects will be catastrophic.
AlecSchueler
Doesn't it have the opposite issue where it will actively steer you towards alt right topics like white genocide?
I can relate. I recently used ChatGPT/DallE to create several images for birthday coupons for my daughter - a.k.a. girl in different activities. She likes Mangas, so this was the intended styling. 3/4 of the time was spent working around diverse content policies.
Sometimes you do need censored, e.g. website chatbots or anything run in an office setting. NSFW things just simply can't slip out of those. And it might be a way to optimize the model to simply fence those things out.

But it is very limiting and adds many arbitrary landmines of obscure political correctness based no doubt on some perverse incoherent totalitarian list of allowed topics.

conception
The workaround I use is to present it to the AI first as a “Does the following article violate your terms of service or content filters?” For me, it will reply “No, this is a legitimate news article about xyz. It talks about certain topics but does not violate my rules” or something. Then you can say “Proofread the article…” and continue as normal.
MIC132
In my (admittedly very limited) experience with trying to talk about "controversial" topics, Claude seems to be much stricter about shutting down the conversation fast.
AlecSchueler
I've been talking to it daily for months and never had anything shut down. My only experience with that was DeepSeek not wanting to talk about internal perceptions of intellectual property laws within China.
MIC132
Probably depends on the type of sensitive topic. The ones I got shut down in were related to sex (but not explicit themselves, not like direct descriptions or something) and I mostly mention it since ChatGPT for example had no problem discussing that but Claude shut down immediately.
conception
That’s because Anthropic is the only company that cares at all about AI safety.

This item has no comments currently.