It's a service, not a tool. If you want a tool, run some local LLM.
As a rather hilarious and really annoying related issue: I have a real use case where the application I'm working on partially monitors/analyzes the bloodlines of some rather specific/ancient mammals used in competition, and... well... it doesn't like terms like "breeders" and "breeding".
Arbitrary government censorship on top of arbitrary corporate censorship is a hell no for me, forever.
* https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
OTOH Anthropic and OpenAI seem to be in some kind of competition to make their models refuse as much as possible.
But it is very limiting, and it adds many arbitrary landmines of obscure political correctness, no doubt based on some perverse, incoherent, totalitarian list of allowed topics.
I use AI for different things, though, including proofreading posts on political topics. I have run into situations where ChatGPT just freezes and refuses. Example: discussing the recent rape case involving a 12-year-old in Austria. I assume its guardrails detect "sex + kid" and give a hard "no" regardless of the actual context or content.
That is unacceptable.
That's like your word processor refusing to let you write about sensitive topics. It's a tool; it doesn't get to make that choice.