And while Merriam-Webster's definition is "the act of causing someone to accept as true or valid what is false or invalid", which might exclude LLMs, Oxford simply defines deception as "the act of hiding the truth, especially to get an advantage", with no requirement that the deceived be sentient.
At some point, the purely reductionist view stops being very useful.
And "lying" to it is not morally equivalent to lying to a human.
I never claimed as much.
This is probably a problem of definitions: to you, "lying" seems to require that the entity being lied to be a moral subject.
I'd argue that it's enough for it to have some theory of mind (i.e. be capable of modeling "who knows/believes what" with at least some fidelity), and for the liar to intentionally obscure their true mental state from it.
“Lying” traditionally requires only belief capacity on the receiver’s side, not qualia/subjective experiences. In other words, it makes sense to talk about lying even to p-zombies.
I think it does make sense to attribute some belief capacity to (the entity role-played by) an advanced LLM.
No need to say he "lied" and then use an analogy of him lying to a human being, as the comment I originally objected to did.
I can lie to a McDonald's cashier about what food I want, or I can lie to a kiosk.. but in either circumstance I'll wind up being served the food that I asked for and didn't want, won't I?
Ok, I'm with you so far..
> Better to think of it as a mentally disturbed minor...
Proceeds to use emotive, anthropomorphic language about a software tool..
Or perhaps that is the point and I got whooshed. Either way I found it humorous!
Another is that this is a new and poorly understood (by the public at least) technology that giant corporations make available to minors. In ChatGPT's case, they require parental consent, although I have no idea how well they enforce that.
But I also don't think the manufacturer is solely responsible, and to be honest I'm not that interested in assigning blame, just keen that lessons are learned.
Using emotive, anthropomorphic language about a software tool is unhelpful, in this case at least. Better to think of it as a mentally disturbed minor who found a way to work around a tool's safety features.
We can debate whether the safety features are sufficient, whether it is possible to completely protect a user intent on harming themselves, whether the tool should be provided to children, etc.