furyofantares
This is half right, I think, and half very wrong. I always tell people that if they're arguing with the LLM they're doing it wrong, and part of that is certainly that there are things it can't do and arguing won't change that. But the other part is that it's hard to overstate these models' sensitivity to their context: when you're arguing about something the model can do, you should start over with a better prompt (and, critically, without the polluted context from its original attempt).
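To make that concrete, here's a minimal sketch in plain Python (no real API calls; `argue_in_place` and `restart_with_better_prompt` are made-up names for the two habits) contrasting arguing in-thread with starting fresh:

```python
def argue_in_place(history, wrong_answer, rebuttal):
    """Keeps the polluted context: the wrong answer stays in the
    transcript, so the model is still conditioned on it."""
    return history + [
        {"role": "assistant", "content": wrong_answer},
        {"role": "user", "content": rebuttal},
    ]

def restart_with_better_prompt(original_prompt, lesson_learned):
    """Discards the polluted context entirely and folds what you
    learned from the failed attempt into a fresh prompt."""
    return [
        {"role": "user",
         "content": f"{original_prompt}\n\nConstraint: {lesson_learned}"}
    ]

history = [{"role": "user", "content": "Write a parser for config files"}]
polluted = argue_in_place(history, "Just use a regex.",
                          "No, a regex won't handle nesting.")
fresh = restart_with_better_prompt("Write a parser for config files",
                                   "do not use regex; handle nesting")
```

The point of the sketch: in `polluted`, the model's wrong answer is still sitting in the transcript it will be conditioned on; in `fresh`, it is simply gone.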
Real arguments would be fine. The problem is that the LLM always acquiesces to your argument and then convinces itself that it will never do that again. It then proceeds to do it again.
I really think the insufferable obsequiousness of every LLM is one of the core flaws that make them terrible pair programmers.
So, you're right in a way. There's no sense in arguing with them, but only because they refuse to argue.
I've noticed I tend to word things in a way that implies the opposite of what I want it to do. The obsequiousness is very obnoxious indeed.
But I think the bigger reason arguing doesn't work is that they are still fundamentally next-token predictors. The wrong answer was already something the model considered probable before it polluted its own context with it. Arguing is an attempt to make that wrong answer less probable, but by answering incorrectly the model has already reinforced it: the wrong answer now sits in the context, raising the odds it gets produced again.
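A toy illustration of that dynamic (a made-up frequency "model", not a real LLM): suppose next-token probability were just a token's relative frequency in the context. Once the wrong answer is in the context, even a rebuttal that names it raises its count:

```python
from collections import Counter

def next_token_probs(context):
    """Toy autoregressive 'model': each token's probability is its
    relative frequency in the context. A crude stand-in for the way
    transformers are biased toward repeating what is already there."""
    counts = Counter(context)
    total = len(context)
    return {tok: c / total for tok, c in counts.items()}

prompt = "write a parser".split()
wrong_answer = "just use a regex".split()
rebuttal = "no do not use a regex".split()

p_before = next_token_probs(prompt).get("regex", 0.0)
p_after = next_token_probs(prompt + wrong_answer + rebuttal).get("regex", 0.0)
# The rebuttal argues *against* "regex", yet the token is now more
# likely than before the model ever said it: p_after > p_before.
```

Real models are vastly more sophisticated than a frequency count, of course, but the direction of the effect is the same: the transcript is evidence, and the model's own mistake is now part of the evidence.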