I've noticed I tend to word things in a way that implies the opposite of what I actually want. The obsequiousness is very obnoxious indeed.
But I think the bigger reason arguing doesn't work is that they are still fundamentally next-token predictors. The wrong answer was already something the model considered probable before it polluted its own context with it. Arguing is an attempt to make the wrong answer less probable, but by answering incorrectly the model has already reinforced it: every token it generates from then on is conditioned on the wrong answer sitting right there in the context.
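One way to see this conditioning effect concretely is to score the same correction with and without the wrong answer already in the context. This is a minimal sketch, not anything from the thread: it assumes the Hugging Face `transformers` library, uses `gpt2` purely as a stand-in model, and the prompt and answer strings are made up for illustration.

```python
# Sketch: how a wrong answer in the context shifts what the model
# considers probable next. Assumes `transformers` and `torch` are
# installed; "gpt2" is just an illustrative small causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log P(token | all preceding tokens) over the continuation."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Logits at position pos-1 predict the token at position pos.
    for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

prompt = "Q: Is `is` the right way to compare strings in Python?\nA:"
wrong = " Yes, `is` compares string values."
fix = " No, use `==`; `is` checks identity, not equality."

# Score the same correction against a clean context and a polluted one.
clean = continuation_logprob(prompt, fix)
polluted = continuation_logprob(prompt + wrong + "\nActually,", fix)
print(f"log P(fix | clean prompt)    = {clean:.2f}")
print(f"log P(fix | polluted prompt) = {polluted:.2f}")
```

The point of the toy comparison is that the model never "reconsiders" anything: both scores are just conditional probabilities, and once the wrong answer is part of the conditioning context, the distribution over what follows has already moved.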
I really think the insufferable obsequiousness of every LLM is one of the core flaws that make them terrible pair programmers.
So, you're right in a way. There's no sense in arguing with them, but only because they refuse to argue.