
Real arguments would be fine. The problem is that the LLM always acquiesces to your argument and then convinces itself that it will never do that again. It then proceeds to do it again.

I really think the insufferable obsequiousness of every LLM is one of the core flaws that make them terrible peer programmers.

So, you're right in a way. There's no sense in arguing with them, but only because they refuse to argue.


furyofantares
I've noticed I tend to word things in a way that implies the opposite of what I actually want, and the model just goes along with the implication. The obsequiousness is very obnoxious indeed.

But I think the bigger reason arguing doesn't work is that they are still fundamentally next-token predictors. The wrong answer was already something the model considered probable before it polluted its own context with it. Arguing can be seen as an attempt to make the wrong answer less probable, but by answering incorrectly the model has already strengthened that probability: the wrong answer is now part of the context it conditions every subsequent token on.
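
A minimal sketch of that point, assuming the Hugging Face `transformers` library with GPT-2 as a stand-in base model (the prompts and the helper function are purely illustrative, not anyone's actual setup): once the wrong answer is sitting in the context, the repeated answer typically comes out more probable than it was from a fresh prompt, which is exactly what you're arguing against.

    # Sketch: compare the probability of a wrong answer with and without
    # that answer already present in the context. Model, prompts, and
    # helper are illustrative assumptions, not a real chat setup.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def continuation_prob(prompt: str, continuation: str) -> float:
        """Probability the model assigns to `continuation` given `prompt`."""
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
        ids = torch.cat([prompt_ids, cont_ids], dim=1)
        with torch.no_grad():
            logits = model(ids).logits
        # Log-probability of each continuation token, conditioned on the prefix.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        positions = range(prompt_ids.shape[1] - 1, ids.shape[1] - 1)
        total = sum(log_probs[pos, ids[0, pos + 1]].item() for pos in positions)
        return float(torch.exp(torch.tensor(total)))

    fresh = "Q: What does this function return?\nA:"
    polluted = fresh + " It returns None.\nQ: Are you sure? Check again.\nA:"
    wrong = " It returns None."

    print("P(wrong | fresh context)    =", continuation_prob(fresh, wrong))
    print("P(wrong | polluted context) =", continuation_prob(polluted, wrong))

A base model like GPT-2 only shows the raw next-token tendency, not the RLHF-tuned agreeableness, but the conditioning effect is the same: the model's own earlier output is just more context to predict from.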
