Preferences

Again, is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved? I get that folks can be contrarian (sadly, contrariness is probably my defining trait), but it seems unlikely that someone would argue that you’re wrong by using output they didn’t read. I see impedance mismatches regularly, yet folks often seem to assume laziness/apathy/stupidity/pride is the reason for the mismatch. The best advice I ever received is “Assume folks are acting rationally, with good intention, and with a willingness to understand others” — which, for some reason, fits oddly well with Hanlon’s razor in my contrarian mind, but I tend to make weird connections like that.

kevmo314
> is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved?

Yes; however, if that's the case they will typically respond with some variant of "ChatGPT mentioned xyz so I started poking in that direction, does that make sense?" The response is markedly different when people are using ChatGPT to try to understand better, and I have no issue with that.

I get what you're suggesting, but I don't think people are being malicious; it's more that the discussion has gotten too deep and they're exhausted, so they'd rather opt out. In some cases, yes, that does mean the discussion could've been simplified, but sometimes when the reason is deep and technical, that's hard to avoid.

A concrete example: we once had to figure out a bug in some assembly code and were looking at a specific instruction. I didn't believe that instruction was wrong, and I pointed at the docs suggesting it lined up with what we were observing it doing. Someone responded with "I asked ChatGPT and here's what it said: ..." without offering any opinion of their own on ChatGPT's output. In fact, reading the output, it basically restated what I said, but said engineer used it as justification to rewrite the instruction to something else. And at that point I was like, y'know what, I just don't care enough.

Unsurprisingly, it didn't work, and the bug never got fixed because I lost interest in continuing the discussion too.

I think what you're describing does happen in good faith, but people also use the wall of text that ChatGPT produces as an indirect way to say "I don't care about your opinion on this matter anymore."
