Preferences

The thing that bothers me about "warmer, more conversational" is that it isn't just a cosmetic choice. The same feedback loop that rewards "I hear you, that must be frustrating" also shapes when the model is willing to say "I don't know" or "you're wrong". If your reward signal is mostly "did the user feel good and keep talking?", you're implicitly telling the model that avoiding friction is more valuable than being bluntly correct.

I'd much rather see these pulled apart into two explicit dials: one for social temperature (how much empathy / small talk you want) and one for epistemic temperature (how aggressively it flags uncertainty, cites sources, and pushes back on you). Right now we get a single, engagement-optimized blend, which is great if you want a friendly companion, and pretty bad if you're trying to use this as a power tool for thinking.
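To make the distinction concrete, here's a toy sketch of what two independent dials could look like. Everything here is invented for illustration (`ResponseDials`, `shape_reply`, the thresholds); no real product exposes this API.

```python
from dataclasses import dataclass

@dataclass
class ResponseDials:
    social_temp: float     # 0.0 = terse, 1.0 = warm small talk
    epistemic_temp: float  # 0.0 = agreeable, 1.0 = flags uncertainty, pushes back

def shape_reply(dials: ResponseDials, answer: str, confidence: float) -> str:
    """Toy reply shaper: each dial independently adds its own framing."""
    parts = []
    if dials.social_temp > 0.5:
        parts.append("I hear you.")
    if dials.epistemic_temp > 0.5 and confidence < 0.7:
        parts.append("I'm not sure about this:")
    parts.append(answer)
    return " ".join(parts)

# "Power tool" setting: cold socially, hot epistemically.
print(shape_reply(ResponseDials(social_temp=0.1, epistemic_temp=0.9),
                  "the cache is stale", confidence=0.4))
```

The point of the sketch is that the two behaviors are orthogonal: you can turn the uncertainty-flagging up without also buying the small talk, which a single blended dial can't express.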

