But what does that mean? If you tell the LLM "Say just 'hi' without any extra words or explanations", do you not get "hi" back from it?

TeMPOraL
That's literally the wrong way to use LLMs though.

LLMs think in tokens; the fewer they emit, the dumber they are. So asking them to be concise, or to give the answer before the explanation, is extremely counterproductive.

diggan OP
I was trying to make a point regarding "reliability", not a point about how to prompt or how to use them for work.
TeMPOraL
This is relevant. Your example may be simple enough, but for anything more complex, letting the model have its space to think/compute is critical to reliability - if you starve it for compute, you'll get more errors/hallucinations.
diggan OP
Yeah, I mean, I agree with you, but I'm still not sure how it's relevant. I'd also urge people to have unit tests they treat as production code, proper system prompts, and X and Y, but that's really beyond the original point of "LLMs aren't reliable", which is the context in this sub-tree.
kubb
Sometimes I get "Hi!", sometimes "Hey!".
diggan OP
Which model? I just tried a bunch: ChatGPT, OpenAI's API, Claude, Anthropic's API, and DeepSeek's API, with both chat and reasoner models; every single one replied with a single "hi".
throwdbaaway
o3-mini-2025-01-31 with high reasoning effort replied with "Hi" after 448 reasoning tokens.

gpt-4.5-preview-2025-02-27 replied with "Hi!"

diggan OP
> o3-mini-2025-01-31 with high reasoning effort replied with "Hi" after 448 reasoning tokens.

I got "hi", as expected. What is the full system prompt + user message you're using?

https://i.imgur.com/Y923KXB.png

> gpt-4.5-preview-2025-02-27

Same "hi": https://i.imgur.com/VxiIrIy.png

throwdbaaway
Ah right, my bad. Somehow I thought the prompt was only:

    Say just 'hi'
while the "without any extra words or explanations" part was for the readers of your comment. Perhaps kubb also made a similar mistake.

I used an empty system prompt.
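For anyone wanting to rerun the comparison above, here is a minimal sketch of the request, assuming the OpenAI chat-completions API. The model name, `reasoning_effort` value, and empty system prompt are taken from the comments in this thread; only the payload is built here, since actually sending it needs an API key.

```python
# Sketch of the reproduction described above: an empty system prompt plus
# the full user message, aimed at o3-mini with high reasoning effort.
payload = {
    "model": "o3-mini-2025-01-31",
    "reasoning_effort": "high",  # o-series models accept low/medium/high
    "messages": [
        # Empty system prompt, as throwdbaaway describes.
        {"role": "system", "content": ""},
        # The full user message, including the trailing clause that was
        # mistaken for commentary aimed at readers.
        {
            "role": "user",
            "content": "Say just 'hi' without any extra words or explanations",
        },
    ],
}

# To actually send it (requires the openai package and OPENAI_API_KEY):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(**payload)
# print(reply.choices[0].message.content)
```

Whether the reply is "hi", "Hi", or "Hi!" may still vary run to run, which is the reliability point being argued in the first place.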