They're reliable already if you change the way you approach them. These probabilistic token generators will probably never be "reliable" if you expect them to output exactly what you had in mind 100% of the time, without iterating in user-space (the prompts).
I also think they might never become reliable.
There is a bar below which they are reliable.
"Write a Python script that adds three numbers together".
Is that bar going up? I think it probably is, although not as fast/far as some believe. I also think that "unreliable" can still be "useful".
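For concreteness, what sits under that bar is trivial: essentially every current model will reliably produce some equivalent of the following (a minimal sketch of the expected output, with hypothetical names):

    # One plausible answer to the prompt above.
    def add_three(a: float, b: float, c: float) -> float:
        """Add three numbers together."""
        return a + b + c

    if __name__ == "__main__":
        print(add_three(1, 2, 3))  # 6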
But what does that mean? If you tell the LLM "Say just 'hi' without any extra words or explanations", do you not get "hi" back from it?
That's literally the wrong way to use LLMs though.
LLMs think in tokens: the fewer they emit, the dumber they are. Asking them to be concise, or to give the answer before the explanation, is extremely counterproductive.
I was trying to make a point regarding "reliability", not a point about how to prompt or how to use them for work.
This is relevant. Your example may be simple enough, but for anything more complex, letting the model have space to think/compute is critical to reliability: if you starve it of compute, you'll get more errors/hallucinations.
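The effect is easy to see on a multi-step question. Here's a minimal sketch, assuming the `openai` Python SDK, an OPENAI_API_KEY in the environment, and "gpt-4o-mini" as a stand-in model: the same question is asked once with the model starved of output tokens and once with room to show its work.

    # Minimal sketch: same question, with and without room to "think".
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "What is 17 * 24 + 13 * 31?"  # 408 + 403 = 811

    def ask(system_prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name, swap in your own
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": QUESTION},
            ],
        )
        return resp.choices[0].message.content

    # Starved: the answer must come out in the first few tokens.
    print(ask("Reply with the final number only. No working."))
    # Given room: the model can spend tokens on intermediate steps.
    print(ask("Work through the problem step by step, then state the answer."))

The first variant is the one described as counterproductive above: every intermediate step the model can't emit is a step it can't use.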
Sometimes I get "Hi!", sometimes "Hey!".
Which model? I just tried a bunch: ChatGPT, OpenAI's API, Claude, Anthropic's API, and DeepSeek's API (both chat and reasoner); every single one replied with a single "hi".
o3-mini-2025-01-31 with high reasoning effort replied with "Hi" after 448 reasoning tokens.
gpt-4.5-preview-2025-02-27 replied with "Hi!".
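For anyone who wants to rerun the comparison, here's a sketch under some assumptions: the `openai` and `anthropic` Python SDKs, API keys in the environment, DeepSeek's OpenAI-compatible endpoint, and "claude-3-5-sonnet-latest" as a stand-in Claude alias. The two OpenAI model names are the ones cited above.

    import os
    from openai import OpenAI
    from anthropic import Anthropic

    PROMPT = "Say just 'hi' without any extra words or explanations"

    # OpenAI-compatible providers, including DeepSeek's documented endpoint.
    for label, client, model, extra in [
        ("openai", OpenAI(), "o3-mini-2025-01-31", {"reasoning_effort": "high"}),
        ("openai", OpenAI(), "gpt-4.5-preview-2025-02-27", {}),
        ("deepseek", OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                            base_url="https://api.deepseek.com"),
         "deepseek-chat", {}),
        ("deepseek", OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                            base_url="https://api.deepseek.com"),
         "deepseek-reasoner", {}),
    ]:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            **extra,
        )
        print(label, model, repr(resp.choices[0].message.content))

    # Anthropic's SDK has a different call shape.
    claude = Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # assumed alias for illustration
        max_tokens=16,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print("anthropic", repr(claude.content[0].text))

Printing with repr() makes trailing whitespace, punctuation, and capitalization differences ("hi" vs "Hi!") visible, which is the whole point of the test.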