There are shitloads of ambiguities in most prompts. Most of the problems people have with LLMs come from the implicit assumptions the model makes to fill those gaps.
Phrased differently: telling the model to ask clarifying questions before responding is an extremely easy way to get much better results.
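As a minimal sketch of the idea: a "clarify-first" system instruction prepended to the user's prompt. The wording of the instruction and the helper name are illustrative, not from the original comment; the message format follows the common chat-completion `role`/`content` convention used by most LLM APIs.

```python
# Illustrative "clarify-first" system instruction (wording is an
# assumption, not a quoted best practice).
CLARIFY_FIRST = (
    "Before answering, list any ambiguities or implicit assumptions "
    "in the request as numbered questions. If the user has not "
    "resolved them, either ask the questions or state the assumption "
    "you are making before answering."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with the clarify-first system instruction."""
    return [
        {"role": "system", "content": CLARIFY_FIRST},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Write a script that cleans up the logs.")
```

The resulting `messages` list can be passed to any chat-style completion endpoint; the point is simply that the instruction lives in the system role, so it applies to every turn.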