“It’s not a recognized acronym, code, or term in any common context I’m aware of” is pretty similar to “I don’t know what that is”. I would assume that a model could be trained to output the latter.
drsim
Right. I’ve had a lot of success using structured output to force LLMs into Boolean choices, e.g. whether or not they can answer at all.
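
A minimal sketch of that pattern, using OpenAI’s structured-outputs API with a Pydantic schema; the model name, prompt wording, and the `ReplyDecision` field are illustrative assumptions, not anything specific from the thread:

```python
from openai import OpenAI
from pydantic import BaseModel


class ReplyDecision(BaseModel):
    # The schema permits only a boolean, so the model can't hedge in prose.
    can_reply: bool


client = OpenAI()

# Ask the model to commit to a yes/no answer; constrained decoding
# guarantees the output parses into the schema above.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # assumed model; any structured-output-capable model works
    messages=[
        {
            "role": "system",
            "content": "Decide whether you can answer the user's question "
                       "from your own knowledge.",
        },
        {"role": "user", "content": "What does the acronym XQZV stand for?"},
    ],
    response_format=ReplyDecision,
)

decision = completion.choices[0].message.parsed
print(decision.can_reply)  # True or False -- never "not in any context I'm aware of"
```

The point is that the schema, not the prompt, does the enforcing: the model physically cannot emit anything except a value that validates as a boolean, so the downstream code gets a clean branch instead of having to string-match evasive phrasings.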