I'm seeing the same behaviour. It's as if they have a post-processor that evaluates the quality of the response after a certain number of tokens have been generated, and reverts the response if it's below a threshold.
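Something along these lines, purely as a guess at the flow being described; streamModel, scoreResponse, and the threshold below are hypothetical stand-ins for illustration, not anything Google has documented:

    // Sketch of the speculated stream-then-check-then-revert flow.
    // Every name here is a hypothetical stand-in, for illustration only.
    const QUALITY_THRESHOLD = 0.5;

    async function* streamModel(prompt: string): AsyncGenerator<string> {
      // stand-in for the real token stream
      for (const tok of ["Sure, ", "try ", "this ", "bash ", "one-liner..."]) {
        yield tok;
      }
    }

    async function scoreResponse(text: string): Promise<number> {
      // stand-in for a post-hoc quality/safety classifier
      return 0.3;
    }

    async function respond(
      prompt: string,
      ui: { append(t: string): void; clear(): void }
    ): Promise<void> {
      let full = "";
      // Tokens are shown as they arrive, so the answer is visible immediately...
      for await (const tok of streamModel(prompt)) {
        full += tok;
        ui.append(tok);
      }
      // ...but the check only runs once the response is complete, which is why
      // an answer the user has already read can be yanked and replaced.
      if ((await scoreResponse(full)) < QUALITY_THRESHOLD) {
        ui.clear();
        ui.append("I'm only a language model, I don't know how to do that.");
      }
    }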
I've noticed Gemini exhibiting similar behaviour. It will start to answer a programming question, for example, only to delete the answer and replace it with something along the lines of "I'm only a language model, I don't know how to do that."
This seems like a bizarre way to handle it. Unless there's some level of malicious compliance, I don't see why they wouldn't just hide the output until the filtering step has completed. Maybe they're extremely concerned with appearing responsive in the average case.
Would not be surprised if there were browser extensions/userscripts to keep a copy of the text when it gets deleted and mark it as such.
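A minimal userscript sketch of that idea; the transcript selector and the size cutoff below are assumptions about Gemini's page that I haven't verified:

    // Keep a copy of any large block of text that is removed from the chat transcript.
    // TRANSCRIPT_SELECTOR is a hypothetical placeholder; the real container would need
    // to be found by inspecting Gemini's DOM.
    const TRANSCRIPT_SELECTOR = "main";
    const keptCopies: string[] = [];

    const observer = new MutationObserver((mutations) => {
      for (const m of mutations) {
        for (const node of Array.from(m.removedNodes)) {
          const text = node.textContent?.trim() ?? "";
          // Ignore small UI churn; only keep substantial chunks of text.
          if (text.length > 200) {
            keptCopies.push(text);
            console.log("[kept deleted response]", text.slice(0, 120));
          }
        }
      }
    });

    const root = document.querySelector(TRANSCRIPT_SELECTOR);
    if (root) {
      observer.observe(root, { childList: true, subtree: true });
    }

Marking the recovered text as deleted in the page would take a bit more DOM work, but keeping the copy is the important part.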
They have both pre- and post-LLM filters.
The linked article mentions these safeguards as the post-processing step.
I've seen the exact same thing! Gemini put together an impressive bash one-liner, then deleted it.
Always very frustrating when it happens.
It might be copyright-related rather than quality-related. What if X% of it is a direct ripoff of an existing song?