> The fact that all models at present credulously accept their training
Is this true?
So many on HN make absolute statements about how LLMs operate and what they can and can't do that it seems like they fail harder at this test than any other:
"It is just autocomplete."
"They can't generalize."
"They can't do anything not in their training set."
All of which are false.