Because the training data does not have it. So we have the label "rock", which intersects with other labels like "hard" and "earth". But the item itself has more attributes that we don't bother assigning labels to; instead, we just experience them. So the label gets linked to some qualia. We can assume there's a collective intersection of qualia that the label "rock" refers to.
LLMs don't have access to these hidden attributes (think of how you would describe "blue" to someone born blind). They may understand that color is a property of objects, or that black is the color you wear to funerals in some places. But ask them to describe the color of a specific object and the output is almost guaranteed to be wrong. Unless the object is at a funeral in one of those places, in which case the model can predict that most people wear black. But that's a guess, not an informed answer.
Why would "membership in a set" not show up as a relationship between the items and the set?
In fact, it's not obvious to me that there's any semantic meaning not contained in the relationship between labels.
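That intuition is basically the distributional hypothesis, and it's cheap to poke at: vectors trained purely on label co-occurrence recover a surprising amount of relational structure. A minimal sketch, assuming the gensim library and its hosted GloVe vectors are available (the specific words are just illustrations):

```python
import gensim.downloader as api

# GloVe vectors are trained only on word co-occurrence statistics,
# i.e. on relationships between labels -- no sensory grounding at all.
model = api.load("glove-wiki-gigaword-50")

# Relational structure falls out of co-occurrence alone:
# the classic analogy king - man + woman ~ queen.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# "Membership in a set" shows up as neighborhood structure: the nearest
# neighbors of a label are whatever the corpus relates it to (for "rock"
# that may be music senses as much as minerals, which is the point --
# the only meaning available here is the relational one).
print(model.most_similar("rock", topn=5))
```

This doesn't settle the qualia question, but it does show that set membership and analogy relations are recoverable from relationships between labels alone.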