The surprising thing is the inter-modality shared variation. I wouldn't have bet against it, but I also wouldn't have guessed it.
I would like to see model-interpretability work on whether these subspace vectors correspond to low-level or high-level abstractions. Are they picking up low-level "edge detectors" that are somehow invariant to modality (and if so, why?), or are they picking up higher-level concepts like distance vs. closeness?
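For concreteness, here is a minimal sketch of one way such a probe could look, with everything hypothetical: the paired embeddings are synthetic stand-ins (in practice you'd use image/text embeddings of the same items from real encoders), the shared subspace comes from plain CCA, and the "high-level attribute" is an invented near/far label. It only illustrates the shape of the analysis, not any particular paper's method.

```python
# Hypothetical sketch: probe whether a cross-modal shared subspace
# direction tracks an abstract concept. All data below is synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_items, d_img, d_txt = 500, 64, 48

# Shared "concept" signal present in both modalities, plus a binary
# high-level attribute (stand-in for something like near vs. far).
concept = rng.normal(size=(n_items, 4))
attribute = (concept[:, 0] > 0).astype(int)

# Fake paired embeddings: each modality mixes the shared concept into
# its own feature space and adds modality-specific noise.
img_emb = concept @ rng.normal(size=(4, d_img)) + 0.5 * rng.normal(size=(n_items, d_img))
txt_emb = concept @ rng.normal(size=(4, d_txt)) + 0.5 * rng.normal(size=(n_items, d_txt))

# Recover a low-dimensional shared subspace with CCA.
cca = CCA(n_components=4)
img_shared, txt_shared = cca.fit_transform(img_emb, txt_emb)

# Sanity check: per-component correlation across modalities.
for k in range(4):
    r = np.corrcoef(img_shared[:, k], txt_shared[:, k])[0, 1]
    print(f"component {k}: cross-modal correlation {r:.2f}")

# Probe: does a single shared direction linearly predict the attribute?
# High accuracy from one component would suggest it encodes an abstract
# concept rather than modality-specific texture.
for k in range(4):
    probe = LogisticRegression().fit(img_shared[:, [k]], attribute)
    print(f"component {k}: attribute probe accuracy "
          f"{probe.score(img_shared[:, [k]], attribute):.2f}")
```

The interesting case in real data would be the opposite pattern: components that correlate strongly across modalities but resist any simple concept probe, which would hint at something lower-level and harder to name.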
The "human" part of that matters. This is all human-made data, collected from human technology, which was created to assist human thinking and experience.
So I wonder if this isn't so much about universals or Platonic ideals, but more that we're starting to see the outlines of the shapes that define, and perhaps constrict, our own minds.