> Does it apply to completely novel tasks? No, that would be magic.
Are there novel tasks? Inside the limits of physics, tasks are finite, and most of them are pointless. One can certainly entertain tasks that transcend physics, but that isn't necessary if one merely wants an immortal and indomitable electronic god.
Within the context of this paper, novel just means anything that’s not a vision transformer.
It does seem to be working for novel tasks.
Here's a very cool analogy from GPT 5.1 that hits the nail on the head in explaining the role of subspaces in learning new tasks, in terms of 3D graphics.