Suppose you have some interface with fields a and c. If your function takes an object with that interface and operates on the c field, what you want to be able to do is compile that function to access c at "the address pointed to by the pointer to the object, plus 8" (assuming 64-bit fields). Your CPU supports such addressing directly.
Because of structural subtyping, you can't do that. It's not that the member is unrecognized; it's that your caller might pass in an object with fields a, b, and c. This is entirely idiomatic. Now c is at offset 16, not 8. Because the physical layout of the object differs, you no longer have a statically known offset for the known field.
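A minimal TypeScript sketch of the setup (HasAC, readC, and wider are made-up names for illustration):

```typescript
// The interface only declares a and c.
interface HasAC {
  a: number;
  c: number;
}

// A static compiler would like to lower this to: load [obj + 8].
function readC(obj: HasAC): number {
  return obj.c;
}

// Entirely idiomatic: the extra field b is fine under structural
// subtyping, but in a declaration-order layout c now sits at
// offset 16, not 8.
const wider = { a: 1, b: 2, c: 3 };
readC(wider);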
To get around this, you can unify the types: take every interface used to access the object and create a new type that has all of the members of both. You can then either create vtables or monomorphize at the call sites (sketched below).
At any point where the analysis cannot determine the actual underlying shape, you drop to the default any.
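Roughly what that unification looks like, using TypeScript's intersection type as a stand-in for the compiler-internal unified shape (HasAC, HasBC, and Unified are hypothetical names):

```typescript
interface HasAC { a: number; c: number; }
interface HasBC { b: number; c: number; }

// The unified shape: every member from every interface the object
// flows through, so each field gets one fixed slot again.
type Unified = HasAC & HasBC; // effectively { a, b, c }

// With the unified layout, obj.c is back to a single known offset
// for this shape; call sites can be monomorphized per unified type.
function readC(obj: Unified): number {
  return obj.c;
}

readC({ a: 1, b: 2, c: 3 });
```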
In practice V8 does exactly what you're saying can't be done, virtually all the time, for any hot function. What you mean to say is that TypeScript type declarations alone don't give you enough information to safely do it during a static compile step. But modern JS engines, which track object maps and dynamically recompile, do what you described.
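A rough sketch of what a shape-tracking engine sees at a hot call site; this describes V8's hidden classes and inline caches, heavily simplified, and hotReadC is a made-up name:

```typescript
function hotReadC(obj: { c: number }): number {
  return obj.c;
}

const shapeAC = { a: 1, c: 2 };
const shapeABC = { a: 1, b: 2, c: 3 };

// Warm-up with one shape: the inline cache at obj.c goes monomorphic,
// and the optimized code is a map check plus a fixed-offset load.
for (let i = 0; i < 1_000_000; i++) {
  hotReadC(shapeAC);
}

// A second shape at the same call site makes the cache polymorphic;
// the engine patches or recompiles rather than giving up.
hotReadC(shapeABC);
```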
> This gives you worse-than-JIT performance on all field access, since JITs can perform dynamic shape analysis.
We're talking about using types to guide static compilation. Dynamic recompilation is moot.
A compiler might be able to wring some things out of it (I'm skeptical about obviouslynotme's suggestions in a cousin comment, but they seem insistent), or suppress some checks if you're happy with a segfault when someone does a cast... but it's just not a type system like, say, C's, which is more rigid and thus gives the compiler more to work with.
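For example, a sketch of the kind of cast that makes suppressing checks dangerous (HasC and lied are hypothetical names):

```typescript
interface HasC { c: number; }

// The cast defeats the type system: there is no c here at all.
const lied = { a: 1 } as unknown as HasC;

// At runtime this is safely undefined, because the engine checks the
// shape; a static compiler that trusted the declared type and emitted
// a raw fixed-offset load would read garbage or fault instead.
const value = lied.c;
```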