What I was trying to get at in the post is that net-new experiences are where I see a massive delta.
The 'LSP' that would let new frameworks or languages shine with coding agents is already mostly here: it's things like hooks, MCPs, ACP, and so on. They keep the code generation aligned with the final intent, and syntactically correct from the get-go, with the help of very capable compilers/linters that explain to the LLM the context it's missing.
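To make that concrete, here's a rough sketch of the "linter explains the missing context" idea as an MCP tool, assuming the official MCP Python SDK's FastMCP helper; the server name, tool name, and the `make check` command are placeholders I made up, not anything from a specific project.

```python
# project_checks.py -- sketch of an MCP server that exposes the project's own
# compiler/linter as a tool the coding agent can call. Assumes the official MCP
# Python SDK (pip install "mcp[cli]"); names and the check command are placeholders.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-checks")


@mcp.tool()
def typecheck(path: str = ".") -> str:
    """Run the project's checker and return its diagnostics verbatim."""
    result = subprocess.run(
        ["make", "check"],  # placeholder: whatever check command the project defines
        cwd=path,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return "OK: no diagnostics."
    # The compiler's own error text is the "missing context" handed back to the LLM.
    return result.stdout + result.stderr


if __name__ == "__main__":
    mcp.run()
```

The point isn't the specific wiring; it's that the agent gets the real compiler's explanation instead of guessing from training data.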
That's without hypothesising about future model upgrades where fine-tuning becomes simple and cheap, and local, framework-specific models become the norm. At that point, React's advantage (its presence in the training data) becomes a toll (conflicting versions, a fragmented ecosystem).
I also have a huge bias against the JavaScript/TypeScript ecosystem; it gives me headaches. So I could be wrong.
And LLMs can create idiomatic CRUD pages using it. I just needed to include one example in AGENTS.md.
The same goes for front-end: for simple projects I've been able to go much farther with Alpine than with more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice, but its declarative programming style forces the LLM to write more correct code faster.
In general, I think introspectable frameworks have the better case, and whether or not they're present in the training data matters less as the introspectability increases. Wiring the Elm compiler to a post-write hook means I have basically not written front-end code by hand in 4 or 5 months. Using web standards and micro-frameworks with no build step means the LLM can inspect the behaviour through the Chrome DevTools MCP and check its own work far more effectively than having to deal with the React loop. That ecosystem is so fragmented that I'm not sure about the "quality because of quantity of training data" argument.
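For what it's worth, the post-write hook is nothing fancy. A minimal sketch, assuming the hook receives the file path as an argument and that `src/Main.elm` is the entry point (both of those are assumptions; the wiring is agent-specific):

```python
# post_write_hook.py -- sketch of a post-write hook that runs the Elm compiler and
# surfaces its errors to the agent. How the hook is registered and how the path is
# passed in are agent-specific; argv and the entry point below are assumptions.
import subprocess
import sys


def check(entry: str = "src/Main.elm") -> int:
    # `elm make --output=/dev/null` type-checks the project without emitting JS.
    result = subprocess.run(
        ["elm", "make", entry, "--output=/dev/null"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Print the compiler's explanation so the agent sees it and fixes the code.
        print(result.stderr, file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(check(sys.argv[1] if len(sys.argv) > 1 else "src/Main.elm"))
```

Because Elm's compiler errors are written as explanations rather than terse codes, feeding them straight back is usually enough for the model to converge on correct code without me touching it.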