Preferences

I don't buy it either. I've been building my own backend framework for the past 2.5 years, and even though it's a DSL over Python with no documentation online and barely any on my computer, Claude Code understands it given enough usage examples in my codebase.

On the front end as well: for simple projects I've been able to go much further with Alpine than with more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice, but its declarative programming style forces the LLM to write more correct code faster.

In general, I think introspectable frameworks have a better case, and whether they're present in the training data matters less as the introspectability increases. Wiring the Elm compiler to a post-write hook means I basically haven't written front-end code in 4 or 5 months. Using web standards and micro-frameworks with no build step means the LLM can inspect the behaviour through the Chrome DevTools MCP and check its work much more effectively than slogging through the React loop. The ecosystem there is so fragmented that I'm not sure the "quality because of quantity of training data" argument holds.
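
To give an idea of what that hook does, here's a rough sketch of the idea (the payload field names and the blocking exit code are from memory, so treat "tool_input", "file_path", and exit code 2 as assumptions rather than the exact interface):

    #!/usr/bin/env python3
    # Sketch of a post-write hook: when the agent edits an .elm file,
    # run the Elm compiler and, on failure, push the errors back to the agent.
    # The payload fields and the blocking exit code are assumptions, not docs.
    import json
    import subprocess
    import sys

    payload = json.load(sys.stdin)  # hook input arrives as JSON on stdin
    path = payload.get("tool_input", {}).get("file_path", "")

    if path.endswith(".elm"):
        result = subprocess.run(
            ["elm", "make", path, "--output=/dev/null"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Surface the compiler errors so the agent fixes them before moving on.
            print(result.stderr, file=sys.stderr)
            sys.exit(2)

The point is that the compiler, not me, closes the loop: the agent can't walk away from broken Elm code.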


Author here. This is a fair comment. If you already have a corpus that can be used as context, it's not like the LLMs will be forcing you into React; there's probably enough bias (in a good way) to ensure the tool continues to be useful.

What I was trying to get at in the post is that net new experiences are where I see a massive delta.

Yeah, for sure, but I think frameworks will adapt. It's like going back to 2002 and saying it's better to program in Java because of all the IDEs available and all the corporate money being poured into having the best developer experience possible. But since the LSP arrived, developers who choose a smaller language suffer much less.

The 'LSP' that would allow new frameworks or languages to shine with coding agents is already mostly here, in the form of hooks, MCPs, ACP, and so on. They keep the code generation aligned with the final intent and syntactically correct from the get-go, with the help of very advanced compilers/linters that explain to the LLM the context it's missing.

That's without hypothesising about future model upgrades, where fine-tuning becomes simple and cheap, and local, framework-specific models become the norm. Then React's advantage (its presence in the training data) becomes a toll (conflicting versions, fragmented ecosystem).

I also have a huge bias against the JavaScript/TypeScript ecosystem; it gives me headaches. So I could be wrong.

Same experience here. I have a custom toy full-stack framework with zero dependencies.

And LLMs can create idiomatic CRUD pages using it. I just needed to include one example in AGENTS.md.
