The argument here is that React has permanently won because LLMs are so heavily trained on it and default to it in their answers.
I don't buy this. The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Compilation and bundling made a lot more sense before browsers got ES modules and HTTP/2. Today you can get a long way without a bundler... and in a world where LLMs are generating code that's actually a more productive way to work.
Telling any LLM "use Vanilla JS" is enough to break them out of the React cycle, and the resulting code works well and, crucially, doesn't require a round-trip through some node.js build mechanism just to start using it.
Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way. The problems React solves are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
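Those state-syncing routines really are small. A minimal sketch of the sort of thing an LLM can generate on demand (all names here are illustrative, not any particular library's API):

```javascript
// Minimal observable store: the kind of state-syncing helper you'd
// otherwise reach for a framework to get.
function createStore(initialState) {
  let state = { ...initialState };
  const listeners = new Set();
  return {
    get: () => state,
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state)); // notify every subscriber
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// Usage: keep a DOM node (or anything else) in sync with the state.
const store = createStore({ count: 0 });
let rendered = "";
store.subscribe((s) => { rendered = `Count: ${s.count}`; });
store.set({ count: 1 });
```

In a real page the subscriber would write to `element.textContent` instead of a string, but the shape is the same.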
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Having a build step more than pays for itself just in terms of detecting errors without having to execute that codepath. The friction is becoming less and less as the compilation step is increasingly built into your project/dependency-management tool and increasingly fast (helped by the trend towards Rust or Go, now that the required functionality is relatively well understood).
> The problems React solves are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
An LLM can probably generate the ad hoc, informally-specified, bug-ridden, slow implementation of half of React that every non-React application needs very quickly, sure. But can the LLM help you comprehend it (or fix bugs in it) any faster? That's always been the biggest cost, not the initial write.
The problem with React apologetics is that you only need to take a cursory look at literally every production app written in React to see it's terrible and must be abandoned in the long term.
To see how fast a properly engineered app can be if it avoids using shitty js frameworks just look at fastmail. The comparison with gmail is almost comical: every UI element responds immediately, where gmail renders at 5 fps.
Well yeah, most software is bad. In fact it's so bad that it's almost unbelievable.
We're all used to it and that's fine. But it's still bad. We're still wasting, like, 10,000x more resources than we should to do basic things, and stuff still only works, like, 50% of the time.
GMail is becoming the Lotus Notes of the 21st century. It uses half a gigabyte of RAM for every tab. God forbid you need to handle several accounts, e.g. for monitoring DMARC reports across domains.
And IT IS SLOW, despite your experience, which is highly dependent on how much hardware you can throw at it.
> [most used web framework, powering innumerable successful businesses]
> [literally unusable]
It's gotten a lot of critique over the years for the complexity it has accumulated, the same way Next.js has. I've also seen a frickload of render loops, and in some cases I think Vue just does hooks better (Composition API) and state management better (Pinia, closer to MobX than Redux). Meanwhile their SFC compiler doesn't seem to support TypeScript types properly, so if you try to do extends and need to create wrapper components around non-trivial libraries (e.g. PrimeVue), you're in for a bunch of pain.
I don't think any mainstream options are literally unusable, but they all kinda suck in subtly different ways. Then again, so did jQuery for anything non-trivial. And also most back end options also kind of suck, just in different ways (e.g. Spring Boot version upgrades across major versions and how verbose the configuration is, the performance of Python and the dependency management at least before uv), same could be said for DBs (PostgreSQL is pretty decent, MariaDB/MySQL has its hard edges) and pretty much everything else.
Doesn't mean that you can't critique what's bad in hopes of things maybe improving a bit (that Spring Boot config is still better than Spring XML config). GMail is mostly okay as is, then again the standards for GUI software are so low they're on the floor - also extends to Electron apps.
My friend, it renders at 15 fps on a literal supercomputer. It takes 30 seconds to load. The time between clicking a button and something happening is measured in seconds. It may be successful, but it is not good.
The problem is that you’ve (and we all have) learned to accept absolute garbage. It’s clearly possible to do better, because smaller companies have managed to build well functioning software that exceeds the performance of Google’s slop by a factor of 50.
I’m not saying RETVRN to plain JS, but clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.
Enshittification. I've been using Gmail for decades and it was significantly faster and more responsive in the past. It still works fine tbh, but it did work better. Whether or not something is successful has little to do with its quality or performance these days.
There was also a time where once a website or application loaded, scrolling never lagged. Now when something scrolls smoothly it's unusual, and I appreciate it. Discord has done a really good job improving their laggy scroll, but it's still unbelievably laggy for literal text and images, and they use animation tricks to cover up some of the lag.
_I'm gonna narrow in on the bit about compilation steps_.
Anyone shipping production code will one way or another have some kind of build step, whether that's bundling, minification, typechecking, linting, fingerprinting files, etc. At that point it makes little difference if you add a build step for compilation.
I'm sympathetic to not wanting to deal with build processes; I try to avoid them where I can in my side projects. The main web project I've been working on for the last year has no build step, and uses Vanilla JS and web components. But it's also not a consumer-facing product.
I think there's friction for sure, but I just can't see this being an issue for most cases where a build step is already in place for other concerns. And developers are fairly familiar with build steps, especially if you do anything outside the web in C/C++ or Java/C# or Rust or whatever.
For release but not for development.
All it takes is for the build step to take a long time and you start to notice the friction.
The web/browser should not rely on bundlers and compilation steps overall. This should remain optional.
Hot-reloads in a modern bundler like Vite will typically be instantaneous. Normally in development, only dependencies are bundled, and the files you write are served as-is (potentially with a per-file compilation step for e.g. JSX or TypeScript). That means that when you save a file, the bundler will run the compiler over that single file, then notify the hot-reload component in the browser to re-fetch it. That would be quick even if it were done in JavaScript, but increasingly bundlers use parts written in Go or Rust to ensure that builds happen ever more quickly.
If you've got a huge project, even very quick bundlers will end up slowing down considerably (although hot reload should still be pretty quick because it still just affects individual files). But in general, bundlers are pretty damn quick these days, and getting even quicker. And of course, they're still fully optional, even for a framework like React.
Not really optional for React since it relies so heavily on JSX...
You can write React without it, but then is it React? What about the libraries you may want to import, or code that an LLM will generate for you?
There should be better.
There is an extra thing that the people complaining about the compilation step in React are missing: in C++, for example, if you find an issue, you have to fix the issue, rebuild the thing, then run the thing and *do all the steps required to get your state to duplicate the issue*, just to check you fixed it. With React and the other JS-inspired frameworks and adjacent tooling, you just have to save the file.
With a bundler like Vite or TSX (not the same as the .tsx file extension), you really do just save the file, and everything reloads, usually instantaneously. That said, TS is now supported by default in NodeJS, Deno, and Bun, so if you're doing server-side stuff, you probably don't need a bundler at all, at least for development.
Exactly, and I wouldn't miss a chance to give React some crap; when I was learning Java or Swift, the compilation times seemed horrendous. Web developers have it very good with fast incremental compilation, hot reload, etc.
I don't buy it either. The reality is that the people who do hiring don't understand the problems they are working on and which tech stack is appropriate. They might not understand or even like React, but they are going to pick it because they know that they can hire other people who understand it. We will end up with lots of projects in 5-10 years where people will ask "why the hell did you use React for this?" ...actually that's the reality now!
I also think the pitfall here is the base assumption that developers are letting the LLMs make architecture decisions, either by not addressing the issue at all and just prompting for end results, or by not making the choice before asking the LLM.
E.g., if most developers are telling their LLMs "build me a React app" or "I want to build a website with the most popular framework," they were going to end up with a React app with or without LLMs existing.
I’m sure a lot of vibecoders are letting Jesus take the wheel, but in my vibecoding sessions I definitely tend to have some kind of discussion about my needs and requirements before choosing a framework. I’m also seeing more developers talking about using LLMs with instructions files and project requirement documents that they write and store in their repo before getting started with prompting, and once you discover that paradigm you don’t tend to go back.
Yup. The central argument seems to include an assumption that LLMs will be the same tomorrow as today.
I'd note that people learn and accumulate knowledge as new languages and frameworks develop, despite there being established practices. There is a momentum for sure, but it doesn't preclude development of new things.
Not quite. The central argument is that LLMs tomorrow will be based on what LLMs output today. If more and more people are vibe-coding their websites, and vibe-coding predominantly yields React apps, then the training data will have an ever larger share of React in it, thus making tomorrow's LLMs even more likely to produce React apps.
I share your optimism. Once you move up a conceptual layer (from writing code to guiding an LLM), the lower level almost becomes interchangeable. You can even ask the LLM to translate from one language/framework to another.
While I tend to agree, I think there's still an undercurrent of React-like paradigms being strongly preferenced in the training data so assuming LLMs continue to get much better, if you were to build a simple UI toolkit with an LLM, there's a strong chance that over time with accretion you will end up remaking React or any one other framework unless you're particularly opinionated about direction.
I think that while it may be easier to develop with LLMs in languages and frameworks the LLM may “know” best, in theory, models could be trained to code well in any language and could even promote languages that either the sponsoring company or LLM “prefers”.
yea, and models now are so good the difference between writing React or Svelte code is moot. Maybe 2 years ago choosing React just because an LLM would be better at it would make sense, but not today.
(For the AI-sceptics, you can read this as models are equally bad at all code)
Fwiw - I'm hoping it can break out too. But one of the biggest challenges is that last bit, "asking it to use vanilla JS". I see this all the time in developer relations: getting developers to ask for a specific thing, or even to have it on their mind to think about using it, is one of the biggest hurdles.
> Frameworks are abstractions over a platform designed for people and teams to accelerate their teams' new work and maintenance while improving the consistency and quality of the projects. [...] I was just left wondering if there will be a need for frameworks in the future? Do the architecture patterns we've learnt over the years matter? Will new patterns for software architecture appear that favour LLM management?
Are you saying that frameworks might become less important because LLMs can just generate boilerplate code instead? Or do I misunderstand? Personally, if the vibe-engineering future that some executives are trying to foist on us means that I'll be reading more code than I write directly, then I want that code to be _doubly_ succinct.
Maybe in a distant future, but why are we so obsessed with the anti-framework sentiment? We don't shy away from a framework when coding in Node, PHP, Java…
Is there something about the web — with its eternal backwards compatibility, crazy array of implementations, and 3 programming languages — that seems like it's the ideal platform for a framework-free existence?
Maybe if we bake all of the ideas into JavaScript itself, but then where does it stop? Is PHP done evolving? Does Java, by itself, do everything as well as you want out of Spring?
The direct semantics of JSX are "transform this syntax into this nested sequence of function calls and this layout of arguments". That's been the case since nearly the beginning. The only real React-specific choices show up in the compiler options in Babel and TypeScript: what the function is named and how you import it. Enough libraries that aren't React use JSX that it is easy to see what the generic approach looks like, and to find ideas for runtime configuration of the "jsx function name" and an import strategy that isn't just "import these hardcoded names from these hardcoded React modules".
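To make that concrete, here's a rough sketch of the desugaring, assuming a hyperscript-style factory named `h` (the configurable function those compiler options control; React's default is `createElement`):

```javascript
// A minimal element factory: the compiler turns JSX into calls to this.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

const text = "hello";
// What a compiler would emit for: <div id="x"><span>{text}</span></div>
const tree = h("div", { id: "x" }, h("span", null, text));
```

Nothing about that output is React-specific; any library can consume the resulting tree.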
> The direct semantics of JSX are "transform this syntax into this nested sequence of function calls and this layout of arguments".
Not exclusively. SolidJS, for example, transforms the syntax into string templates with holes in them. The "each element is a function call" approach works really well if those calls are cheap (i.e. with a VDOM), but if you're generating DOM nodes, you typically want to group all your calls together and pass the result to the browser as a string and let it figure out how to parse it.
For example, if you've got some JSX like:
<div>
  <div>
    <span>{text}</span>
  </div>
</div>
You don't want that to become nested calls to some wrapper around `document.createElement`, because that's slow. What you want is to instead do something like
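(a sketch of the idea, illustrative only, not SolidJS's actual compiled output: build the static markup once as a string with holes, then hand it to the parser in one shot)

```javascript
// Compile the whole JSX subtree to a single string template with a hole:
const makeTemplate = (text) =>
  `<div><div><span>${text}</span></div></div>`;

// In the browser you'd hand this to the native parser in one go, e.g.:
//   const tpl = document.createElement("template");
//   tpl.innerHTML = makeTemplate("hello");
//   const node = tpl.content.firstChild.cloneNode(true);
const html = makeTemplate("hello");
```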
This lets the browser do more of the hard parsing and DOM-construction work in native code, and makes everything a lot more efficient. And it isn't possible if JSX is defined to only have the semantics that it has in React.
The library [0] I wrote that uses JSX converts expression attributes into parameter-less lambdas before providing them as function parameters or object properties. This is different behavior from React's build tools or any of TypeScript's JSX options. But it's not inconsistent with the spec.
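Roughly, as I read the described behavior (a sketch with made-up names, not the library's actual output):

```javascript
// An expression attribute like <Counter text={count + 1} /> compiles so
// that the expression becomes a parameter-less lambda, evaluated lazily
// by whoever receives the prop, not eagerly at element-creation time.
let count = 41;
const props = { text: () => count + 1 }; // sketch of compiler output

const first = props.text();  // evaluated now: sees count = 41
count = 99;
const second = props.text(); // re-evaluated: sees fresh state
```

The upshot is that the consumer controls when (and how often) each expression runs.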
More libraries than React work just fine with the existing Babel and/or TypeScript JSX options. snabbdom is a big one that comes to mind that isn't React/Preact, but there are plenty more.
The space that the Babel/TypeScript JSX options describe is a constructive space for more than just React.
JSX is not really needed. We have templates. Besides, it really is a DSL with a weird syntax.
I'm doubtful it will ever become an ES standard. And for good reasons.
That should be left to the different frameworks to handle.
If you use them raw, yes. They are just the building block you can build upon.
And that's a really good building block. You can create your own parsers. I am doing exactly this for a framework that has yet to be released, full disclosure.
Makes HTML clearly HTML, and JavaScript fully JavaScript. No bastardization of either one into a chimera.
And the junction of the two is why the custom parser is required. But it is really light from a dev experience.
What about the value of abstraction to readability and maintainability? Do you really want to be stuck debugging/upgrading and generally working with such low-level vanilla JS code when elegant abstractions are so much more efficient?
Abstraction for its own sake, especially with js frameworks, doesn't make anything more readable or maintainable. React apps are some of the most spaghetti style software I've ever seen, and it takes like 10 steps to find the code actually implementing business logic.
Some of that is the coding standards rather than the framework. I think Dan Abramov did a bang-up job on React, but his naming conventions and file structure are deranged.
Unfortunately there isn't any one preferred alternative convention. But if you ignore his and roll your own it will almost certainly be better. Not great for reading other people's code but you can make your own files pretty clear.
What "naming conventions and file structures" are you referring to? I don't think Dan ever really popularized anything like that for _React_.
If you're thinking of _Redux_, are you referring to the early conventions of "folder-by-type" file structures? ie `actions/todos.js`, `reducers/todos.js`, `constants/todos.js`? If so, there's perfectly understandable reasons why we ended up there:
- as programmers we try to "keep code of different kinds in different files", so you'd separate action creator definitions from reducer logic
- but we want to have consistency and avoid accidental typos, especially in untyped plain JS, so you'd extract the string constants like `const ADD_TODO = "ADD_TODO"` into their own file for reuse in both places
To be clear that was never a requirement for using Redux, although the docs did show that pattern. We eventually concluded that the "folder-by-feature" approach was better:
which is what we later turned into "Redux slices", a single file with a `createSlice` call that has your reducer logic and generates the action creators for you:
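For readers who haven't seen it, a toy sketch of the idea (the real `createSlice` lives in Redux Toolkit and also wires in Immer for "mutating" reducer syntax; this just shows the shape):

```javascript
// Toy reimplementation of the slice idea: one call generates the action
// types, action creators, and reducer from a single reducers map.
function createSlice({ name, initialState, reducers }) {
  const actions = {};
  for (const key of Object.keys(reducers)) {
    const type = `${name}/${key}`; // string constants generated for you
    actions[key] = (payload) => ({ type, payload });
    actions[key].type = type;
  }
  const reducer = (state = initialState, action) => {
    const [sliceName, key] = action.type.split("/");
    return sliceName === name && reducers[key]
      ? reducers[key](state, action)
      : state;
  };
  return { name, actions, reducer };
}

// One file holds the logic; no hand-maintained ADD_TODO constants.
const todos = createSlice({
  name: "todos",
  initialState: [],
  reducers: {
    todoAdded: (state, action) => [...state, action.payload],
  },
});

const next = todos.reducer([], todos.actions.todoAdded("write docs"));
```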
Do they do this notably worse than say a Spring boot API or a Vue frontend? I don't think this is a React thing. Those spaghetti projects would be so with or without React.
I've been leaning more on web components as an abstraction here, once an LLM can take care of their boilerplate they're a pretty nice way to modularize frontend code.
> The argument here is that React has permanently won because LLMs are so heavily trained on it and default to it in their answers.
I can't find the author making that argument. Can you point to where they're declaring that React has permanently won?
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
This is orthogonal to what the article is addressing.
> Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way
If you didn't read the article, I think you should, because this is generally the conclusion the author comes to: that in order to break out of React's grip, LLMs can be trained to use other frameworks.
> If the industry continues its current focus on maintainability and developer experience, we’ll end up in a world where the web is built by LLMs using React and a handful of libraries entrenched in the training data. Framework innovation stagnates. Platform innovation focuses elsewhere. React becomes infrastructure—invisible and unchangeable.
So I guess I'm in agreement with the author: let's actively work to make that not happen.
I think a more interesting (and significant) question is whether there can ever be a new programming language.
Like, if you really believe that in the future 95% of code will be written by LLMs, then there can never be a Python 4, because there would be no humans to create new training data.
To me, this is evidence that LLMs won't be writing 95% of code, unless we really do get to some sort of mythical "AGI" where the AI can learn entirely from its own output and improve itself exponentially. (In which case there still wouldn't be a Python 4; it would be some indecipherable LLM-speak.) I'll believe that when I see it.
My hunch is that existing LLMs make it easier to build a new programming language in a way that captures new developers.
Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
For many languages you can fit all of that, including illustrative examples, in a few thousand tokens of text.
So ship your new programming language with a Claude Skills style document and give your early adopters the ability to write it with LLMs. The LLMs should handle that very well, especially if they get to run an agentic loop against a compiler or even a linter that you provide.
When LLMs write and maintain code, does the programming language they use even matter? Anyway, the inputs to LLMs are all in natural language, and what we get is what we wanted built.
Is it better to specify the parameters and metrics (aka non-functional requirements) that matter for the application, and let LLMs decide? For that matter, why even provide that? Aren't the non-functional requirements generally understood?
It is the specifics that would change: scale to 100K monthly users, keep infrastructure costs below $800K, or integrate with existing Stripe APIs.
> Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
I think that’s correct in terms of the surface-level details but less true for the more abstract concepts.
If you’ve tried any of the popular AI builders that use Supabase/PostgREST as a backend, for instance Lovable, you’ll see that they are constantly failing because of how unusual PostgREST is. I’m sure these platforms have “AI cheat sheets” to try to solve this, but you still see constant problems with things like RLS, for instance.
Is that not pseudocode? It doesn't have to be something with strict syntax or very limited keywords, but maybe the compiler/linter (the LLM) could point out when you are being ambiguous, or not defining how something should be done when several alternatives are possible.
A blog named "AI Focus" is of course going to push LLMs and vibe coding. But here in the real world, people can still code without LLMs, or use them with a human in control, where the LLM can look at existing code written in a framework that is not React.
Also, React was extremely popular before any LLMs were out there. I would not ascribe much of the growth to vibe coding.
Just to push back on this a tad: yes, there's growth in React, it's popular, but the growth was consistent up until the introduction of some of the more popular code generation tools, where there's a clear acceleration (if you believe builtwith.com data) in the last 9 months or so.
As LLMs improve, it matters less what they are trained on and more what they understand. I've used codex on some very obscure code bases and frameworks. It's fine. It understands them. It broadly does the right things. It can understand from examples in your code how to use things. To give you one example, I'm using an obscure framework called fritz2 with kotlin-js. Kotlin-js is not that widely used. And I'm probably one of a handful of active users of this Fritz2 framework in the world. There isn't a whole lot of code to train on. And what little there is is probably a bit outdated.
It's fine. I've been using codex on some code bases with this with pretty alright results. I also use codex to generate typescript/react code. I'm getting similar results. I had a little wow moment when I asked it to add some buttons and then afterwards realized that it had figured out the localization framework (one of my creations) and added translations for the button labels. All unprompted. It clearly understood the code base and how I like things done. So it just went ahead and did them. The syntax is not a problem. The obscurity of the library is not a problem as long as you give it enough to work with. It does less well coding something from scratch than working on existing code.
IMHO, things like React are optimized for humans. They aren't actually that optimal for LLMs to work with; it's actually impressive that they can. Too much expressiveness and ambiguity. LLMs like things spelled out. Humans don't. We're still doing things manually, so it helps if we can read and edit what the LLMs do. But that won't stay like that.
I think in a few years, we'll start seeing languages and frameworks that are more optimal for Agentic coding tools as they will be the main users. So, stronger typing. More verbosity and less ambiguity.
I don't buy it either. I've been building my own backend framework for the past 2.5 years, and even though it's a DSL over Python and there's no documentation online and barely one in my computer, Claude Code understands it with enough usage examples in my codebase.
In front-end as well—I've been able to go much farther for simple projects using alpine than more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice but it provides a declarative programming style that forces the LLM to write more correct code faster.
In general, I think introspectible frameworks have a better case, and whether they're present in training data or not becomes more irrelevant as the introspectibility increases. Wiring the Elm compiler to a post-write hook means I basically have not written front-end code in 4 or 5 months. Using web standards and micro frameworks with no build step means the LLM can inspect the behaviour using the chrome dev tools MCP and check its work much more effectively than having to deal with the React loop. The ecosystem is so fragmented there, I'm not sure about the "quality because of quantity of training data" argument.
Author here. This is a fair comment. If you have a corpus that can be used as context already it's not like the LLMs will be forcing you in to React, there's probably enough bias (in a good way) to ensure the tool continues to be useful.
What I was trying to get at in the post is that net-new experiences are where I see a massive delta.
Yeah for sure but I think frameworks will adapt. It's like going back to 2002 and saying that it's better to program in Java because of all the IDEs available and all the corporate money being poured into having the best developer experience there can be. But since LSP arrived, developers choosing a smaller language suffer much less.
The 'LSP' that would allow new frameworks or languages to shine with coding agents is already mostly here, and it's things like hooks, MCPs, ACP, etc. They keep the code generation aligned with the final intent, and syntactically correct from the get go, with the help of very advanced compilers/linters that explain to the LLM the context it's missing.
That's without hypothesising on future model upgrades where fine-tuning becomes simple and cheap, local, framework-specific models become the norm. Then, React's advantage (its presence in the training data) becomes a toll (conflicting versions, fragmented ecosystem).
I also have a huge bias against the javascript/typescript ecosystem, it gives me headaches. So I could be wrong.
On the plus side, maybe this means the endless churn of JS libraries will finally slow down and as someone who isn’t a JS developer but occasionally needs to dip their toe into the ecosystem, I can actually get stuff done without having to worry about 6-month old tutorials being wrong and getting caught in endless upgrade hell.
For what it’s worth - vanilla JS is pretty darn good and if you’re only dipping in for some small functionality I highly doubt a framework brings much benefit.
I find vanilla JS unusable for anything bigger, though. It was designed for quickie scripts on throwaway web pages, but it's not great for anything you'd call a web app.
Typescript, however, does scale pretty well. But now you've added a compiler and bundler, and might as well use some framework.
Has this actually been true, though? I admit I don’t write JavaScript much recently, but to me it feels like things have pretty stabilized. React released hooks in early 2019 before Covid, and after that things don’t really change much at all.
At this point there are several large Rust UI libraries that try to replicate this pattern in web assembly, and they all had enough time to appear and mature without the underlying JSX+hooks model becoming outdated. To me it’s a clear sign that JS world slowed down.
> React released hooks in early 2019 before Covid, and after that things don’t really change much at all.
Server-side components became a thing, as well as the React compiler. And libraries in the React (and JS at large) ecosystem are pretty liberal with breaking changes, a few months is enough to have multiple libraries that are out-of-date and whose upgrade require handling different breaking changes.
React Native is its own pit of hell.
It did slow down a little since a few years ago, but it's still not great.
Yes. When I dipped my toes into the front end ecosystem in 2021 to build a portfolio site, the month old tutorial video I followed, was already out of date. React had released an update to routers and I could not find any documentation on it. Googling for the router brought me to pages that said to do what I had done, which disagreed with the error message that I was getting from react.
React had just updated and documentation hadn’t.
I then discovered that Meta owns React, so I got frustrated as hell with their obfuscation and ripped out all of the React and turned what was left into vanilla HTML+JS.
If this gets me out of the "This framework that almost everyone uses and is easy to hire for and that works well for a lot of people is literally unusable compared to this new hot framework I fell in love with recently! We need to rebuild everything!"-discussion, I'm fine with it
I try to filter out such people in hiring nowadays but sometimes you miss, or come into an existing team with these issues
I don't buy it, I've used LLMs (well, mostly sonnet 4.5 and sometimes gpt5) in a variety of front-end frameworks (react, vue, htmx) and they do just fine. As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
> As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
I think you and others in this thread have either just skimmed the article or just read the headline. The point isn't that you can't use LLMs for other languages; it's that the creators of these tools AREN'T using other languages for them. Yes, LLMs can write Angular. But if there's less data to train on, the results won't be as good. And because of this, it's creating a snowball effect.
Not your parent commenter but their point was clear to me.
To me, they don't buy the argument that the snowball effect is significant enough to overcome technical merits of different frontend frameworks.
And I'll add that older libraries like React have at least one disadvantage: there's a lot of outdated React code out there that AI is being trained on.
> I wonder if React has something to keep AI on their toes about best practices.
Ahh, I wouldn't hold my breath.
And to your point, I guess another thing Svelte has is its compatibility with just vanilla JS, meaning (I think) it doesn't necessarily have to be "Svelte" code to still work with Svelte.
I don't buy the premise - that LLMs being trained on more React code than other frameworks is going to cause the collapse of alternatives. The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game, and besides LLM coding is the worst it's ever going to be. Hypothetically, even if the data was convincing (showing a massively increasing relative share of React usage since LLMs entered the scene), I don't think it's sensible to extrapolate from current trends about LLM coding anyway. This stuff is barely a few years old and we want to make confident predictions about it?
> I don't buy the premise - that LLMs being trained on more React code than other frameworks is going to cause the collapse of alternatives
But if fewer people are exposed to those frameworks, then surely that means they will be less popular? I'm struggling to understand your argument.
> The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game
Of course it is. If I'm using React to build a site, I'm not using Svelte to build it. If fewer people are using a framework, there will be less funding. If more people use it, more money.
> I don't think it's sensible to extrapolate from current trends about LLM coding anyway.
The actual tools themselves are using React. Bolt, a UI design LLM, uses React by default. I don't even think there's an option to use a different framework right now. These tools have taken over the industry, and have absolutely exploded in popularity in the few years they've been available. This is going to create a snowball effect.
> This stuff is barely a few years old and we want to make confident predictions about it?
I don't think you read the article as closely as you think you did. Saying "React has probably spiked in popularity because LLMs use it by default" isn't that controversial. And it's true. And I don't think it's a long shot to say "If there's less data associated with a framework, it'll be less likely to be used by these tools and then less likely to be used at all." In fact, it feels like a pretty obvious conclusion.
We can ignore what is clearly happening (which even as a React dev I don't want because it WILL limit my future options) or work to make sure those tools are offering other defaults.
> But if fewer people are exposed to those frameworks, then surely that means they will be less popular?
I agree, but I don't think the data suggests that is what's happening. The data presented in the article shows only that the number of new sites made with React has increased greatly since LLMs arrived on the scene. But there's a base rate fallacy here - we aren't shown data for any other frameworks!
>Of course it is.
That's not what I mean by a zero-sum game. There isn't a fixed number of websites that different frameworks are taking a share of (this would be a zero-sum game). The number of websites itself has massively increased since LLMs arrived on the scene. You can very quickly spin up 100 new sites using your new framework without all the other frameworks "losing" 100 sites, you know what I mean? Similarly I think the number of people making websites has exploded for the same reason.
And this is another explanation for the data in the article - that there are simply way more sites being created now that it's so trivial for anyone to make one. Have a look at the StackExchange links I gave in my last comment. There isn't much evidence there that React is overwhelming the industry (especially amongst professional devs), although I grant you it would be difficult to measure if it were true.
> The actual tools themselves are using React. [...] These tools have taken over the industry.
Yes, but so have plenty of other tools that don't use React by default, like Claude Code or Codex. There are plenty of new websites being made across all of the major frameworks.
> I don't think you read the article as closely as you think you do.
Do you mind cutting it out with the ad-hominems? I've been nothing but respectful to you, and in each of your replies you've made little jabs at me about "not understanding the article". I just disagree with you, friend, be nice =)
LLMs will not abstract away framework choice. They will concrete it away. React is for humans. You'll know we're out of the AI stoneage when coding models just generate direct machine instruction, because their output won't need to be touched by humans.
I feel like there could be a loophole here for the new-framework-author. Stick to using JSX for the view; JSX is just syntax sugar for built-in React functions for constructing a tree, which can be easily swapped out for your own implementation. I recall years ago using a Babel plugin that just emitted static HTML from JSX. I know Vue.js v2 also had JSX support that way.
I think LLMs, despite already being trained massively on React, can easily adapt their output to suit a new framework's specific API surface with a simple adjustment to the prompt. Maybe include an abbreviated list of type/function signatures that are specific to your new framework and just tell the LLM to use JSX for the views?
What I think will definitely be a challenge for new library authors in the age of LLMs is state management. There are already tons of libraries that basically achieve the same thing but have vastly different APIs. In this case, new lib-authors may be forced to just write pluggable re-implementations of existing libraries to enable LLMs to emit compilable/runnable code. Though I don't know of any state management library that dominates the web like React does with the view layer.
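To make the loophole above concrete: Babel and TypeScript both let you point JSX at any factory function (the `pragma` / `jsxFactory` option), so a new framework can swap in its own. A toy factory that renders JSX straight to static HTML strings might look like this - the name `h` and everything else here is a hypothetical sketch, not any real library's API:

```javascript
// Toy JSX factory rendering to static HTML strings. With Babel's
// `pragma: "h"` (or TypeScript's `jsxFactory: "h"`), the JSX
// <p class="a">hi</p> compiles to h("p", { class: "a" }, "hi").
// Hypothetical sketch for illustration only.
function h(tag, props, ...children) {
  const attrs = Object.entries(props ?? {})
    .map(([k, v]) => ` ${k}="${String(v)}"`)
    .join("");
  const body = children
    .map((c) => (Array.isArray(c) ? c.join("") : String(c)))
    .join("");
  return `<${tag}${attrs}>${body}</${tag}>`;
}

// What <ul id="nav"><li>a</li><li>b</li></ul> compiles down to:
const html = h("ul", { id: "nav" }, h("li", null, "a"), h("li", null, "b"));
// html === '<ul id="nav"><li>a</li><li>b</li></ul>'
```

Because the LLM only ever emits the JSX, swapping the factory underneath changes the framework without changing anything in its training distribution.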
Huh - that's actually pretty interesting and I hadn't thought of that as an option. I know Preact was built as a faster alternative while being broadly compatible, but what you are describing is maybe even blending the technologies as that short circuit. Neat.
LLMs are great at HTMX and Python. Both Claude Code and Codex do well at it so I’m fine with things like that. React is fine but HTMX does well. I also frequently used to copy Claude’s generated React things into ChatGPT and ask it to rewrite them in Vanilla JS and it would work but that was a year ago when artifacts were just being launched.
Sure. Is there a competitive advantage to people who know COBOL and never bothered to learn Java?
At the moment I still consider it a tool alongside all other tool, or else a business strategy next to e.g. outsourcing. My job didn't go away because there's 1000x more of them overseas. But likewise, it also didn't go away because there's individuals 1000x better (educated, developed, paid, connected) than me in the US.
Sure it has a lot of staying power because of network effects (and qualities like backwards compatibility and gaming). But it's not a terminal, self-reinforcing snowball, force of nature like the article implies React is.
There are some properties that ensure its moat. Developers need to learn React, and there's not much transferability of knowledge from the underlying tech. If I know Debian, I can understand most of Ubuntu. However, if I know HTML, I'll probably understand React even less. This is a great tactic for vendor lock-in not only at the infrastructure level of a company, but at the programmer level.
Thank god. The days people kept inventing new JS frameworks or even dialects (coffeescript, remember?) every three months couldn't be gone fast enough.
> Thank god. The days people kept inventing new JS frameworks or even dialects (coffeescript, remember?) every three months couldn't be gone fast enough.
Coffeescript helped Javascript to evolve the right way, so in retrospect, it was absolutely a good thing. It's like people here don't remember the days of ES3 or ES5...
And these days? Look at TypeScript right now: TypeScript is not JavaScript.
One of the guiding principles of typescript is that its semantics should be consistent with ES. This was not the case for coffeescript. I think TS is doing it the right way.
Coffeescript was great though, because at the time Javascript was growing fast but the language was developing slowly or not at all. There was also AtScript for a little while, which added annotations because TypeScript didn't want to add them; they eventually budged and AtScript was dead. Then there was a fork of Node, because Node at the time was still tightly controlled by Joyent, whereas its fork, io.js, used an open governance model. It was eventually merged back into Node.
TL;DR sometimes you need to make an alternative to get the original to move.
I noticed something similar. Even non technical clients now come with technical requirements because they use chatgpt, and it's always a react app.
Poking at client reqs is such a high-value skill. Most freelancers will just build what the client asks - "ok, here's a React frontend with a Postgres DB and a CRUD app for your ecommerce website" - instead of asking what the functional requirements are. Maybe it can be a Shopify thing, or just posting it on Amazon, or maybe a plain HTML app (optionally with JS).
It can be valid to ask for a brick house if you know what the other ways to build a house are, but if you just asked chatgpt for a house plan and it said "bricks", because it's the most housey thing and you said ok because it rings a bell and sounds housey, having a dev that asks and tells you about wooden houses or steel beams or concrete is the best that can happen.
I appreciate when it happens the other way around, I go to a lawyer and tell them I want a corp, they start off assuming I know my shit, and after 5 minutes we are like, oh I don't want a corp
Because of benchmarking LLMs have also been pushed towards fluency in Python, and related frameworks like Django and Flask. For example, SWE-Bench Verified is nearly 50% Django framework PR tasks: https://epoch.ai/blog/what-skills-does-swe-bench-verified-ev...
It will be interesting to see how durable these biases are as labs work towards developing more capable small models that are less reliant on memorized information. My naive instinct is that these biases will be less salient over time as context windows improve and models become increasingly capable of processing documentation as a part of their code writing loop, but also that, in the absence of instruction to the contrary, the models will favor working with these tools as a default for quite some time.
Yes, new frameworks will have a harder time getting uptake.
Worse, with LLMs easily generating boilerplate, there's less pressure to make old framework code concise or clear, and the superior usability of a new framework won't be a big draw.
But coding is a primary application/profit center, and you can be sure they'll reduce the latency between release and model support, and they'll start to emphasize/suggest new frameworks/paradigms as a distinguishing feature.
My concern is about gaming the system, like SEO. If LLM coding is the gatekeeper, they'll be corrupted by companies seeking access/exposure. Developer uptake used to be a reasonable measure of quality, but in this new world it might only reflect exposure.
I'm not really sure why this focuses so much on React, when it's a general "issue"/"feature"
More broadly, obviously there is some pressure to use a framework/library/programminglang/editor that has better LLM training. But even before LLMs.. you'd want to choose the one that has more SO questions, more blog posts and books published about it. The one where you can hire experienced programmers.
New players have a certain activation energy they need to overcome - which is probably good, because it slows down the churn of shiny new things with incremental improvements. I think a paradigm shift is sufficient though. Programmers like new shiny things - especially the good ones that are passionate about their craft.
I used it as an example because I felt the data was pretty clear. I also felt that it follows a very human pattern (generative tools need customers, like other tools before, so they go with what the industry is demanding)... but now we're seeing an acceleration.
"You’re not competing with React’s technical merits—you’re competing with React’s statistical dominance" - is the industry so bad that it can't choose a framework on technical merit?
As someone who is currently writing their own js framework, llms are able to generate code quite easily. So I am not worried that we will be able to see new frameworks.
Now, about the incentives? Probably lower inference costs for LLMs, which probably means that they are more legible than the current state of the art for humans as well.
Fewer API changes than, say, React also means that the generated code has less branching, although LLMs can adapt anyway. Cheaper.
Will probably be closer to the platform too (vanillaJS).
It would be nice if JSX was natively supported in the DOM as some kind of syntactic sugar for components by the browsers; this would fix the cycle of abuse and allow us to eventually get back to less compilation, except for TypeScript.
A built-in feature of frameworks is that you constantly have to update your code to stay in sync with the latest version of the framework; this creates work for contractors. It's like a taxation of software.
The last time we updated react we had to do it because we were using a version so old it had been unsupported for several years. While also using modern components and libraries.
Nothing different, then - before AI it would have been search engines or social networks / upvotes of the masses.
The common factor is the reader, taking what the search engine, SO commenter or AI says as gospel. A good software developer can judge multiple inputs on their own.
And if someone doesn't care what an AI does it really isn't important what they are having it build or what tool it uses, clearly.
If you cannot code, what difference is to you the flavor of code generated? Do you also complain when the CISC instruction set is used instead of RISC?
The LLM owners also don't like to burn the knowledge into model weights or prompts, sharing some concerns of the post, e.g. freshness. RAG is the new hotness.
I embraced this when I had the same realisation that React will get reinforced the most, and vibe-coded something in it.
I had to ditch the whole thing and rewrite it in Vue when it got big enough that I couldn’t debug it without learning React.
Vibe-coding something in a stack you know or want to know means you can get off your high horse and peek into the engine.
I still agree with the sentiment that React is winning, if the competition is one of volume. But other frameworks won't stop existing unless you believe that people exclusively choose what is dominant. There will always be artisans, even after all the old people who learned the alternatives have been flushed out.
You could always write some code in a different framework and help spread around other types of reusable template frameworks. Like in XSLT for example...oh wait... that's been killed off too.
I don't buy this. The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Compilation and bundling made a lot more sense before browsers got ES modules and HTTP/2. Today you can get a long way without a bundler... and in a world where LLMs are generating code that's actually a more productive way to work.
Telling any LLM "use Vanilla JS" is enough to break them out of the React cycle, and the resulting code works well and, crucially, doesn't require a round-trip through some node.js build mechanism just to start using it.
Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way. The problems React solve are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
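As a concrete illustration of the kind of routine meant here, a minimal vanilla-JS store with subscriptions is a few lines. Every name in this sketch is made up for illustration; it isn't any particular library's API:

```javascript
// Minimal observable store: setState patches the state and notifies
// subscribers. This is the "annoying state-syncing routine" React
// otherwise handles for you. Illustrative sketch only.
function createStore(initial) {
  let state = { ...initial };
  const listeners = new Set();
  return {
    getState: () => state,
    setState(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      fn(state); // fire once immediately so the UI starts in sync
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// In a browser you'd wire a subscriber straight to the DOM, e.g.:
// store.subscribe(s => { countEl.textContent = s.count; });
```

That's roughly the whole trick: no build step, no dependency, and an LLM will happily regenerate or extend it per project.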
Having a build step more than pays for itself just in terms of detecting errors without having to execute that codepath. The friction is becoming less and less as the compilation step is increasingly built into your project/dependency management tool and increasingly faster (helped by the trend towards Rust or Go now that the required functionality is relatively well-understood)
> The problems React solve are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
An LLM can probably generate the ad hoc, informally-specified, bug-ridden, slow implementation of half of React that every non-React application needs very quickly, sure. But can the LLM help you comprehend it (or fix bugs in it) any faster? That's always been the biggest cost, not the initial write.
To see how fast a properly engineered app can be if it avoids using shitty JS frameworks, just look at Fastmail. The comparison with Gmail is almost comical: every UI element responds immediately, where Gmail renders at 5 fps.
> [literally unusable]
> [one of the most successful web apps]
> [look at how bad it is]
Your standards might be uncalibrated with reality
I use gmail every day and it's fine, apart from when they push AI features I don't want, but I can't blame that on the framework
We're all used to it and that's fine. But it's still bad. We're still wasting, like, 10,000x more resources than we should to do basic things, and stuff still only works, like, 50% of the time.
And IT IS SLOW, despite your experience, which is highly dependent on how much hardware you can throw at it.
> [literally unusable]
It's gotten a lot of critique over the complexity it has over the years, the same way how Next.js also has. I've also seen a frickload of render loops and in some cases think Vue just does hooks better (Composition API) and also state management better (Pinia, closer to MobX than Redux), meanwhile their SFC compiler doesn't seem to support TypeScript types properly so if you try to do extends and need to create wrapper components around non-trivial libraries (e.g. PrimeVue) then you're in for a bunch of pain.
I don't think any mainstream options are literally unusable, but they all kinda suck in subtly different ways. Then again, so did jQuery for anything non-trivial. And also most back end options also kind of suck, just in different ways (e.g. Spring Boot version upgrades across major versions and how verbose the configuration is, the performance of Python and the dependency management at least before uv), same could be said for DBs (PostgreSQL is pretty decent, MariaDB/MySQL has its hard edges) and pretty much everything else.
Doesn't mean that you can't critique what's bad in hopes of things maybe improving a bit (that Spring Boot config is still better than Spring XML config). GMail is mostly okay as is, then again the standards for GUI software are so low they're on the floor - also extends to Electron apps.
The past couple of weeks I've been having loading times of up to a minute to open Gmail.
No idea what they are up to. Loading Google Workspace or something like that takes eons.
The problem is that you’ve (and we all have) learned to accept absolute garbage. It’s clearly possible to do better, because smaller companies have managed to build well functioning software that exceeds the performance of Google’s slop by a factor of 50.
I’m not saying RETVRN to plain JS, but clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.
There was also a time where once a website or application loaded, scrolling never lagged. Now when something scrolls smoothly it's unusual, and I appreciate it. Discord has done a really good job improving their laggy scroll, but it's still unbelievably laggy for literal text and images, and they use animation tricks to cover up some of the lag.
Anyone shipping production code will one way or another have some kind of build step, whether that's bundling, minification, typechecking, linting, fingerprinting files, etc. At that point it makes little difference if you add a build step for compilation.
I'm sympathetic to not wanting to deal with build processes; I try to avoid them where I can in my side projects. The main web project I've been working on for the last year has no build step and uses vanilla JS & web components. But it's also not a consumer-facing product.
I think there's friction for sure, but I just can't see this being an issue for most cases where a build step is already in place for other concerns. And developers are fairly familiar with build steps, especially if you do anything outside the web in C/C++ or Java/C# or Rust or whatever.
If you've got a huge project, even very quick bundlers will end up slowing down considerably (although hot reload should still be pretty quick because it still just affects individual files). But in general, bundlers are pretty damn quick these days, and getting even quicker. And of course, they're still fully optional, even for a framework like React.
As a recovering C++ programmer the idea that a basically instant compilation step is a source of friction is hysterical to me.
Try waiting overnight for a build to finish. Frontend devs don't know they're born. It takes like 5 minutes to set up vite.
React and TS people are making sure that is not the case anymore, allegedly for our own benefit.
E.g., if most developers are telling their LLMs “build me a react app” or “I want to build a website with the most popular framework,” they were going to end up with a react app with or without LLMs existing.
I’m sure a lot of vibecoders are letting Jesus take the wheel, but in my vibecoding sessions I definitely tend to have some kind of discussion about my needs and requirements before choosing a framework. I’m also seeing more developers talking about using LLMs with instructions files and project requirement documents that they write and store in their repo before getting started with prompting, and once you discover that paradigm you don’t tend to go back.
I'd note that people learn and accumulate knowledge as new languages and frameworks develop, despite there being established practices. There is a momentum for sure, but it doesn't preclude development of new things.
I think that while it may be easier to develop with LLMs in languages and frameworks the LLM may “know” best, in theory, models could be trained to code well in any language and could even promote languages that either the sponsoring company or LLM “prefers”.
(For the AI-sceptics, you can read this as models are equally bad at all code)
My actual long term hope is that in the future we won't need to think about frameworks at all: https://paul.kinlan.me/will-we-care-about-frameworks-in-the-...
Yes! That's exactly what I was trying to get at.
Is there something about the web — with its eternal backwards compatibility, crazy array of implementations, and 3 programming languages — that seems like it's the ideal platform for a framework-free existence?
Maybe if we bake all of the ideas into JavaScript itself, but then where does it stop? Is PHP done evolving? Does Java, by itself, do everything as well as you want out of Spring?
I sincerely doubt that either of JSX's syntax or its semantics under React's transforms would make it into a W3 or WHAT spec as they exist today.
Not exclusively. SolidJS, for example, transforms the syntax into string templates with holes in them. The "each element is a function call" approach works really well if those calls are cheap (i.e. with a VDOM), but if you're generating DOM nodes, you typically want to group all your calls together and pass the result to the browser as a string and let it figure out how to parse it.
For example, if you've got some JSX describing a nested tree of elements, you don't want that to become nested calls to some wrapper around `document.createElement`, because that's slow. What you want instead is to emit the static parts as a single HTML string with holes for the dynamic values. This lets the browser do more of the hard parsing and DOM-construction work in native code, and makes everything a lot more efficient. And it isn't possible if JSX is defined to only have the semantics that it has in React. The library [0] I wrote that uses JSX converts expression attributes into parameter-less lambdas before providing them as function parameters or object properties. This is a different behavior from React's build tools or any of TypeScript's JSX options. But it's not inconsistent with the spec.
[0] https://mutraction.dev/
The space that the Babel/TypeScript JSX options describe is a constructive space for more than just React.
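A rough sketch of the contrast described above (the template-with-holes output shape is assumed for illustration, loosely modeled on what compilers like SolidJS emit, not their actual compiled code):

```javascript
// Two ways a JSX compiler can lower <div class="card"><p>{msg}</p></div>:
//
// 1. React-style: one function call per element, evaluated every render:
//      createElement("div", { className: "card" },
//        createElement("p", null, msg))
//
// 2. Template-style (assumed shape, for illustration): one static HTML
//    string with holes, which the browser's native parser handles once.
//    A tagged template interleaves static chunks with dynamic values:
function renderTemplate(parts, ...values) {
  return parts.reduce((out, part, i) => out + part + (values[i] ?? ""), "");
}

// In a browser you would assign the result to template.innerHTML and
// clone the parsed tree on each render; here we just build the string.
const msg = "hello";
const html = renderTemplate`<div class="card"><p>${msg}</p></div>`;
// html === '<div class="card"><p>hello</p></div>'
```

The point is that both lowerings are legitimate readings of the same JSX, which is why pinning JSX's semantics to React's would be a loss.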
Unfortunately there isn't any one preferred alternative convention. But if you ignore this and roll your own, it will almost certainly be better. Not great for reading other people's code, but you can make your own files pretty clear.
If you're thinking of _Redux_, are you referring to the early conventions of "folder-by-type" file structures? ie `actions/todos.js`, `reducers/todos.js`, `constants/todos.js`? If so, there's perfectly understandable reasons why we ended up there:
- as programmers we try to "keep code of different kinds in different files", so you'd separate action creator definitions from reducer logic
- but we want to have consistency and avoid accidental typos, especially in untyped plain JS, so you'd extract the string constants like `const ADD_TODO = "ADD_TODO"` into their own file for reuse in both places
To be clear that was never a requirement for using Redux, although the docs did show that pattern. We eventually concluded that the "folder-by-feature" approach was better:
- https://redux.js.org/style-guide/#structure-files-as-feature...
and in fact the original "Redux Ducks" approach for single-file logic was created by the community just a couple months after Redux was created:
- https://github.com/erikras/ducks-modular-redux
which is what we later turned into "Redux slices", a single file with a `createSlice` call that has your reducer logic and generates the action creators for you:
- https://redux.js.org/tutorials/essentials/part-2-app-structu...
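For readers who haven't seen the slice pattern, here's a toy re-implementation of the idea in plain JS. The shape is modeled on Redux Toolkit's `createSlice` but heavily simplified (no Immer, no extra options); it is not the real API:

```javascript
// Toy "slice": one definition generates both the action creators and
// the reducer, replacing the old three-file actions/constants/reducers
// split. Simplified sketch of the Redux Toolkit idea, not the real API.
function createSlice({ name, initialState, reducers }) {
  const actions = {};
  for (const key of Object.keys(reducers)) {
    const type = `${name}/${key}`; // e.g. "todos/addTodo"
    actions[key] = (payload) => ({ type, payload });
  }
  const reducer = (state = initialState, action) => {
    const [sliceName, key] = (action.type ?? "").split("/");
    if (sliceName === name && reducers[key]) {
      return reducers[key](state, action);
    }
    return state;
  };
  return { actions, reducer };
}

const todos = createSlice({
  name: "todos",
  initialState: [],
  reducers: {
    // Plain immutable update; the real library lets you "mutate" via Immer.
    addTodo: (state, action) => [...state, action.payload],
  },
});
```

The single-file shape is exactly what made the ducks pattern attractive: the string constants, action creators, and reducer logic can't drift out of sync because they're generated from one definition.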
I can't find the author making that argument. Can you point to where they're declaring that React has permanently won?
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
This is orthogonal to what the article is addressing.
> Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way
If you didn't read the article, I think you should. Because this is generally the conclusion that the author comes to: that in order to break out of React's grip, LLMs can be trained to use other frameworks.
So I guess I'm in agreement with the author: let's actively work to make that not happen.
Like, if you really believe that in the future 95% of code will be written by LLMs, then there can never be a Python 4, because there would be no humans to create new training data.
To me, this is evidence that LLMs won’t be writing 95% of code, unless we really do get to some sort of mythical “AGI” where the AI can learn entirely from its own output and improve itself exponentially. (In which case there still wouldn’t be a Python 4; it would be some indecipherable LLM speak.) I’ll believe that when I see it.
Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
For many languages you can fit all of that, including illustrative examples, in a few thousand tokens of text.
So ship your new programming language with a Claude Skills style document and give your early adopters the ability to write it with LLMs. The LLMs should handle that very well, especially if they get to run an agentic loop against a compiler or even a linter that you provide.
Is it better to specify the parameters and metrics - aka non-functional requirements - that matter for the application, and let LLMs decide? For that matter, why even provide that? Aren't the non-functional requirements generally understood?
It is the specifics that would change—scale to 100K monthly users, keep infrastructure costs below $800K, or integrate with existing Stripe APIs.
I think that’s correct in terms of the surface-level details but less true for the more abstract concepts.
If you’ve tried any of the popular AI builders that use Supabase/PostgREST as a backend, for instance Lovable, you’ll see that they are constantly failing because of how unusual PostgREST is. I’m sure these platforms have “AI cheat sheets” to try to solve this, but you still see constant problems with things like RLS, for instance.
OK, it wasn't a Claude Skill, but it was done using Claude.
That said, part of what he did with Cursed was get LLMs to read its own documentation and use that to test and demonstrate the language.
Also, React was extremely popular before any LLMs were out there. I would not ascribe much of the growth to vibe coding.
It's fine. I've been using codex on some code bases with this with pretty alright results. I also use codex to generate typescript/react code. I'm getting similar results. I had a little wow moment when I asked it to add some buttons and then afterwards realized that it had figured out the localization framework (one of my creations) and added translations for the button labels. All unprompted. It clearly understood the code base and how I like things done. So it just went ahead and did them. The syntax is not a problem. The obscurity of the library is not a problem as long as you give it enough to work with. It does less well coding something from scratch than working on existing code.
IMHO, things like react are optimized for humans. They aren't actually that optimal for LLMs to work with. It's actually impressive that they can. Too much expressiveness and ambiguity. LLMs like things spelled out. Humans don't. We're still doing things manually so it helps if we can read and edit what the LLMs do. But that won't stay like that.
I think in a few years, we'll start seeing languages and frameworks that are more optimal for Agentic coding tools as they will be the main users. So, stronger typing. More verbosity and less ambiguity.
In front-end as well—I've been able to go much farther for simple projects using alpine than more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice but it provides a declarative programming style that forces the LLM to write more correct code faster.
In general, I think introspectible frameworks have a better case, and whether they're present in training data or not becomes more irrelevant as the introspectibility increases. Wiring the Elm compiler to a post-write hook means I basically have not written front-end code in 4 or 5 months. Using web standards and micro frameworks with no build step means the LLM can inspect the behaviour using the chrome dev tools MCP and check its work much more effectively than having to deal with the React loop. The ecosystem is so fragmented there, I'm not sure about the "quality because of quantity of training data" argument.
What I was trying to get at in the post is that net new experiences are where I see a massive delta.
The 'LSP' that would allow new frameworks or languages to shine with coding agents is already mostly here, and it's things like hooks, MCPs, ACP, etc. They keep the code generation aligned with the final intent, and syntactically correct from the get go, with the help of very advanced compilers/linters that explain to the LLM the context it's missing.
That's without hypothesising about future model upgrades where fine-tuning becomes simple and cheap, and local, framework-specific models become the norm. Then React's advantage (its presence in the training data) becomes a toll (conflicting versions, fragmented ecosystem).
I also have a huge bias against the JavaScript/TypeScript ecosystem; it gives me headaches. So I could be wrong.
And LLMs can create idiomatic CRUD pages using it. I just needed to include one example in AGENTS.md.
TypeScript, however, does scale pretty well. But now you've added a compiler and bundler, and might as well use some framework.
I’ve written some pretty complicated vanilla JS and it works fine. I’m not dealing with other people’s crappy code, however, so YMMV.
At this point there are several large Rust UI libraries that try to replicate this pattern in web assembly, and they all had enough time to appear and mature without the underlying JSX+hooks model becoming outdated. To me it’s a clear sign that JS world slowed down.
Server-side components became a thing, as well as the React compiler. And libraries in the React (and JS at large) ecosystem are pretty liberal with breaking changes; a few months is enough to end up with multiple out-of-date libraries whose upgrades each require handling different breaking changes.
React Native is its own pit of hell.
It did slow down a little compared to a few years ago, but it's still not great.
React had just updated and the documentation hadn’t.
I then discovered that Meta owns React so I got frustrated as hell with their obfuscation and ripped out all of the React and turned what was left into vanilla html+js.
I also don’t ‘KTH-Trust’ Meta of all corporations to have a compile step for a web technology.
I try to filter out such people in hiring nowadays but sometimes you miss, or come into an existing team with these issues
You don't buy what, exactly?
> As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
I think you and others in this thread have either just skimmed the article or just read the headline. The point isn't that you can't use LLMs for other languages; it's that the creators of these tools AREN'T using other languages for them. Yes, LLMs can write Angular. But if there's less data to train on, the results won't be as good. And because of this, it's creating a snowball effect.
To me, they don't buy the argument that the snowball effect is significant enough to overcome technical merits of different frontend frameworks.
And I'll add that older libraries like React have at least one disadvantage: there's a lot of outdated React code out there that AI is being trained on.
> there's a lot of outdated React code out there that AI is being trained on.
Yea, but that's better than no code as far as an LLM is concerned, which is what this article is about.
And specifically Svelte has their own MCP to help LLMs https://svelte.dev/docs/mcp/overview
I wonder if React has something to keep AI on their toes about best practices.
Ahh, I wouldn't hold my breath.
And to your point, I guess another thing Svelte has is its compatibility with plain vanilla JS, meaning (I think) code doesn't necessarily have to be "Svelte" code to still work with Svelte.
But if fewer people are exposed to those frameworks, then surely they will become less popular? I'm struggling to understand your argument.
> The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game,
Of course it is. If I'm using React to build a site, I'm not using Svelte to build it. If fewer people are using a framework, there will be less funding. If more people use it, more money.
> I don't think it's sensible to extrapolate from current trends about LLM coding anyway.
The actual tools themselves are using React. Bolt, a UI-design LLM, uses React by default; I don't even think there's an option to use a different language right now. These tools have taken over the industry and have absolutely exploded in popularity in the few years they've been available. This is going to create a snowball effect.
> This stuff is barely a few years old and we want to make confident predictions about it?
I don't think you read the article as closely as you think you did. Saying "React has probably spiked in popularity because LLMs use it by default" isn't that controversial. And it's true. And I don't think it's a long shot to say "If there's less data associated with a framework, it'll be less likely to be used by these tools and then less likely to be used at all." In fact, it feels like a pretty obvious conclusion.
We can ignore what is clearly happening (which even as a React dev I don't want because it WILL limit my future options) or work to make sure those tools are offering other defaults.
I agree, but I don't think the data suggests that is what's happening. The data presented in the article shows only that the number of new sites made with React has increased greatly since LLMs arrived on the scene. But there's a base rate fallacy here - we aren't shown data for any other frameworks!
>Of course it is.
That's not what I mean by a zero-sum game. There isn't a fixed number of websites that different frameworks are taking a share of (this would be a zero-sum game). The number of websites itself has massively increased since LLMs arrived on the scene. You can very quickly spin up 100 new sites using your new framework without all the other frameworks "losing" 100 sites, you know what I mean? Similarly I think the number of people making websites has exploded for the same reason.
And this is another explanation for the data in the article - that there are simply way more sites being created now that it's so trivial for anyone to make one. Have a look at the StackExchange links I gave in my last comment. There isn't much evidence there that React is overwhelming the industry (especially amongst professional devs), although I grant you it would be difficult to measure if it were true.
> The actual tools themselves are using React. [...] These tools have taken over the industry.
Yes, but so have plenty of other tools that don't use React by default, like Claude Code or Codex. There are plenty of new websites being made across all of the major frameworks.
> I don't think you read the article as closely as you think you do.
Do you mind cutting it out with the ad-hominems? I've been nothing but respectful to you, and in each of your replies you've made little jabs at me about "not understanding the article". I just disagree with you, friend, be nice =)
I think LLMs, despite already being trained massively on React, can easily adapt their output to suit a new framework's specific API surface with a simple adjustment to the prompt. Maybe include an abbreviated list of type/function signatures that are specific to your new framework and just tell the LLM to use JSX for the views?
What I think will definitely be a challenge for new library authors in the age of LLMs is state management. There are already tons of libraries that basically achieve the same thing but have vastly different APIs. In this case, new library authors may be forced to just write pluggable re-implementations of existing libraries just to enable LLMs to emit compilable/runnable code. Though I don't know of any state-management library that dominates the web the way React does the view layer.
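For what it's worth, the "annoying state-syncing routine" at issue upthread really is small. A hypothetical sketch of the minimal observable store an LLM might emit on request (`createStore` is an illustrative name, not any particular library's API):

```typescript
// A minimal observable store: set() replaces the state and notifies
// every subscriber; subscribe() returns an unsubscribe handle.
type Listener<T> = (state: T) => void;

function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener<T>>();
  return {
    get: () => state,
    set(next: T) {
      state = next;
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn: Listener<T>) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// Usage: a counter whose subscribers see every update until they unsubscribe.
const counter = createStore(0);
const seen: number[] = [];
const unsubscribe = counter.subscribe((n) => seen.push(n));
counter.set(1);
counter.set(2);
unsubscribe();
counter.set(3); // not observed by the unsubscribed listener
```

The hard part for a new library isn't generating something like this; it's that every codebase ends up with a slightly different API shape, which is exactly the fragmentation that makes training data thin.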
That's what I did. https://mutraction.dev/
My framework has approximately zero users and this is not a plug, but the idea is sound and it works.
At the moment I still consider it a tool alongside all the other tools, or else a business strategy next to e.g. outsourcing. My job didn't go away because there are 1000x more of them overseas. But likewise, it also didn't go away because there are individuals 1000x better (educated, developed, paid, connected) than me in the US.
This too shall pass.
Sure, it has a lot of staying power because of network effects (and qualities like backwards compatibility and gaming). But it's not the terminal, self-reinforcing, force-of-nature snowball the article implies React is.
CoffeeScript helped JavaScript evolve the right way, so in retrospect it was absolutely a good thing. It's like people here don't remember the days of ES3 or ES5...
And these days? Look at TypeScript right now: TypeScript is not JavaScript.
TL;DR sometimes you need to make an alternative to get the original to move.
Poking at client requirements is such a high-value skill. Most freelancers will just build what the client asks ("ok, here's a React frontend with a Postgres DB and a CRUD app for your ecommerce website") instead of asking what the functional requirements are: maybe it can be a Shopify thing, or just post it on Amazon, or maybe a plain HTML app (optionally with JS).
It can be valid to ask for a brick house if you know what the other ways to build a house are. But if you just asked ChatGPT for a house plan and it said "bricks" because that's the most housey thing, and you said ok because it rings a bell and sounds housey, then having a dev who asks and tells you about wooden houses or steel beams or concrete is the best thing that can happen.
I appreciate when it happens the other way around: I go to a lawyer and tell them I want a corp, they start off assuming I know my shit, and after 5 minutes we're like, oh, I don't want a corp.
It will be interesting to see how durable these biases are as labs work towards developing more capable small models that are less reliant on memorized information. My naive instinct is that these biases will be less salient over time as context windows improve and models become increasingly capable of processing documentation as a part of their code writing loop, but also that, in the absence of instruction to the contrary, the models will favor working with these tools as a default for quite some time.
Worse, with LLMs easily generating boilerplate, there's less pressure to make old framework code concise or clear, and the superior usability of a new framework won't be a big draw.
But coding is a primary application/profit center, and you can be sure they'll reduce the latency between release and model support, and they'll start to emphasize/suggest new frameworks/paradigms as a distinguishing feature.
My concern is about gaming the system, like SEO. If LLM coding is the gatekeeper, they'll be corrupted by companies seeking access/exposure. Developer uptake used to be a reasonable measure of quality, but in this new world it might only reflect exposure.
More broadly, there is obviously some pressure to use a framework/library/programming language/editor that has better LLM training. But even before LLMs, you'd want to choose the one that has more SO questions and more blog posts and books published about it. The one where you can hire experienced programmers.
New players have a certain activation energy they need to overcome, which is probably good, because it slows down the churn of shiny new things with incremental improvements. I think a paradigm shift is sufficient, though. Programmers like shiny new things, especially the good ones who are passionate about their craft.
I absolutely wouldn't be swapping because the output 'isn't good enough'.
Now, about the incentives? Probably lower inference costs for LLMs, which probably means frameworks that are more legible for humans than the current state of the art as well.
Fewer API changes than, say, React also means the generated code has less branching, although LLMs can adapt anyway. Cheaper.
It will probably be closer to the platform too (vanilla JS).
The common factor is the reader, taking what the search engine, SO commenter, or AI says as gospel. A good software developer can judge multiple inputs on their own.
And if someone doesn't care what an AI does, it really isn't important what they are having it build or what tool it uses, clearly.
Should have made graphs testing LLMs with different frameworks.
I had to ditch the whole thing and rewrite it in Vue when it got big enough that I couldn’t debug it without learning React.
Vibe-coding something in a stack you know or want to know means you can get off your high horse and peek into the engine.
I still agree with the sentiment that React is winning, if the competition is about volume. But other frameworks won’t stop existing unless you believe that people exclusively choose what is dominant. There will always be artisans, even after all the old people who learned the alternatives have been flushed out.
In the meantime real engineers still use the proper tools.