People are looking for a satisfying non-leaky abstraction to build upon and they don't find it with web technologies. They get close, but those last few pieces never quite fit, and we lack the power to reshape the pieces, so we tear out all the pieces and try again. Maybe this next time we'll find a better way to fit them together.
Of course I hear plenty of people complaining that building apps on top of hypertext is a fundamental mistake, and so we can't expect it to ever really work, but the way you put it really made it click for me. The problem isn't that we haven't solved the puzzle; it's that the pieces don't actually fit together. Thank you.
Honestly, that's basically happened, but just a generation later.
Most modern frameworks are doing very similar things under the hood. Svelte, SolidJS, and modern Vue (i.e. with Vapor mode) all basically do the same thing: they turn your template into an HTML string with holes in it, and then update those holes with dynamic data using signals to track when that data changes. Preact and Vue without Vapor mode also use signals to track reactivity, but use VDOM to rerender an entire component whenever a signal changes. And Angular is a bit different, but still generally moving over to using signals as the core reactivity mechanism.
The largest differences between these frameworks at this point are how they do templating (i.e. JSX vs HTML vs SFCs, and syntax differences) and the size of their ecosystems. Beyond that, they are all converging on very similar ideas.
The black sheep of the family here is React, which is doing basically the same thing it always has, although even there I notice a growing number of libraries and tools that are basically "use signals with React".
I'd argue the signal-based concept fits really well with the DOM and browsers in general. Signals ensure that the internal application data stays consistent, but you can have DOM elements (or components) with their own state, like text boxes or accordions. And because with many of these frameworks, templates directly return DOM nodes rather than VDOM objects, you're generally working directly with browser APIs when necessary.
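To make that concrete, here's a minimal sketch of the signal idea, written from scratch rather than taken from any particular framework; real implementations add batching, cleanup, and computed values on top of this.

```ts
// A minimal sketch of signal-based reactivity -- not any framework's actual
// implementation, just the mechanism described above.
type Effect = () => void;
let currentEffect: Effect | null = null;

function createSignal<T>(initial: T): [() => T, (next: T) => void] {
  let value = initial;
  const subscribers = new Set<Effect>();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // track whoever reads us
    return value;
  };
  const write = (next: T) => {
    value = next;
    subscribers.forEach((fn) => fn()); // rerun only the effects that read us
  };
  return [read, write];
}

function effect(fn: Effect) {
  currentEffect = fn;
  fn(); // run once so each signal can record this effect as a dependency
  currentEffect = null;
}

// The "template with holes" part: static HTML is created once, and only the
// hole is patched when the signal changes -- no component rerender, no VDOM.
const [count, setCount] = createSignal(0);
const el = document.createElement("p");
el.innerHTML = "Clicked <span></span> times";
const hole = el.querySelector("span")!;
document.body.append(el);

effect(() => { hole.textContent = String(count()); });
setCount(count() + 1); // updates just the <span>'s text
```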
This is all specific to frontend stuff, i.e. how you build an application that mostly lives on the client. Generally, the answer to that seems to be signals* plus your choice of templating system, unless you're using React, in which case the answer is pure render functions plus additional mechanisms for attaching state to them.
Where there's more exploration right now is how you build an application that spans multiple computers. This is generally hard; I don't think anyone has demonstrated an obvious way of doing it yet. In the old days, we had fully client-side applications, but syncing files was always done manually. Then we had applications that lived on external servers, but this meant you needed a network round trip to do anything meaningful. Then we moved the rendering back to the client, which meant the client could make some of its own decisions and reduced the number of round trips needed, but this bloats the client and makes it useless if the network goes down.
The challenge right now, then, is trying to figure out how to share code between client and server in such a way that
(a) the client doesn't have to do more than it needs to (no need to ship the logic for rendering an entire application just to handle some fancy tabs);
(b) the client can still do lots of useful things (optimistic updates, frontend validation, etc.);
(c) both client and server can be modelled as a single application, as opposed to two different ones with potentially different tooling, languages, and teams;
(d) the volatility of the network is always accounted for and does not break the application.
This is where most of the innovation in frontend frameworks is going. React has its RSCs (React Server Components: components that are written using React but render only on the server), and other frameworks are developing their own approaches. You also see it more widely with the growth of local-first software, which approaches the problem from a different angle (can we get rid of the server altogether?) but is essentially trying to solve the same core issues.

And importantly, I don't think anyone has solved these issues yet, even in older models of software. The reason this development is ongoing has nothing to do with the web platform (which is an incredible resource in many ways: a sandboxed mini-OS with incredibly powerful primitives); it's that this is a very old problem and we still need to figure out what to do about it.
* Or some other reactivity primitive or eventing system, but mostly signals.
We are running code on servers and clients, communicating between the two (crossing the network boundary), while our code often runs on millions of distributed hostile clients that we don't control.
It's inherently complex, and inherently hostile.
From my view, RSCs are the first solution to acknowledge these complexities and redesign the paradigms closer to first principles. That comes with a tougher mental model, because the problem space is inherently complex. Every prior or parallel solution attempts to paper over that complexity with an oversimplified abstraction.
HTMX (and Rails, PHP, etc.) leans too heavily on the server, client-only libraries give you no access to the server, and traditional JS SSR frameworks attempt to treat the server as just another client. Astro works because it drives you towards building largely static sites (leaning on build time and server-side routing aggressively).
RSCs balance most of these incentives and give you the power to access each environment at build time and at run time (at the page level or even the component level!). They make each environment fully powerful (server, client, and shared). And they manage to solve both streaming (Suspense and complex serialization) and diffing (navigating client-side while maintaining state and UI).
But people would rather lean on lazy tropes, as if RSCs only exist to sell server cycles or to further frontend complexity. No! They're just the first solution to accept that complexity and give developers the power to wield it. Long-term, I think people will come to learn the mental model and understand why RSCs exist. As some React core team members have said, this is kind of the way we should have always built websites: once you return to first principles, you end up with something that looks similar to RSCs [0]. I think others will solve these problems with simpler mental models in the future, but it's a damn good start and doesn't deserve the vitriol it gets.
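For anyone who hasn't seen the model, a sketch of the shape (the `db` data layer and the component names are made up, not a real API):

```tsx
// ArtistPage.tsx -- a server component: runs only on the server, can read
// data directly, and ships no JS for itself to the browser.
import { db } from "./db"; // hypothetical data layer, not a real package
import { LikeButton } from "./LikeButton";

export default async function ArtistPage({ id }: { id: string }) {
  const artist = await db.artists.find(id); // direct data access, no API route
  return (
    <article>
      <h1>{artist.name}</h1>
      <p>{artist.bio}</p>
      {/* Crossing the server/client boundary: props must be serializable */}
      <LikeButton artistId={artist.id} initialLikes={artist.likes} />
    </article>
  );
}
```

```tsx
// LikeButton.tsx -- 'use client' opts this subtree back into classic React:
// it gets bundled, hydrated, and can hold state and handle events.
"use client";
import { useState } from "react";

export function LikeButton(props: { artistId: string; initialLikes: number }) {
  const [likes, setLikes] = useState(props.initialLikes);
  return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>;
}
```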
Meanwhile, sync engines seem to actually solve these problems: the distributed data syncing and the client-side needs like optimistic updates, while also letting you avoid the complexity. And you can keep your server-first rendering.
To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX), and the only reason I think anyone really likes RSC is that there is so much money behind it, and relatively little behind sync engines. That said, I don't blame people for not even mentioning them, as they are newer. I've been working with one for the last year and it's an absolute delight, and probably the first genuine leap forward in frontend dev in the last decade, since React.
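For contrast, the sync-engine model boiled down. This is a toy in-memory stand-in, not Zero's (or any real engine's) API; a real engine persists the local replica and reconciles it with the server and other clients:

```ts
// Toy stand-in for a sync engine's local store -- illustrative only.
interface Todo { id: string; title: string; done: boolean }

function collection<T>() {
  const items: T[] = [];
  const subs = new Set<(items: T[]) => void>();
  return {
    subscribe(fn: (items: T[]) => void) { subs.add(fn); fn(items); },
    insert(item: T) {
      items.push(item);                 // optimistic: applied locally first
      subs.forEach((fn) => fn(items));  // UI updates immediately, no round-trip
      // ...a real engine would now sync to the server in the background,
      // reconcile conflicts, and broadcast the change to other clients
    },
  };
}

const todos = collection<Todo>();
todos.subscribe((items) => console.log(`${items.length} todo(s)`));
todos.insert({ id: crypto.randomUUID(), title: "Ship it", done: false });
```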
This isn't true, because RSCs let you slide back into classic React with a simple 'use client' (or lazy for pure client). So anywhere in the tree, you have that choice. If you want to make it at the root of a page (or component) you can, without forcing all pages to do the same.
> which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate
Again, I don't think this is true. What makes you say it's slow-feeling? Personally, I feel it's the opposite: my websites (and apps) are faster than before, with less code. Server-component data fetching solves the waterfall problem, and co-locating data retrieval closer to your APIs or data stores means faster round trips. And for slower fetches, you can use Suspense and serialize promises over the wire to prefetch, then unwrap those promises on the client, showing loading states while JSX and data stream from the server.
When you do want to do client-side data fetching, you still can. RSCs are also compatible with "no server", i.e. running your "server" code at build time.
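A sketch of that streaming pattern; the page and `fetchRecommendations` are illustrative, while `use()` and Suspense are the actual React APIs involved:

```tsx
// Page.tsx -- server component: start the slow fetch but don't await it.
// The pending promise itself is serialized and streamed to the client.
import { Suspense } from "react";
import { Recommendations } from "./Recommendations";
import { fetchRecommendations } from "./data"; // hypothetical slow data source

export default function Page() {
  const recsPromise = fetchRecommendations(); // no await: don't block the page
  return (
    <main>
      <h1>Store</h1>
      <Suspense fallback={<p>Loading recommendations...</p>}>
        <Recommendations promise={recsPromise} />
      </Suspense>
    </main>
  );
}
```

```tsx
// Recommendations.tsx -- client component: unwrap the streamed promise with
// use(); Suspense shows the fallback until the data arrives over the wire.
"use client";
import { use } from "react";

export function Recommendations({ promise }: { promise: Promise<string[]> }) {
  const recs = use(promise); // suspends until the server's promise resolves
  return <ul>{recs.map((r) => <li key={r}>{r}</li>)}</ul>;
}
```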
> To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX)
You say it's worse UX, but that does not ring true to my experience, nor does it really make sense, as RSCs are additive, not prescriptive. The DX has some downsides, because it requires a more complex model to understand and adds overhead to bundling and development, but it gives you DX gains as well. It does not lead to worse UX unless you explicitly hold it wrong (true of any web technology).
I like RSCs because they unlock UX and DX (genuinely) not possible before. I have nothing to gain from holding this opinion, I'm busy building my business and various webapps.
It's worth noting that RSC is an entire architecture, not just server components. It comprises server components, client components, boundary serialization and typing, server actions, Suspense, and more. And these play very nicely with the newer async client features like transitions, useOptimistic, Activity, and so on.
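For example, here's roughly how useOptimistic composes with a server action (names like `Thread` and `sendMessage` are illustrative):

```tsx
"use client";
import { useOptimistic, useTransition } from "react";

type Message = { text: string; pending?: boolean };

export function Thread({ messages, sendMessage }: {
  messages: Message[];                          // canonical state from the server
  sendMessage: (text: string) => Promise<void>; // a server action passed as a prop
}) {
  const [isPending, startTransition] = useTransition();
  const [optimisticMessages, addOptimistic] = useOptimistic(
    messages,
    (current, text: string) => [...current, { text, pending: true }]
  );

  const send = (text: string) =>
    startTransition(async () => {
      addOptimistic(text);     // shows up immediately, marked pending
      await sendMessage(text); // real write; `messages` re-renders when it lands
    });

  return (
    <div>
      <ul>
        {optimisticMessages.map((m, i) => (
          <li key={i} style={{ opacity: m.pending ? 0.5 : 1 }}>{m.text}</li>
        ))}
      </ul>
      <button disabled={isPending} onClick={() => send("hello")}>Send</button>
    </div>
  );
}
```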
> Meanwhile, sync engines seem to actually solve these problems
Sync engines solve a different set of problems and come with their own nits and complexities. To say they avoid complexity is shallow, because syncing is inherently complex, and anyone who's worked with a sync engine has experienced those pains, modern engines or not. The newer React features for async client work help solve many of the UX problems around scheduling rendering and coordinating transitions.
I'm familiar with your work and I really respect what you've built. I notice you use Zero (a sync engine), but I could point to this Zero example as something with some poor UX that could be fixed with the new client features like transitions: https://ztunes.rocicorp.dev
These are not RSC-exclusive features, but they show how sync engines don't solve all the UX problems you say they do without coordinating work at the framework level. Happy to connect and walk you through what a better UX for this functionality would look like.
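To be concrete about the kind of fix I mean, here's a sketch of using a transition so the current view stays visible (dimmed) while the next one's data loads, instead of unmounting to an empty state; `loadAlbum` is a hypothetical loader:

```tsx
"use client";
import { useState, useTransition } from "react";

export function AlbumPicker({ loadAlbum }: {
  loadAlbum: (id: string) => Promise<string[]>; // hypothetical async loader
}) {
  const [tracks, setTracks] = useState<string[]>([]);
  const [isPending, startTransition] = useTransition();

  const pick = (id: string) =>
    startTransition(async () => {
      const next = await loadAlbum(id);
      // updates after an await must be wrapped again to stay in the transition
      startTransition(() => setTracks(next));
    });

  // The old list stays on screen (dimmed) until the new one is ready,
  // rather than flashing a spinner or an empty state.
  return (
    <div style={{ opacity: isPending ? 0.6 : 1 }}>
      <button onClick={() => pick("ok-computer")}>OK Computer</button>
      <ul>{tracks.map((t) => <li key={t}>{t}</li>)}</ul>
    </div>
  );
}
```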
Many of the features like transitions, and all the new concepts, are workarounds you just don't really need when your data is mostly local and optimistically mutated. And the ztunes app is a tiny demo, but of course you could easily server-render it and add transitions and all sorts of things to make it a demo more comparable to what I assume you think are its downsides vs RSC.
I think time will show that RSC was a bad idea, like Redux, which I also predicted would not stand the test of time: interesting in theory, but too verbose and cumbersome in practice, and other ways of doing things have too many advantages.
The problems they solve overlap more than enough, and once you have a sync engine giving you optimistic mutations for free, free local caching, and free realtime sync, you look at what RSC gives you above SSR and there's really no way to justify the immense conceptual burden and concrete downsides (like now having two worlds / essentially function coloring, forced server trips / lack of routing control). I just bet it won't win. Though given the immense investment by two huge companies, it may take a while for that to become clear.
People bemoan the lack of native development, but the consuming public (and the devs trying to serve them) really just want to be able to do things consistently across phones, laptops, and other computing devices, and the web is the most consistent and battle-tested thing available.
The difficulty is finding designers who understand web fundamentals.
Also keep in mind that the web-standards puzzle is itself changing all the time to try to make the pieces fit better, while developers design abstractions to catch up.
That's how you get the XMLHttpRequest -> Ajax -> axios -> fetch and history.replaceState situation.
In general, SPAs have pushed the web toward a less archive-friendly place. And PWA != SPA.
The history of making HTTP requests in the browser spans only two APIs: XMLHttpRequest -> fetch. Fetch is an upgraded version of XMLHttpRequest, with a promise-based API and better async/streaming support.
Ajax was a word used to describe the technique of making http requests in the browser, and axios is a third party library that wraps different APIs on different platforms to provide a unified interface. These were never separate browser APIs.
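The same request in both, for anyone who hasn't had to write the old one:

```ts
// Old: XMLHttpRequest -- callback-based, no promises.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/users");
xhr.onload = () => console.log(JSON.parse(xhr.responseText));
xhr.onerror = () => console.error("request failed");
xhr.send();

// New: fetch -- promise-based, with streaming bodies and async/await support.
// (Top-level await works in module scripts.)
const res = await fetch("/api/users");
if (!res.ok) throw new Error(`HTTP ${res.status}`);
console.log(await res.json());
```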
You have a point but you're giving Svelte unfair criticism here.
Well, short answer is that it's been in the "figure out what works" phase for many years now. The developer experience has improved a lot over the years, but it's at the expense of constant breaking changes and dependency hell if you want to upgrade existing code.
I still prefer Svelte, but it's less mature and less universally known. React is still a pretty good choice if you need something that will more or less work and that anyone can write.
I know Svelte/SvelteKit and would want to contribute to Svelte apps (a good reminder that I should).
But there are some projects that I really, really want to contribute to (heck, even port to SvelteKit), like Cinny and HedgeDoc and the like, so I almost feel pressured by the ecosystem to learn React so that I can finally feel confident enough to understand their repositories; they scare me right now...
Cinny: https://github.com/cinnyapp/cinny
HedgeDoc: https://github.com/hedgedoc/hedgedoc
I don’t even want to talk to somebody that defines themselves as a “React” developer. Let alone work with them.
If you know any of these frameworks and the basics of web development, you can work in any of them.
So the hiring part is not an excuse for this.
Isn't it mainly about playing nice with crawlers? SEO and the like?
(that was my understanding but I'm a backend dev).
Honestly, except for the marketing page and blogs and stuff, most apps are fine without server rendering. In fact, I'd say many apps that avoid server rendering actually feel better, simply because Next.js makes it really easy to screw up your performance doing it.
I see this happening in finance data sites. Say a page about Apple has the stock price, etc.; when logged in, it's the same stuff but with 10x the data, so the layout and everything is different.
That being said, I'm waiting backstage, like many other folks, for TanStack to get production-ready, because of all the weird crap being pulled by Vercel on Next.js.
Lots of newcomers are struggling to understand what the options are and which approach is best for their case.
Business people don't help, as they rightfully don't care. But they want a "do everything", "pay once" approach, so people bolt static pages onto apps, or the other way around.
Example: I realized this logistics SPA I was building could just be single pages with some data for most of the stuff (tracking, inventory, etc.), but for admins they wanted a whole dashboard. This was conditional on some value of the stored session user. So it ended up being kind of a website for parts of it, and an SPA admin panel if the user matched certain privileges. Probably should have been separate stacks, but they used the same data, so early on it was made a single Next app.
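The shape of it was roughly this (a sketch with made-up names, in the Next-style server-component idiom):

```tsx
// page.tsx -- one route, two very different experiences based on the session.
import { getSession } from "./auth";               // hypothetical session helper
import { AdminDashboard } from "./AdminDashboard"; // heavy, SPA-style client code
import { TrackingView } from "./TrackingView";     // light, mostly static markup

export default async function Page() {
  const session = await getSession();
  // Same data underneath, but admins get the full dashboard while
  // everyone else gets the simple, website-like view.
  return session.role === "admin" ? <AdminDashboard /> : <TrackingView />;
}
```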
I don't think the whole website vs app thing is always as simple as static blog pages vs full fledged JS-heavy app. There is a spectrum and overlap even within a single "application" because of various requirements.
Your last sentence is the most accurate. I don't think it's primarily ignorance; it's just trying to meet all the requirements while retaining some level of organization in the codebase.
What are the reasons for not doing SSR?
My default is a small page where the client then fetches any additional data it needs. If it has a long load time, I give it a skeleton UI. I also have not seen the SEO benefits at all.
So again, _why would I_, unless I needed to do stuff on the server to build the client bundle, which I don't.
A lot of these YC companies doing this could literally just be using a fetch because their backend is dead simple REST.
EDIT: if it's pure (not reactive to any other variable, though other variables may react to it), they will auto-memoize, I guess, to avoid their own reactivity engine doing a bunch of useless shit when no value was actually updated. Correct me here if I am wrong.
You have to opt in to prop-diffing conditional re-renders (which I wouldn't call a "reactivity engine" either) per component, via React.memo.
And then you also have to opt in to prop memoization for non-primitives, or the prop diffing is useless.
These re-renders might not result in any actual DOM changes, since there is an additional vDOM diffing step on the resulting render (which, again, I wouldn't call a "reactivity engine").
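Spelled out, the two opt-ins look like this (component names are illustrative):

```tsx
import { memo, useCallback, useState } from "react";

// Opt-in #1: memo() adds the per-component prop diff, so Row is skipped
// when its props are shallow-equal to the previous render's.
const Row = memo(function Row({ label, onSelect }: {
  label: string;
  onSelect: () => void;
}) {
  return <li onClick={onSelect}>{label}</li>;
});

export function List() {
  const [selected, setSelected] = useState("none");

  // Opt-in #2: without useCallback, a fresh function is created on every
  // render of List, so memo's shallow diff always fails and opt-in #1
  // buys you nothing.
  const onSelect = useCallback(() => setSelected("a"), []);

  return (
    <div>
      <p>Selected: {selected}</p>
      <ul><Row label="a" onSelect={onSelect} /></ul>
    </div>
  );
}
```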
I think it's worthwhile to compare the two. I'm sure one of the major contributors to React's slowness is the crazy number of objects it generates, triggering GCs: desugared JSX code turns into building up the virtual DOM with React.createElement.
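Concretely, with the classic JSX transform (`title` and `onBuy` are placeholders), every render allocates this whole object tree again:

```tsx
import React from "react";

const title = "Album";
const onBuy = () => {};

// What you write:
const view = (
  <div className="card">
    <h2>{title}</h2>
    <button onClick={onBuy}>Buy</button>
  </div>
);

// Roughly what the classic transform compiles it to -- a fresh tree of
// element objects on every render, all destined for the GC:
const desugared = React.createElement(
  "div",
  { className: "card" },
  React.createElement("h2", null, title),
  React.createElement("button", { onClick: onBuy }, "Buy")
);
```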
There's no reason they couldn't have done what imgui did, where each createElement is a plain function call that doesn't need to allocate. Considering the raw JS API is not something most devs use, most folks wouldn't mind if it got replaced.
State management is another issue: considering React has had a compiler (for JSX) since the outset, not using it from the beginning to do a little data-flow analysis and figure out which piece of state affects which UI elements is just criminal, and it further exacerbates the waste in other parts of the framework by running already wasteful code more often than it needs to.
Tbf, they already fixed the compiler issue with the latest React, but only as a reaction to getting destroyed by Svelte on the performance front.
Just goes to show that if you don't have competition, there's no guarantee positive change will happen on any time scale, no matter the resources you throw at the problem.
I’ve also used Next for new projects in the last year - it just depends on the infra requirements.
Vercel’s position in the ecosystem is one we should question. Maybe it’s not good for innovation to use Next for every new project. The recent controversy with their CEO isn’t helping the situation either.
Made me just give up on web development.
I think React has an at least somewhat reasonable track record in terms of backwards compatibility? Still not perfect but much better than all the other frameworks.
That idea actually turned out to work well, so others adopted it. Meanwhile, in the web ecosystem, Elm is basically no more, and React has changed enough that it's barely recognizable anymore.
And Svelte was pretty new at that time, so it would make sense that it was still figuring stuff out, I think.
Though Angular has gone through multiple concepts in the time between version 11 (as used in this article) and the current version 20, and signals and zoneless change detection in particular would have massively impacted performance.
Yes, very. Perfect design upfront can eliminate the need to change it later, but you never get it perfect, so you continue to "figure it out" for many years with many failed attempts in the process.
Vue 3.4 (2023) rewrote its template parser to be 2x as fast as well.
I am probably just not smart enough to get it, but it reminds me of the constant, seemingly pointless rewrites I see in companies. Figure out what works and keep it; is that so hard? Why can other languages do that? Is this just the nature of web dev?