It is no secret that the past 2 years have seen the beginnings of a fairly dramatic change in frontend web technology. I write about these topics regularly, but as they enter the more mainstream vernacular I've found it has become harder and harder to understand what these technologies are and to differentiate when they are useful.
At the heart of the discussion is the topic of Hydration: the process by which a server-rendered website becomes interactive in the browser. But even that is somewhat vaguely understood. What does it mean for an application to become interactive?
And Hydration is about more than the amount of JavaScript we ship or execute. It impacts what data we need to serialize and send to the browser. This is not a simple area to build solutions for, and it is little surprise that explaining them is equally challenging.
Why Efficient Hydration in JavaScript Frameworks is so Challenging
Ryan Carniato for This is Learning · Feb 3 '22
Now that more solutions have shipped I think it is time to revisit the 3 most promising approaches to this space.
When does a Site become Interactive?
But first I think we need to start here. For such a seemingly simple question the answer isn't so straightforward. There is a reason browser performance experts, like the Chrome team, have gone through several iterations on how best to capture this. TTI (Time to Interactive), FID (First Input Delay), and now INP (Interaction to Next Paint) each serve as a way to understand how responsive our websites are.
Looking at the framework space there has been a lot of talk about Progressive Enhancement, i.e., having elements work even if the JavaScript is not available (or not available yet). Is a site considered interactive if clicking a button works in the sense it does a slower full-page navigation (server round trip) where it otherwise would have done stuff in the browser only?
How about if events are captured and then replayed during hydration or even used to prioritize what gets hydrated first as in the case of React 18's Selective Hydration? If the browser doesn't miss any end-user events but just doesn't respond right away because it is loading code, is that considered interactive?
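To make that concrete, here is a rough sketch of the capture-and-replay idea. This is not React's actual Selective Hydration code, just an illustration of the technique with made-up names:

```ts
// Queue clicks at the document root before the framework code has loaded,
// then re-dispatch them once hydration has attached the real handlers.
const queuedTargets: EventTarget[] = [];

const capture = (e: Event) => {
  if (e.target) queuedTargets.push(e.target);
};
document.addEventListener("click", capture, { capture: true });

export function replayAfterHydration() {
  document.removeEventListener("click", capture, { capture: true });
  for (const target of queuedTargets) {
    // The user doesn't have to click again; we replay what they already did.
    target.dispatchEvent(new MouseEvent("click", { bubbles: true }));
  }
  queuedTargets.length = 0;
}
```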
The fact that these sorts of techniques are everywhere at this point is why, at least to me, being interactive can't only mean the ability to capture the cause; it must also account for the time it takes to witness the expected effect. How to measure that reasonably I will leave to the browser teams, but it should give us goalposts for our exploration.
Islands
The thing to love about Islands is they start so simple. If you have too much JavaScript, divide and conquer. The earliest days of client-side rendering involved embedding interactive widgets in server-rendered applications. Even things like Web Components have made this pretty easy to do over the years.
Well, except for one problem. These widgets were client-rendered so they came from the server blank. This could cause layout shifts and a delay in primary content showing. Islands in their basic form are just server rendering these pieces as well.
Simple, but it meant running JavaScript on the server to render, which is why, outside of Marko (2014), we did not see much exploration here until the more common server-rendered SPA (Single Page App) had proven full-stack JavaScript was viable. Not until 2021, with frameworks like Astro and Fresh, did we see a return to this.
There are some significant differences between Islands and its SPA counterparts (like Next, Nuxt, SvelteKit, and Remix). These Islands frameworks skip sending JavaScript for the root of the application. It isn't until you hit an interactive component that JavaScript is needed. This can drastically shrink bundle sizes.
| Page | Full Page | Islands | Reduction |
|---|---|---|---|
| Home | 439kb | 72kb | 84% |
| Search | 504kb | 110kb | 72% |
| View Item | 532kb | 211kb | 60% |
Comparison done by Marko team on eBay.com
Islands can also shrink HTML document size as they only need to serialize the data passed as Island props instead of all the data. That blob of JSON in a script tag we are accustomed to seeing at the bottom of the server-rendered HTML can disappear when we use Islands! On data-heavy pages, I've seen it cut the page size in half.
Hackernews story page done in SolidStart with SPA SSR and Islands
How is that possible? Server-rendered children can be passed through the Islands without being hydrated themselves.
In the case above, where no state is passed to our ToggleVisibleIsland, those comments never need to be sent to the client.
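As a sketch of what such an Island might look like (Solid-style TSX; this is illustrative, not the exact code from the screenshot above):

```tsx
// Illustrative Island: only the toggle state and its handler ship to the
// client. The comments passed in as children were already rendered on the
// server and are never serialized as props or hydrated themselves.
import { createSignal, Show, type JSX } from "solid-js";

export default function ToggleVisibleIsland(props: { children: JSX.Element }) {
  const [visible, setVisible] = createSignal(true);

  return (
    <>
      <button onClick={() => setVisible((v) => !v)}>
        {visible() ? "Hide" : "Show"} comments
      </button>
      {/* Server-rendered children pass straight through the Island */}
      <Show when={visible()}>{props.children}</Show>
    </>
  );
}
```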
It does mean, though, that any content passed through will be rendered eagerly, even if it isn't ultimately shown, on the chance that the Island logic could display it later. So we only solve the "double data" problem if this content is rendered exactly once, whether in the DOM or as a serialized prop/slot. Not both.
The most important difference is that Island-architected applications are MPAs (Multi-Page Apps). The optimization is based on knowing that the code for non-interactive parts is never needed in the browser. Never rendered in the client. This is something a SPA router could never guarantee.
Server Components
But what if we do want client routing? How much does that change the picture?
Adding client routing with server-rendered HTML doesn't change much on the surface. Solutions like Turbo or Flamethrower have been adding that bit of smoothness to MPAs for a while. We've recently seen these sorts of techniques combined with the View Transition API to great effect.
But an MPA with client-side routing doesn't suddenly give you all the benefits of a SPA. Most notably in element and state persistence. On the surface, this might also seem straightforward but it is not.
The first thing you might do is mark elements as being persistent, and then, when you swap in your new markup, put the existing elements back in wherever an ID matches. But since the elements are temporarily removed, this can lose DOM state like input focus when persisting. You could diff instead, and in doing so only replace what has changed, and that might be sufficient.
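A naive sketch of that first approach (this is not any framework's actual implementation; the data-persist attribute and helper are made up for illustration):

```ts
// Naive persistence during an MPA-style swap: pull out marked elements,
// replace the page's markup, then splice the old elements back in by id.
function swapWithPersistence(container: HTMLElement, nextHTML: string) {
  const persisted = new Map<string, Element>();
  container.querySelectorAll("[data-persist][id]").forEach((el) => {
    persisted.set(el.id, el);
  });

  container.innerHTML = nextHTML;

  persisted.forEach((el, id) => {
    const placeholder = container.querySelector(`#${CSS.escape(id)}`);
    // Because the old element was detached above, focus/selection inside it
    // is already gone by the time we put it back -- the exact problem noted.
    placeholder?.replaceWith(el);
  });
}
```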
Another consideration is global state in the client. Pretend you have a global counter that impacts how certain Islands render. If you load one page and increment it to 10, then navigate and render the next page on the server, the server will not know that the counter is 10 and will render it as if it were 0. This could lead to hydration mismatches and break the application.
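Concretely, something like this illustrative sketch (a Solid-style module-level signal; the counter and badge are hypothetical):

```tsx
import { createSignal } from "solid-js";

// A module-level signal shared by every Island on the page (illustrative).
export const [count, setCount] = createSignal(0);

// An Island that renders differently once the counter passes a threshold.
export function CounterBadge() {
  return <span>{count() >= 10 ? "VIP" : "Visitor"}</span>;
}

// In the browser the user clicks their way up to count() === 10, then
// navigates. The server rendering the next page starts from a fresh 0 and
// sends back "Visitor" HTML, while the client expects "VIP": a mismatch.
```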
Unless you want to send all the global state back and forth between requests (and you really, really don't), we can't ever render Islands on the server after the first page load if we want to ensure things won't break when global state is involved.
This detail isn't important just for navigation. Any lazily inserted content prop/slot can no longer guarantee hydration will work if global state has changed since it was originally server-rendered. This adds complexity to the logic that absorbs rendered templates to avoid double data, since the Islands and static templates need to be separated at runtime.
Instead of wrestling with that, React Server Components invented their own serialization format and didn't bother solving the "double data" problem. That said, it is the only non-experimental solution I know of today that properly handles state persistence.
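Conceptually, and this is only a conceptual illustration rather than the real RSC wire format, the server streams a description of the tree in which client components show up as module references plus their serialized props:

```ts
// Conceptual shape only -- NOT React's real serialization format.
// Server-rendered parts become plain tree data; client components become
// references the browser resolves to code chunks and hydrates in place.
const serializedTree = {
  type: "article",
  props: {
    children: [
      { type: "h1", props: { children: "Story comments" } },
      {
        // A reference to client code plus the props it needs -- only this
        // subtree ships JavaScript and gets hydrated.
        $$clientReference: "./ToggleVisible.js#default",
        props: { initiallyVisible: true },
      },
    ],
  },
};
```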
So Server Component architecture can be seen as Islands + Client Routing, but it involves more than tacking a client router or even View Transitions onto an MPA. And so it deserves its own category when looking at how we build partially hydrated solutions.
Resumability
I love resumability because it does come out of left field compared to a lot of the other research that has been going on over the past decade. Instead of looking at how to reduce the amount of code/hydration, it looks at changing what code executes.
The Partially Hydrated solutions above can in some cases reduce code footprints by up to 80-90%, but they still treat that last bit very similarly to everything we've seen before. What if we didn't execute any code on the client until we needed to? What if hydration returned to just attaching event handlers?
To do that we'd need to serialize not just the application state, but the internal state of the framework, so that when any part is executed it can continue where it left off. When an event handler updates some state we just propagate that change without ever running the components a first time in the browser to initialize them. After all, we already initialized them when we rendered on the server.
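A minimal sketch of that idea, assuming handlers are serialized into HTML attributes. The attribute name and chunk URLs are invented for illustration and are not any framework's actual API:

```ts
// Global listener installed by a tiny bootstrap script. No component code
// runs up front; the handler's code is fetched only when the user interacts.
document.addEventListener("click", async (event) => {
  const target = event.target as Element | null;
  const el = target?.closest("[data-on-click]");
  if (!el) return;

  // e.g. <button data-on-click="./counter-chunk.js#increment">+1</button>
  const [chunkUrl, exportName] = el.getAttribute("data-on-click")!.split("#");

  // Resume: load just this handler and hand it the event. The serialized
  // framework state it needs would be looked up in the same lazy way.
  const mod = await import(chunkUrl);
  mod[exportName](event, el);
});
```

Frameworks that implement resumability pair this kind of lazy handler resolution with serialized reactive state so the handler can pick up exactly where the server left off.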
This is not easy to accomplish given the way we close over state when we write components, but it is solvable:
Resumability, WTF?
Ryan Carniato for This is Learning · Aug 23 '22
It also opens up more interesting patterns for lazy code loading since it doesn't need to be immediately present for hydration. However, if interactivity is as defined above, you don't want to be lazy loading anything critical because we still have to wait for it. Maybe just expensive things or things offscreen. In the basic case, a pretty similar heuristic to how you would choose to lazy load for any client-side architecture.
Of course, serializing everything could be pretty costly, not unlike the "double data" problem. So we would need a way to determine what can never change in the client. To do that, resumable solutions tend to use Signals-based reactivity, often augmented by compilation. By tying updates to the data rather than the view hierarchy, components are no longer the unit of code that needs to run. Better still, dead code can be tree-shaken along the reactive graph of data dependencies.
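To get a feel for why that matters, here is a small sketch using Solid-style signals purely for illustration (the element ids are assumed to exist in the server-rendered HTML):

```ts
import { createRoot, createSignal, createEffect } from "solid-js";

createRoot(() => {
  const [count, setCount] = createSignal(0);

  // The subscription is tied to the data, not to a component. Setting the
  // signal re-runs only this effect; no component function re-executes.
  createEffect(() => {
    // assumes <span id="count"> exists in the server HTML
    document.querySelector("#count")!.textContent = String(count());
  });

  // assumes <button id="inc"> exists in the server HTML
  document
    .querySelector("#inc")!
    .addEventListener("click", () => setCount((c) => c + 1));
});
```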
Done well that seems pretty good. Once you enter this zone, it is easier to automate the split between client and server. But that alone doesn't solve problems like client-side routing.
Resumability's knowledge is still based on knowing what will always be on the server from an MPA standpoint. Unlike Islands, which are explicit, with an automatic system any descendant of a stateful conditional in the rendering has the potential to end up in the browser.
If one added client-side routing (a stateful decision high in the tree) a resumable solution on its own would load the same code on navigation as an SPA and require all the serialized data client side to render it.
Conclusion
So I guess high level:
- Islands are an architecture that aims to reduce JavaScript footprint by up to ~90% by explicitly denoting what goes to the client.
- Server Components architecture extends Islands with client-side routing and proper state preservation.
- Resumability, instead of focusing on reducing the amount that is hydrated, looks to remove the execution cost of hydration itself.
So while often seen as competing, these are actually complementary. They don't all solve the same issue completely but each focuses on a certain part of the problem.
Islands have gotten incredibly good at solving for code and data serialization size. Server Component solutions today are the only Island-like solutions that properly account for state while client navigating. Resumability is the only approach that reduces the execution cost of the hydration that remains.
Whether these all converge is another question. Do Islands want the added complexity of Server Components? Will Server Components care about the last stage optimizations that come from Resumability? Will Resumable Solutions ever embrace explicitly calling out which parts of the view render in different locations?
I'm not sure. There is still a lot of room to explore. And honestly, it is still unclear to what extent these concerns impact final site performance or ideal developer experience. But it is an exciting time to be in web development as the future unfolds.
Top comments (31)
I've been watching this for a very long time now! The World Wide Web, as it was initially created, has some serious design issues. Many concepts were logical and useful at a time when people were using acoustic couplers and 56k modems, but the concepts have never been really powerful. Identifiers in HTML and CSS always have a global scope, limiting their use to small projects. The whole system was intended to present scientific documents over the newly invented "internet", but not for what it is used today.
But times have changed. People watch Netflix on their mobile phones, so bandwidth should not be our greatest issue. And browsers are incredibly fast, even on a Raspberry Pi. So, what's the problem?
We are still using HTML like Tim Berners-Lee did. Libraries like React or Tailwind are just solutions to problems that should not exist. With every new tool and solution, the complexity rises, making the whole system less usable. Now we have partial hydration and islands; it will not take long until we see a solution for the problems that are caused by these technologies.
For me, there are only two possible ways to go:
a) run your app on the server and use HTML and CSS only as a vehicle. This is the way WordPress works. It widely ignores the web standards and creates a world of its own.
b) run your app in the browser and play on the DOM directly without any HTML. This is the way libraries like DML or VanJS go; even Svelte goes that way under the hood. This approach is fast, as long as you do not need to pull in a big pack of tools before you start rendering your page. VanJS provides all you need to build reactive apps with an overhead of less than 1 KB. There is absolutely no need for hydration. You should try measuring the rendering time of the VanJS homepage; I suppose it will be far less than 1 ms. Even pages created with DML, which is not optimized for size in the current version, feel more responsive than many other pages.
I'm sure that we could get much better results if we did not waste more time using technologies from the past. JavaScript and the HTML DOM API provide enough power that we could completely skip HTML and do something better. VanJS shows that it is possible to build a perfectly usable system, and I'm sure we could build even more efficient tools that do not bloat the whole system. But this should be built on a proper foundation, not on concepts from the past.
It's nice that that's your experience but it's far from universal.
The Performance Inequality Gap, 2023
Also related: Now THAT'S What I Call Service Worker!, in which Jeremy Wagner relates how he had to carefully consider the constraints of the web solution for one of his clients in order to maximize the reach to (potential) customers.
And people make fun of some of the arcane technologies that some of the tech giants are (still) using.
But what is overlooked is that some of the developers there are so well compensated because developer experience isn't a primary objective for the products they work on.
What is prioritized is reaching every last customer and minimizing the probability of having them balk just because the wireless network is throwing a tantrum because some bad weather is moving in.
Not many Raspberry Pis run on a battery while powering a wireless radio.
"Mobile reset everything"
Nobody is disputing that CSR can be pretty fast under ideal conditions. The issue is that conditions aren't ideal everywhere and every time, not even in most places and most of the time.
So partial hydration, islands, server components and resumability are solutions that help claw back some of the edge held by corporations with expensive engineering teams, who can approach the ideal set by the instantaneously served static page (browsers process HTML/CSS much faster than JS).
Like what? It took the web 34 years to get where it is now. Sure, there is some historical baggage, most notably the single, fat-core mindset of JS (but the same could be said of Object-Oriented Programming). Wireless networking will always be constrained by the laws of physics and client-server connections will always be subject to the Fallacies of Distributed Computing.
5G may only benefit a minority of users but could be further from its ideal than previous technologies; it has a shorter range and is more easily blocked (e.g. glass and foliage).
You can have an LTE connection and still only get 10-100kbps (spec is 100Mbps; Netflix recommends 3Mbps for SD/480p and 5Mbps for HD/720p).
WordPress is not the gold standard for HTML/CSS; an immediately served static page is. While WordPress can generate static pages, certain performance aspects are traded off for affordances for CMS users.
And right there you are leaving a lot of performance on the table. Browsers can turn HTML into DOM a lot faster than JS can create it from scratch. And more to the point, they will usually do so in a separate thread (perhaps on a different core). So not only is there less JS to run (initially) but more of everything happens in parallel.
You seem to perceive a very narrow slice of the web's value. It serves a whole spectrum of application holotypes all of which have very different needs and benefit from very different capabilities.
This spectrum starts at one end that aims to maximize content density per network payload and runs to the other end with use cases like Figma.
Great, absolutely great response, plus sharing references is absolutely incredible.
I am very interested in seeing if the hybrid has legs. I agree the platform is getting used beyond its original intention and people are working on new standards. But yeah, backwards compatibility makes the Web great and, well... your image. With SolidJS we've built arguably the smallest (when you include component code size) and fastest client framework, but I think there is more here if we keep going.
SolidJS is a UI framework, and maybe it solves the problems of a UI designer in the most brilliant way. But what about the rest of an application? Do the tools and methods SolidJS provides help us build better applications? I think the whole picture should contain more than HTML and CSS.
There are different application holotypes, and I'm pretty sure even static HTML will have a place on the web for a long time. But people are starting to bring more and more business logic to the client side (see serverless applications), just because they can. Or because it makes their task easier, their life simpler, or whatever. There may be many good reasons.
The "hybrids" I have seen have been running pretty fast, at least fast enough for me and my clients, even on very slow connections. They did not need to load bulky frameworks; often all they needed to fetch was 30-50 kB of JavaScript on initial page load, no HTML and only some small CSS files. Sure, this is not the whole application, but loading other parts of the code on demand is not a miracle today.
If you want to build your own "Google Docs", you would not want your client to wait until the application is completely loaded. So you have to think about how to overcome the bottlenecks. But this is not a question of the UI only... it's a question of good software design.
We have seen some improvements in transportation over the last years, and I suppose a Tesla will not run on oats either...
If you're going to bring up Tesla the base of comparison would be:
Not much running on oats there (and really not that different from a user perspective).
Let's not forget that Tesla has been promising fully self-driving cars "by next year" since 2013.
Svelte is in a really good place, so l won't talk about that.
Maybe VanJS in its current form does extremely well in terms of framework size and performance (there's not enough data to give their approach the crown just yet).
However, when we make web apps, as developers it's about more than just performance. We look for a great DX and we usually pick tools with reliably great, scalable capabilities (in the sense that you can make a semi-complex, advanced app, not a toy hello-world or counter demo), because having to rewrite an app after choosing the wrong tool at the beginning of the dev process is really painful.
I'm going to keep an eye on VanJS, but until someone or some team builds a production semi-to-advanced app and tells the world with a straight face that they are committed to picking VanJS over other tools, I will not use it again.
What I take from your comment is: what if we reinvent the engine, like VanJS or DML do? But I think it needs to feel more natural and less weird; just let the dev think they're writing HTML and doing things the normal way, even though you've reinvented all these elements and aren't running on the DOM. Or should we say the DOM sucks?
I think there are quite a lot of projects doing just this. Check out HTMX or Marko if you want a "pure" HTML experience. Or one of the React-like frameworks; there are hundreds of them (try searching for "JS framework" on GitHub, it is amazing!). Just... nobody seems to be really happy with the result. Otherwise there would be no reason to invent a new framework every half year.
It is interesting to see what happens if you do not use HTML at all. You get rid of 80% of the problems all these frameworks have been invented to solve. Let me give an example:
You want responsive designs, so CSS was extended with a bunch of new operators to enable just this. Did this fix the problem? Obviously not; there is a bunch of CSS frameworks addressing the same topic. If you use JavaScript to build your DOM, you do not need any tool at all; you just create your design differently depending on the device properties. There is no magic behind it; it is just the difference between a programming language and a markup language.
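Something along these lines, purely for illustration (the breakpoint and elements are arbitrary):

```ts
// Responsiveness as ordinary control flow when the DOM is built in JS.
const narrow = window.matchMedia("(max-width: 600px)").matches;

// On small screens build a dropdown; otherwise build a nav bar of links.
const nav = document.createElement(narrow ? "select" : "nav");
["Home", "Search", "About"].forEach((label) => {
  const item = document.createElement(narrow ? "option" : "a");
  item.textContent = label;
  nav.append(item);
});
document.body.append(nav);
```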
What about bundle size?
VanJS and DML do not reinvent anything; they just use what is already there. So they just provide a small API to make things more accessible. JavaScript and the HTML DOM API are powerful enough to take care of the rest. In general, we found that bundle sizes are much smaller if you use JS to build the DOM.
I'm not sure why we should "just trick the dev to think they're writing HTML". Are they too old to learn something new? The DOM does not suck at all; it is a well-designed UI machine with some really powerful APIs behind it. It is a real pleasure to use the DOM directly. But maybe it is worth thinking about a better way to organize the code that plays on the DOM...
I'm not sure if you don't know, don't understand or don't care why HTML exists.
Maybe the day will come when using it as a standard won't make sense. Until then the benefits of building on top or with HTML outweigh the cons of straight up ignoring it.
I really find the VanJS and DML code so weird, personally; I can't use such spaghetti despite its efficiency. So it's not that devs can't learn something new; do you know why JSX became so successful? Anyway, I think the idea of building something stronger might be explored further. Sass did it for CSS; maybe a better DOM is needed, or a web fabric. At this point most other patterns, like code arrangements and optimizations, have been almost exhausted!
I would love to have a side-by-side comparison for some real-life cases to showcase the differences and the final performance. Things are far less weird than they might look, beside the fact that people are not used to them. Especially with DML, the page build is not that different from what you are used to in HTML; it is just part of the code, not separate.
But the main differences are hidden below the surface:
HTML, CSS and JS are all executed in a different context, so you always need some glue to connect the pieces. Using IDs and too many class names pollutes your namespace and is one source of trouble you get with larger projects. The same is true if you need to address your DOM elements from within JavaScript.
DOM elements built with HTML (or even JSX) are more or less static. If you need to control the way a page is built, this gets really tricky. React and others have invented a bunch of new tools to achieve "dynamic page generation", but most of these are special-purpose tools. What if you want to build your page differently on days starting with the letter "M"?
Embedding the build process into Javascript removes these issues without new and special tools:
a) As DOM creation is done from within Javascript, you can control the process with simple "if - then - else" programming. So, many of the complicated questions we get with responsive page design can be solved easily.
b) DOM elements created with the HTML DOM API just return a reference that you can use directly. So, it is most natural to write something like this:
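```ts
// Illustrative sketch of plain DOM-API usage (not the comment's original
// snippet): the created element is just a local reference you can wire up
// directly, no ids or selectors needed.
const button = document.createElement("button");
button.textContent = "Click me";
button.onclick = () => {
  button.textContent = "Thanks!";
};
document.body.append(button);
```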
Most interactions are that boringly simple, but the biggest advantage is that DOM references can be held in locally scoped variables. This allows you to build something like a "web component" that does not have any side effects, using simple JavaScript functions.
From what we see from the first "commercial" projects done with DML, the overhead of the "framework" can be pretty small, so we get really small bundle sizes. This also solves the issues of high latency, using only what the browsers already provide. And all the interactions performed on the DOM are fast anyway.
Building apps that run only in the browser will not serve all types of applications. But it is an appealing, fresh option for small and medium-sized projects. From what I can see today, these approaches are not "special purpose". OK, working with these tools involves a bit more "programming" than what the average dev might do. But you are not limited to a certain topic; these tools are as universal as any programming language.
"a bit more "programming" than what the average dev might... " well that's the whole point, we all use these languages and tools because they offer a more better dx and simpler nicer abstractions if you forget that, it's hard to make a tool devs will endup adopting, not because they can't use it, but are naturally lazy and get lazier everyday, I have a mini framework am working on personally, and thus I think you might want to reconsider the design of syntax therein, otherwise nice, we all trying to make this web wrecked ship stand still.
@efpage seems to think performance is the only important thing about frameworks and web development in general. Out of all the comments he has made on this article he has not mentioned other equally, if not more, important factors like SEO, DX and capabilities (in the sense of how complex a web app you can make before feeling like you are shooting yourself in the foot).
You're very right. Yet at the end of the day, for the average dev or for average applications, performance is not much of a thing; "is it easy to use?" comes first. Meanwhile, I'm making this, if anyone wants to contribute: github.com/javaScriptKampala/z-js
For some reason I'm really, really having trouble understanding this question :(
Is a site considered interactive if clicking a button works in the sense it does a slower full-page navigation (server round trip) where it otherwise would have done stuff in the browser only?
According to INP, an interaction is a click, tap, or keyboard input.
However, the measured latency only runs from that input until the next frame is painted.
So a no-JS page navigating to the next page but delayed by an abysmal server latency could hypothetically have a perfect INP score, while a page blocking its next frame because of a high-latency fragment/API server could have a poor INP score; both provide a similar, terrible user experience.
Hmm, feels like it's a metric designed to favor MPAs.
INP is only one of the Core Web Vitals.
But like it or not, serving a static page with minimum latency even under the worst network conditions is the gold standard. Local-first is all well and good but it has to fit the use case where the users are willing to accept the inevitable drawbacks.
Progressive Enhancement means authoring all interactions back to anchor links or form posts, which means that they work without JavaScript loaded. However, if JavaScript is expected for the experience, do you consider this interaction, before JavaScript has loaded, as being interactive?
Picture you have a collapsible summary section on an item for sale. If you click the collapse button before the JS loads in a Progressively Enhanced application it would reload the whole page to render it with the section collapsed.
Whereas with JS you'd just hide the section in the client without even going back to the server. The difference when clicking early on a slow network is substantial, as now you have to wait for the page to load again; and since on a slow network it is most likely that the JS hasn't loaded, it really is insult to injury. If there is no visual affordance, it's hard to say that these experiences are equivalent even though to the user they appear to be.
I'm not speaking against Progressive Enhancement. I think it is good for graceful degradation, so that the site works when things do go wrong. But it's hard to consider it as part of your load time metrics when the result can be so much worse.
This too
Will Resumable Solutions ever embrace explicitly calling out which parts of the view render in different locations?
Try to explain this to your grandma...
Great article!
Framing server components as "Islands + SPA navigation" is what helped me quickly wrap my head around the architecture (my reflection of this). However, I've not delved under the hood and it's interesting to hear that RSC invented their own serialization format.
I'm curious what you think of Astro's View Transitions API that seems to have island state persistence on navigation and if that's enough to label Astro a Server Component framework, rather than an Islands framework. I know SolidStart had been experimenting with Server Components long before Astro introduced this, but now with the SolidStart/Astro collaboration, will SolidStart be leveraging this new API under the hood?
Also, what are your thoughts on framework agnostic islands like
is-land
from the 11ty team? I've only just started playing with it, but it very much feels like Astro islands that you can drop in anywhere since it's just a web component.

I was trying to say it wasn't in the Server Component section. Astro can persist elements, but at least the solution today detaches and replaces elements, which can have side effects, and Astro hasn't solved for global state. So a large part of why I wrote this article is to emphasize that even Islands with View Transitions as we see today aren't equivalent to Server Components. We learned that through experimentation on SolidStart, where we had very similar Islands persistence and then realized the shortcomings.
The Astro collaboration was just for deployment. Everything to do with rendering was handled by Solid. Although we are looking at Nitro now as our underlying layer. Check out my upcoming Vite talk for more details.
As for is-land and 11ty, I mean Web Components make decent Islands... but coordinating state really is where the problem is heading if we want to make more application-like experiences out of this, which is why Server Components/Resumability is a natural extension. I care about the App side more than the Site side, but any Islands solution is perfectly good for sites.
Nitro is now under consideration.
On the topic of the Astro view transitions stuff, would you say Astro apps that use it are MPAs or SPAs?
I'm genuinely confused about the architecture.
Astro is just hitting the ground running. They want to be ready when View Transitions become available for cross page navigations.
Astro covers the MPA to µSPA (just enough islands to cover the interactivity for a single capability) range.
It enables an architecture that was considered an anti-pattern in 2015 when the document vs. application web dichotomy was taken for granted.
If you haven't read it yet:
Patterns for Building JavaScript Websites in 2022
Ryan Carniato for This is Learning · Jun 8 '22
I'd say MPAs that happen to not unload the page in the browser on navigation, plus some extra bonuses like persisting elements. They don't really behave the way SPAs work.
Personally, I'm making this: github.com/javaScriptKampala/z-js, and I think all we need are specialized frameworks which give clear use-case scenarios, so we can follow the old "pick the right tool for the right job". The problem is frameworks are doing too much, so tradeoffs always have to be made instead!
"a resumable solution on its own would load the same code on navigation as an SPA"
I still don't see why this is a problem. Instant initial load, instant subsequent navigation, as long as network speed isn't awful.