Migrating to SSR (Next.js) - part 2/2: IS IT EVEN WORTH IT?
In this article we'll explore the pros and cons of server-side rendering as opposed to "client-only" single-page apps (and statically generated sites). We'll go through the UX, business, and product development perspectives. You'll learn when you should opt for server-side rendering, when statically generated sites are a better choice, and under which circumstances you'll be better off with a "basic" SPA.
What are the pros of SSR?
Improved UX/Speed
The first argument in favour of SSR is improved page speed.
If you navigate to a single-page application in your browser, the browser first fires a request to download the HTML and JavaScript, and only after the JavaScript is downloaded and evaluated can it fire an additional request to fetch data from your API. Meanwhile, the user is presented with a blank screen, a spinner, or a skeleton.
When you visit a website which uses SSR, the browser fires a request and, unlike with SPAs, the response contains everything you need -- the JavaScript files, the HTML content, and your data. There are no spinners, no skeletons, no elements jumping around. The content is delivered faster and the time to first paint improves.
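To make this concrete, here's a minimal sketch of what that flow looks like in a Next.js page -- the API URL, data shape, and component are made up for illustration; the point is that the data is fetched on the server before any HTML is sent, so the browser never shows a loading state for it:

```js
// pages/products.js -- hypothetical Next.js page rendered on the server.
// The API endpoint and the data shape are assumptions, not a real service.

export async function getServerSideProps() {
  // Runs on the server, on every request, before the HTML is sent.
  const res = await fetch("https://api.example.com/products"); // hypothetical API
  const products = await res.json();
  return { props: { products } };
}

export default function Products({ products }) {
  // Rendered to HTML on the server with the data already in place.
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}
```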
However, like every tool, it might be overkill for your use-case. Think about whether improving your page load by a few hundred milliseconds is worth it. It might be crucial for e-commerce sites (which operate in an extremely competitive environment), but it might be overkill for applications which are only usable after logging in.
With the growing popularity of the JAM Stack, you can get similar (or even better) speed benefits for pages which don't need to be highly dynamic -- blogs, marketing sites, or even some e-commerce sites.
Better for SEO (controversial)
I've seen the SEO argument used countless times, but frankly, I don't believe it's always such a big deal. Let's first clarify why some people claim it is.
The way Google (and other) crawlers (which scrape your website so it can be displayed in the search results) have traditionally worked is the following:
1) Visit a website
2) Read the HTML delivered from the server/CDN
3) Save it.
Problems arose as libraries like React or Vue came into existence. As described in the previous blog post, almost no HTML is received in the first response from the server/CDN. It's only after JavaScript gets executed that you can see some meaningful content.
And that's the root of the SEO problem -- crawlers would only see that single div or a spinner and wouldn't wait for the actual content to show up. Therefore, your page wouldn't get properly indexed. However, this is no longer the case with the Google crawler, as it waits for all the content to load (including content dynamically generated by JavaScript).
Where SSR might still be necessary is if you want a preview of your page when it's shared to social media. But if this were your only concern, I think prerendering with a tool like react-snap might be a better solution.
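For a typical create-react-app-style project, react-snap can be wired up by adding a "postbuild": "react-snap" script to package.json and hydrating the pre-rendered markup on the client -- roughly like this (the file layout is an assumption):

```js
// src/index.js -- sketch of the client entry point when using react-snap.
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

const rootElement = document.getElementById("root");

if (rootElement.hasChildNodes()) {
  // react-snap already pre-rendered this route at build time, so attach
  // event handlers to the existing HTML instead of re-rendering it.
  ReactDOM.hydrate(<App />, rootElement);
} else {
  // Fallback for routes that weren't pre-rendered.
  ReactDOM.render(<App />, rootElement);
}
```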
You may still get some SEO improvement from SSR even with crawlers that wait until all the content loads -- indirectly: if the search ranking algorithm accounts for page speed (which should improve when using SSR), your site should rank higher in the search results.
What are the cons of SSR?
The need for a server
As opposed to the "traditional" SPAs where you don't even need a server to run your code, you need a server to render the code on the server (it's called server side rendering after all...). What this means is that you have to pay π°π°π° for a server to execute your "front-end" code. If you already have a server, the resource consumption might go up.
What can you do about it? Well, think about whether SSR is the right solution for your use-case. You might be better off leveraging the JAM Stack or a traditional SPA. Or, with the new Next.js 9.3 release, you can easily combine SSR with static pages, which prevents wasting server resources.
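As a rough illustration, pages that don't need per-request data can opt into static generation with getStaticProps (introduced in Next.js 9.3) and be served as plain HTML, while the truly dynamic pages keep using getServerSideProps -- the endpoint below is made up:

```js
// pages/about.js -- hypothetical statically generated page (Next.js 9.3+).
// getStaticProps runs once at build time, so serving this page
// costs no server CPU per request.
export async function getStaticProps() {
  const res = await fetch("https://api.example.com/company-info"); // hypothetical API
  const info = await res.json();
  return { props: { info } };
}

export default function About({ info }) {
  return <p>{info.description}</p>;
}
```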
It makes development harder (sometimes)
If you were to roll your own SSR solution, you might be surprised that it's not as straightforward as creating a "traditional" SPA. You have to take care of rendering the components to HTML, sending them to the browser, hydration, and making sure you can fetch data both on the server and on the client...
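Just to give a flavour of what "rolling your own" involves, here's a very rough sketch of hand-rolled SSR with Express and react-dom/server -- it assumes a build step that can handle JSX and a separately bundled client entry point, both hypothetical:

```js
// server.js -- sketch of a hand-rolled SSR server (not production-ready).
import express from "express";
import React from "react";
import { renderToString } from "react-dom/server";
import App from "./src/App"; // hypothetical root component

const app = express();

app.get("*", (req, res) => {
  // 1. Render the React tree to an HTML string on the server.
  const appHtml = renderToString(<App />);

  // 2. Send it along with the client bundle, which later "hydrates"
  //    the static markup and attaches event handlers in the browser.
  res.send(`<!DOCTYPE html>
    <html>
      <body>
        <div id="root">${appHtml}</div>
        <script src="/client-bundle.js"></script>
      </body>
    </html>`);
});

app.listen(3000);
```

The client bundle then has to call ReactDOM.hydrate(<App />, document.getElementById("root")) instead of render, and you still haven't touched routing, code splitting, or fetching data consistently on both sides.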
Of course, if you use frameworks like Next.js or Nuxt.js, they abstract a lot of these pain points away so you don't have to worry about them. However, for larger projects which want to start using SSR, or which were using SSR before these frameworks existed, migrating to such a framework might seem daunting, so they end up implementing SSR by themselves.
Summary
In this article, we explored which applications benefit from using SSR and what the potential downsides are. My personal view is that the need for SSR is gradually decreasing -- especially since it's really easy to use statically generated sites with the newest release of Next.js.