The Wikipedia entry for static web page starts like this:
A static web page (sometimes called a flat page or a stationary page) is a web page that is delivered to the user's web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application.
Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.
The first sentence is just about OK but the second is seriously misleading. Some static web pages may "display the same information for all users" but whether they do depends entirely on which static assets were served to the browser. The difference is caused by - you probably guessed - JavaScript, which is responsible for most of the good and ill in the online world.
For example, let's have a really basic static website. The only files it contains are index.html and myscript.js, the latter being a JavaScript file that has code to create a UI, load Google Maps from a content server and display a map centered on the user's location.
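Here's a minimal sketch of what myscript.js might contain. The map container, the zoom level and the YOUR_KEY placeholder are my illustrative assumptions; the Maps code itself comes from Google's content servers, not from the static host.

```javascript
// myscript.js - create a UI, then show a map centered on the visitor.

// Build a container for the map and add it to the page.
const mapDiv = document.createElement('div');
mapDiv.id = 'map';
mapDiv.style.cssText = 'width:100%;height:400px';
document.body.appendChild(mapDiv);

// Callback invoked by the Maps library once it has loaded.
window.initMap = function () {
  // Ask the browser for the user's location (requires consent).
  navigator.geolocation.getCurrentPosition((pos) => {
    new google.maps.Map(mapDiv, {
      center: { lat: pos.coords.latitude, lng: pos.coords.longitude },
      zoom: 12,
    });
  });
};

// Pull the Maps code from Google's servers, not our static host.
const tag = document.createElement('script');
tag.src = 'https://maps.googleapis.com/maps/api/js?key=YOUR_KEY&callback=initMap';
tag.async = true;
document.head.appendChild(tag);
```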
Oh no - every user gets a different map! Quick, call the static website police! Such behavior must not be allowed!
Maybe Wikipedia can be forgiven for some inaccuracy, but similarly misleading statements appear in most of the other results I get from Google. The contributors seem to forget that static websites can deliver JavaScript and they apparently fail to notice that when you do so you are potentially introducing context. This comes in three main parts:
- User-specific information (stored in your browser from previous visits)
- The location of the user
- The date and time
Context is the combination of some or all of these three factors: who you are, where you are and when you are requesting a page. With context in play, a website can be anything but static.
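All three ingredients are available to client-side code with no server involvement at all. A quick sketch (the storage key is invented for illustration):

```javascript
// 1. Who you are: data stored in the browser on a previous visit.
const lastVisit = localStorage.getItem('lastVisit'); // hypothetical key
localStorage.setItem('lastVisit', new Date().toISOString());

// 2. Where you are: the Geolocation API (with the user's consent).
navigator.geolocation.getCurrentPosition((pos) => {
  console.log(`Near ${pos.coords.latitude}, ${pos.coords.longitude}`);
});

// 3. When you are: the visitor's own clock, not the server's.
console.log(lastVisit
  ? `Welcome back - your last visit was ${lastVisit}`
  : `First visit at ${new Date().toLocaleString()}`);
```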
So let's have a better, more useful definition. How about
A static website is one in which requests can only be made for read-only server files.
What this means is there are no server-side executable files and no way for client requests to modify server-side files. JavaScript is permitted but only as text to send to the client; server-side programming languages such as PHP, Python or Node.js are not supported at all. So it's quite true that every time a given file is requested, each user gets exactly the same file. However, from that point on, with JavaScript running things in the browser all bets are off. The results can differ widely from one user to another, one location to another and from one time to another.
The point of a static website is not to enforce uniformity but to maintain security, increase speed and minimize server processing load. If scripts can't write to the server, they can't inject malicious code that spends hours mining Bitcoin instead of delivering content when asked. This is a Good Thing.
Why does any of this matter?
For most human beings, perception is 90% of reality. We don't question what we already believe, so only 10% of what we hear, see or read gets any real scrutiny. The widespread belief that static websites must be simple and unchanging is totally incorrect, but if it's not challenged we'll all remain unaware of the very real benefits of using them. So here are three false beliefs:
I've already dealt with the assertion in Wikipedia that static websites deliver an experience that's the same for all users. This is only true if we ignore context, as defined above.
The second most common misapprehension is that for a site to be 'dynamic' it must use server-side processing. This may have been true a decade ago but it certainly isn't now. The Google Maps example I gave earlier is a case in point, where all processing is done by JavaScript in the browser. The hosting server doesn't even supply the map code; this usually comes from a Content Delivery Network (CDN).
Which leads me to a third questionable belief, that client-side processing means slow load times. This one needs a bit of care to unpick as there is a grain of truth in it, but one that's usually so small as to be irrelevant. The problem is that programmers are driven by the need to complete projects quickly, so instead of writing lean code for themselves they reach for standard packages. This may save time but it usually results in far more code than is actually needed to perform the required tasks.
Coding a static page
The programmers I meet once a month at CodeUp are mostly either beginners learning Python or experienced people working in big teams. The latter divide between a small group doing regular applications in Java, Python or C++ and a larger group building large websites where Angular and React are the predominant tools.
There's a big difference between coding for a PC and for a browser. In the former case it doesn't matter how big your application gets; all the code is downloaded and installed just once, then run locally each time. In a web application, however, bloat should be avoided. Typically, much of your content is finished HTML delivered from a remote server to your browser, which acts as an over-powered terminal. Everything the browser needs is supplied each time (though caching reduces the amount of data actually transferred), so the effect of having a lot of bulky code is far more noticeable than for a PC application. It's OK if your server is doing all the page generation but not so good if you're asking the browser to do it.
Things don't have to be this way; it's just convention and there's nothing to stop your content being created by client-side code that will be loaded just once and cached by the browser. In fact, when you're hosted on a static server you can't run code on it so the only option is to do the dynamic stuff in the browser.
One strategy for building a "dynamic" static page is this (a minimal sketch follows the list):
- The browser requests the page. This can be as simple as a minimal HTML file referencing a single JavaScript file in either the head or the body.
- The JS code runs and immediately requests a pile of resources from the server. Not necessarily everything; just enough to get the initial page up. It monitors the loading processes so it will know when each one has arrived.
- While it's waiting for content to arrive, the JS code builds the DOM for the first screen (if it wasn't included in the HTML). This is quicker than requesting an HTML template and having to wait for it to arrive before you can populate it with data. If you don't need to consider context you can either supply the entire DOM as static HTML or put it into your JS as a string and simply inject it into the page body.
- As the requested resources arrive they are processed according to the business rules for the website and the results injected into the DOM.
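Here's a minimal hand-rolled loader along those lines. The file names, the JSON payloads and the markup are all illustrative assumptions:

```javascript
// loader.js - the single script referenced from a minimal index.html.

// Build the first-screen DOM immediately, before any resources arrive,
// rather than waiting for an HTML template to be fetched.
document.body.innerHTML = `
  <nav id="nav"></nav>
  <main id="content">Loading…</main>`;

// Request just enough resources to get the initial page up, monitoring
// each request so we know when it has arrived.
const urls = ['nav.json', 'articles.json']; // hypothetical resources
Promise.all(urls.map((url) => fetch(url).then((res) => res.json())))
  .then(([nav, articles]) => {
    // Process the results according to the site's business rules and
    // inject them into the DOM.
    document.getElementById('nav').innerHTML = nav
      .map((item) => `<a href="${item.href}">${item.label}</a>`)
      .join(' ');
    document.getElementById('content').innerHTML = articles
      .map((a) => `<article><h2>${a.title}</h2><p>${a.body}</p></article>`)
      .join('');
  });
```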
Unless you have a particularly heavy first page this will all happen in under half a second; way under the 2 seconds recommended as the maximum for a page to be well-regarded by its users.
Now I freely admit I am not an Angular or React expert. If either of these can do the above then that's great. But bear in mind that they are not small files even before adding all the dependencies that usually go along with them, whereas a hand-built loader such as the one above will be well under 50 KB. One of its jobs, after the initial file set has been requested, is to call for other JS files to provide the main functionality of the site. These aren't needed until the page is actually visible, so why waste time loading them any earlier? The best strategy is "just in time", where everything arrives just as it's needed.
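A "just in time" loader needs very little code: inject a script tag the first time a feature is wanted and cache the promise so nothing is fetched twice. The element id, file name and showComments function below are hypothetical:

```javascript
// Load a script on demand, remembering what has already been requested.
const pending = new Map();

function loadScript(src) {
  if (!pending.has(src)) {
    pending.set(src, new Promise((resolve, reject) => {
      const tag = document.createElement('script');
      tag.src = src;
      tag.onload = resolve;
      tag.onerror = reject;
      document.head.appendChild(tag);
    }));
  }
  return pending.get(src);
}

// comments.js (assumed to define showComments) is only fetched when
// the user actually opens the comments panel.
document.getElementById('show-comments').addEventListener('click', () => {
  loadScript('comments.js').then(() => showComments());
});
```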
Conclusion
I hope I have successfully demolished a few myths about static websites by showing that they can be highly dynamic and that moving code to the browser need not result in a slow site. Static sites may not handle the needs of the biggest websites but for many projects they are perfectly suitable, and of course the code you write for a static site will run anywhere without any changes being needed.
Comments
This post arrives at the perfect moment, when we are submerged by single-page applications, static website generators and other alternatives to "monolithic" server-rendered web pages, so thank you for spending some time demystifying this subject for us!
One of the mutations I noticed in these kinds of tools is prerendered websites, which are different from static websites: prerendered websites can be generated from a static website, a server-rendered website, or even a single-page application. Prerendered means the page has gone through its expected workflow until the result is satisfying enough to produce an end result: an HTML web page trimmed of all the dynamic content (e.g. the JavaScript).
Prerendering can be a great option to help bots parse a "dynamic-free" web page while providing the regular page to your users. It is possible to do this because you can set rules via Apache or Nginx to filter on the User-Agent header and redirect bots to the prerendered version of your web page.
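For illustration, such a rule might look something like this in Nginx (the bot names and the /prerendered/ path are just examples):

```nginx
# Route known crawlers to prerendered copies of each page.
if ($http_user_agent ~* "googlebot|bingbot") {
    rewrite ^(.*)$ /prerendered$1 break;
}
```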
Anyway, I thought I would add this piece of information because I found it relevant since you spoke of static websites, as they once misled me into thinking static websites are not dynamic.
And as always, keep up the great articles Graham!
We're probably oversimplifying things by dividing websites into static and dynamic without considering other techniques such as the one you highlight.
If we look at things from the point of view of the server we can ask whether or not it allows mutation. Even a non-mutable server - one where all the internal files are read-only - can still run local scripts to deliver responses. Pages can be constructed by PHP scripts or stored pre-rendered to achieve a quicker response at the cost of more storage. In principle, such a site could run off a CD-ROM.
Another distinction is around whether a site permits interactivity between its users - an important feature of many dynamic sites. A static, non-mutable site, though it may deliver an interactive experience, can't let one user talk to another as this requires mutation of the server state (the database, usually).
You can get around this by using a third party, allowing the core functionality of your site to be kept static (non-mutable). For example, when hosting services offer database access they usually provide it on separate computers, so the pattern is well established.
In your comment you say that some sites do dynamic pre-rendering to provide pages that are stripped of all interactive features other than traditional hyperlinks. This amounts to a dynamic website voluntarily restricting itself to providing static, immutable content (albeit for good reasons).
So after struggling with this answer for well over an hour I have now reached the conclusion that to just say "static" and "dynamic" is over-simplistic. Most of the options are a lot more complex than that. I often find that as soon as we try to label and classify things they tend to sprout a host of exceptions, and that seems to be the case here.