What may look ideal in theory may turn out cumbersome in practice.
-- Myself
At its inception, the Passwordless.ID "app" was built in the purest form of a three-tier architecture.
The UI - a Vue app "compiled" into a single-page application
The API - an API built with Cloudflare Workers
The DB - a distributed DB as a service
In particular, each "tier" was completely independent, built with its own tech stack and deployed on a dedicated subdomain.
The UI: https://ui.passwordless.id
The API: https://api.passwordless.id
The DB: internal network
This complete separation of "tiers" may look ideal. It seems to be full of advantages.
There is a clear separation of concerns
Each tier can be updated independently
One could make changes to the UI without affecting the API and vice versa.
They can scale independently
The UI is fully browser cached
Each "tier" (UI, API, DB) could theoretically be swapped out with another tech in the long term...
This might seem like an exemplary separation of concerns, something ideal to strive for. It's also what we strived for at inception. Sounds great, right? Well, it turns out it's not that great ...it's actually pretty bad.
Decoupling back-end and front-end sucks
Being able to update and make changes to UI and API independently sounds great, but in practice, it turns out it's very rarely needed.
The vast majority of the time, when you work on something, whether it's a new feature, a change or a bug fix, you update the UI and API hand in hand. You change files in both, test both together, and deploy both.
In the daily routine, having two repositories, two toolchains and two domains to deploy to turns out to be counter-productive. It leads to two commits, two builds, a "joint deploy" and so on. It's not a big deal, but it's annoying.
Cross-origin requests suck
Since the UI and API live on two distinct subdomains, requests between them are "cross-origin requests"; even different subdomains of the same domain count as distinct origins. Configuring the API to allow such cross-origin requests is no big deal once you know how it works, but some developers may find it cumbersome.
The subtle disadvantage is that it introduces one more round-trip: the preflight request. The browser sends it automatically to check whether the request from UI to API is allowed, before sending the actual request. While not dramatic, it makes the UI slightly less responsive since it doubles the first request's latency.
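To make this concrete, here is a minimal sketch of what answering such preflights looks like in a Worker-style fetch handler. The allowed origin and the handler itself are illustrative, not the actual Passwordless.ID code:

```javascript
// Hypothetical allowed origin for illustration.
const ALLOWED_ORIGIN = "https://ui.passwordless.id";

function corsHeaders() {
  return {
    "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    // Lets the browser cache the preflight result for a day,
    // avoiding the extra round-trip on subsequent requests.
    "Access-Control-Max-Age": "86400",
  };
}

// Worker-style fetch handler: answer the browser's automatic OPTIONS
// preflight before the actual cross-origin request is sent.
function handleRequest(request) {
  if (request.method === "OPTIONS") {
    return new Response(null, { status: 204, headers: corsHeaders() });
  }
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "Content-Type": "application/json", ...corsHeaders() },
  });
}
```

Note the `Access-Control-Max-Age` header: it mitigates, but does not eliminate, the extra latency of the very first request.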
Lastly, it also has an impact on session handling and security aspects due to its cross-origin nature. However, that's a whole other topic itself.
Latency sucks
You probably know the feeling: when you develop locally, everything is snappy. The page loads instantly and you are happy ...and once it's deployed, you notice that the experience on your phone in "real life" is not that great, especially before the browser caching kicks in.
The slowness comes from various things:
the loading of assets from the SPA
the UI pre-flight requests to the API
the UI actual requests to the API
the API calls to the distributed DB
the "rendering" of the page content
Each of these things makes the UI sluggish and is a consequence of the distinct tiers being fully separated. To be more precise, it is because network calls are involved between all parts, each one increasing the latency and sluggishness by a notch.
Moreover, you typically don't notice this "sneaky" behaviour during the initial phase of development. When testing locally, everything appears lightning fast since it all happens without network latency. Often, the sluggishness introduced by the network calls is only discovered when the first prototype is deployed and goes "live".
That is why an application which combines everything locally, or within the same subnet, is usually much more responsive than distinct "tiers" like UI / API / DB, each separated by a network, as is common in the SaaS world.
Distinct subdomains suck
Whether you want to show your users a feature preview, provide a developer sandbox, run A/B tests or reproduce some bug in real-life conditions, "staging" environments are always useful.
If both the UI and API are packaged in the same app, deploying it at a single domain like https://prod.passwordless.id would be straightforward. Then, you could also work on a feature branch and deploy it to https://new-feature.passwordless.id to test it out in a live environment.
However, this becomes much more complex if you have it split: each environment needs a pair of subdomains, one for the UI and one for the API.
It also requires some plumbing so that the new-feature UI talks to the corresponding API URLs, including adjusting CORS properly. This is extra work and is error-prone.
If the UI and API were bundled together on the same (sub)domain, this would not be a problem: relative URLs could simply be used, and CORS is not involved either.
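A sketch of what that per-environment plumbing could look like, assuming a hypothetical naming scheme where each UI subdomain has a matching API subdomain (the hostnames below are invented for illustration):

```javascript
// Derive the API origin from the UI origin, so each feature/staging
// environment talks to its own API. The "ui." / "api." naming scheme
// is an assumption, not the actual Passwordless.ID setup.
function apiBaseFor(uiHost) {
  // ui.passwordless.id             -> https://api.passwordless.id
  // new-feature.ui.passwordless.id -> https://new-feature.api.passwordless.id
  return "https://" + uiHost.replace(/(^|\.)ui\./, "$1api.");
}
```

With UI and API bundled on one origin, none of this exists: the client calls `fetch("/api/...")` with a relative URL and every environment just works.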
SPAs (sometimes) suck
SPAs like Vue, React or Angular are not bad. You have plenty of libraries with all kinds of widgets and fancy stuff. You can just "magically" quickly generate whole apps with some initializer. But it has a cost too: the learning curve, the complex toolchain, the clunky dependencies ...and the initial page load time due to larger size and rendering delays.
It's a tradeoff. While SPAs typically have a longer initial loading time, they offer complex widgets and increased interactivity in return. They also offer ways to structure complex web applications in a modular way to keep their complexity under control. All these things are great ...if you need them. Otherwise, when you just need a few basic pages, it will likely turn into useless overhead.
In the end, whether SPAs "make sense" totally depends on the app. The more complex and interaction-heavy the app is, the better suited it is for an SPA. However, in the case of Passwordless.ID, which has a relatively simple UI, it was counter-productive.
Doing it as a Vue SPA was great to get started quickly, but by now, it hinders me more than it benefits me. The UI library used was bug-ridden, the various toolchains between UI, API and deployment platform do not always play well together, the resulting bundle is 400 KB and reducing it would cost time and effort, and the resulting latency is the nail in the coffin. Good ol' HTML ain't that bad after all.
Back to the basics
Lately, there has been a renaissance of good ol' server-side templating. The basics are making a comeback under two umbrella names: SSR (server-side rendering) and SSG (static site generation). SSG means templating at build time, like "generating" pages in various languages, while SSR means templating with dynamic data at request time, like showing the result of a database query. Both have their roles and are complementary.
It's just going back to the forgotten roots of the web, noticing that, after all, it's quite handy to produce HTML with the right data already inside. It's simple, fast and "slim". This contrasts with SPAs, which typically require larger JS assets, fetch the data in a second step and add rendering delays.
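The idea can be sketched in a few lines: the server builds the HTML with the data already inside, so the browser gets a renderable page in one round-trip. The page, the user record and the helper below are all illustrative:

```javascript
// Escape user-provided values before interpolating them into HTML,
// so the template stays safe against injection.
function escapeHtml(s) {
  return s.replace(/[&<>"]/g, c => ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" }[c]));
}

// Server-side rendering in its simplest form: a template literal
// filled with dynamic data (e.g. the result of a database query).
function renderProfilePage(user) {
  return `<!doctype html>
<html>
  <body>
    <h1>Welcome, ${escapeHtml(user.name)}!</h1>
  </body>
</html>`;
}
```

No client-side fetch, no hydration step: the data is part of the initial response.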
Sadly, the ecosystem is very fragmented in this area.
Porting software sucks
It would be foolish to base the "architecture" decision purely on theoretical arguments. Such a change usually involves switching, or at least adapting, the technology stack, so the surrounding ecosystem plays a crucial role. If you go against the "intended usage" of your tech stack, you may fight an uphill battle that isn't worth it.
In particular, at the time of Passwordless.ID's inception in mid-2022, Cloudflare Pages Functions simply did not exist yet. As such, packaging both together (with Cloudflare) was not even possible at that point. Pages Functions appeared later, by the end of 2022. It certainly would have been possible to use a more traditional technology stack instead. However, the deployment and scalability comfort of the Cloudflare Pages/Workers combo was what won us over. It was a pragmatic choice rather than the ideal tech stack.
The point is that it would now be possible to combine the back-end and front-end in a single codebase. Is it worth porting the existing codebase? Tough question. I'd say "probably", but it's a substantial effort. The main issue is that, in our case, the whole ecosystem around Cloudflare Pages Functions is very young. It lacks tooling, libraries, Q&A, documentation and so on. It is "bleeding edge" right now.
Let's make it suck less
Do you know what also sucks? Authentication. It sucks for users (because they create oh so many accounts), it sucks for developers (because it's so complex) and it sucks for security (because passwords are vulnerable).
So at least, let's try to make authentication suck less and use Passwordless.ID. Think of it as a free universal account, a free public service, that makes developers' lives easy, is comfortable for users and is more secure.
In the meantime, we'll start porting the "tiered" app to a "bundled" app, making it even better, swifter and more lightweight. Thanks for reading and stay tuned!
Top comments (2)
Regarding the (home-made) CORS issues, would using some front-proxy (like Caddy/Nginx) not be the minimally invasive solution?
Also, in my experience you can often go to great lengths adding (convenience) features on the (web) client without having to add any API.
I don't know how many people are working on this product, but such strict technical boundaries are often an indicator for misaligned org structures (multiple reporting lines, often with political quarrel between them).
Hi Fabian. It's just my fault actually 🙄 I was the one planning it that way, with distinct repositories and subdomains for the layers. So the blame falls on me. 😅
That said, I wouldn't dismiss such clear cut separation either. It might make sense for some apps. After all, from a coding perspective, it's very nice.
On closer inspection, all the difficulties are related to devops, technical details around multiple domains and, to a lesser extent, the SPA. These were problems that weren't obvious to me initially, or that I underestimated.