Florian Rappl for smapiot

Hyper-Scale Activated! Ship Your Own FaaS 🤖

In the past decade I've done a lot of work on distributed web applications. In that area you need processes that work implicitly - fully decoupled, with autonomy for individual teams and modules.

The key question quite often is:

How awesome would it be to just extend your application at runtime without any downtime whatsoever?!

In this article I want to look at how we can extend a Node server using Express.js to act like a FaaS provider (such as AWS Lambda or Azure Functions).

Prerequisites

To follow this article you'll need the following:

  • Node.js with npm installed (I recommend version 22 or higher - we'll use fetch in Node, which is available since version 18)
  • A code editor
  • If you want to clone the sample project you'll need to have git installed (but you can download it as a ZIP from GitHub)
  • Access to the Piral Cloud Feed Service - you can use it for free using a Microsoft or GitHub account

Architecture

Before we start to write some code let's look at the anticipated architecture:

Architecture diagram

Using the Piral Cloud Feed Service we get access to a dynamic service / module registry, which we can leverage to load a list of modules, evaluate them directly from their URLs, and then use them to populate a list of (sub-)routers.

The routers are necessary to extend the default router middleware. Let's see the whole thing in action. We start by writing the server.

Writing the Server

As the server we use a pretty standard, lightweight Express.js setup. It can be created by running the following commands in a new directory.

We first initialize a new npm project:

npm init -y

Now we install the required dependency:

npm i express

We add a new file (index.js) with the following content:

const express = require("express");

const port = 3000;
const app = express();

app.get("/", (req, res) => {
  res.send("Hello world!");
});

app.listen(port, () => {
  console.log(`Running at http://localhost:${port}`);
});

And we finally run the application:

node index.js

Go to localhost:3000 in the browser of your choice and see that everything is running as it should:

Server running

Great! But there is a bit more to it for making it truly dynamic...

Making the Server Dynamic

What we are after is running modules that we can publish whenever the need arises. The first thing to take care of is avoiding routing conflicts.

To prevent different modules from occupying the same route we namespace the routes by their module names. In Express.js this also allows us to use dynamic Router objects, each bound to its module name.

In code this can be done like that:

const routers = {};

app.use("/:namespace", (req, res, next) => {
  const { namespace } = req.params;
  const router = routers[namespace];

  if (router) {
    return router(req, res, next);
  }

  return next();
});

We introduce a middleware that is sensitive to a namespace parameter. In a production setup we might want to constrain this a bit further, e.g., /api/:namespace or /apps/:namespace, to avoid clashes between built-in functionality and the modules.

The job of the middleware is to look up the retrieved namespace value among the available sub-routers. If a sub-router matches, we use it as the follow-up middleware. Otherwise, we continue with the next middleware. Usually, the next middleware would be the "not-found-route" from Express.js.
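Stripped of Express.js specifics, the dispatch boils down to a dictionary lookup with a fallback. A minimal, dependency-free sketch (the handler and namespace names are invented for illustration):

```javascript
// Minimal model of the namespace dispatch: the routers object maps a
// namespace to a handler function; anything without a registered
// sub-router falls through to next().
const routers = {};

function dispatch(namespace, req, res, next) {
  const router = routers[namespace];

  if (router) {
    return router(req, res, next);
  }

  return next();
}

// Register a hypothetical handler under the "demo" namespace.
routers["demo"] = (req, res) => res.send(`demo handled ${req.url}`);

const log = [];
const res = { send: (msg) => log.push(msg) };
dispatch("demo", { url: "/greet" }, res, () => log.push("not found"));
dispatch("other", { url: "/greet" }, res, () => log.push("not found"));
// log is now ["demo handled /greet", "not found"]
```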

The question to answer now is how to populate the routers object. How can we obtain these sub-routers?

What we can do is introduce a function that obtains the information from a discovery service. This service gives us all the relevant pieces of information - we then only need to load and run the referenced modules.
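For reference, the loading code assumes the discovery service returns JSON roughly of the following shape - the field names mirror how the code reads them, while the concrete values here are invented:

```javascript
// Hypothetical shape of the discovery feed response; only the fields
// actually consumed by the loader (items, name, version, link) are shown.
const exampleFeedResponse = {
  items: [
    { name: "app1", version: "2.0.0", link: "https://example.com/modules/app1/index.js" },
    { name: "app2", version: "1.0.4", link: "https://example.com/modules/app2/index.js" },
  ],
};

// Each item yields a unique id such as "app1@2.0.0".
const ids = exampleFeedResponse.items.map((item) => `${item.name}@${item.version}`);
```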

The necessary code to build something like that is:

const feed = "https://feed.piral.cloud/api/v1/pilet/express-demo";

function makeId(item) {
  return `${item.name}@${item.version}`;
}

async function loadPlugins() {
  console.log("Loading plugins ...");
  const res = await fetch(feed);
  const { items } = await res.json();

  for (const item of items) {
    const id = makeId(item);
    console.log(`Integrating plugin "${id}" ...`);
    await installPlugin(item);
    console.log(`Integrated plugin "${id}"!`);
  }

  watchPlugins();
}

loadPlugins();

What are we doing here?

  1. We want to start loading the plugins / sub-routers as fast as possible, so we immediately run the loadPlugins function. Ideally we'd use top-level await here; since our file is CommonJS (where that's not available) we just call it as shown.
  2. The loadPlugins function fetches the information from the discovery service.
  3. Once the information is fetched we go over the contained items array. Each item is treated like the meta information of a plugin - so we just call installPlugin to use the found meta information.
  4. Finally we also want to react when something changes - so the loadPlugins part concludes by watching for changes using the watchPlugins function.

Important Note
If you want to follow this on your own then replace https://feed.piral.cloud/api/v1/pilet/express-demo with your own discovery feed URL. Using the Piral Cloud Feed Service you can see the URL on the feed overview page.

Finding the Feed Service URL in the feed overview


Let's look at the installPlugin first:

const current = [];

async function installPlugin(item) {
  const { name, link } = item;
  const router = express.Router();
  const { setup } = await loadModule(link);
  typeof setup === "function" && setup(router);
  routers[name] = router;
  current.push({ id: makeId(item), name });
}

Doesn't look too complicated, does it? What's happening here?

  1. We decompose the item - but we are only interested in the name and link (module reference / URL) of the module.
  2. We create a new Express.js Router instance (the "sub-router").
  3. We load the respective module - it should give us a function called setup (this is a design choice; you could name it differently, make it a default export, or export a Router instance directly).
  4. If we really obtained a setup function we call it with the Router instance, which the plugin can now modify as needed.
  5. Finally, the created router is assigned to the routers object and the current array is extended with the proper entry.

As you might have guessed the current array is for book-keeping purposes. It allows us to easily patch / change the routers in case of changes. For this, we'll need to look at the watchPlugins function:

const WebSocket = require("ws");

const changeEventTypes = ["add-pilet", "update-pilet", "remove-pilet"];

function watchPlugins() {
  console.log("Watching plugins ...");
  const ws = new WebSocket(feed.replace("http", "ws"));

  ws.on("error", console.error);

  ws.on("message", async (data) => {
    const msg = JSON.parse(Buffer.from(data).toString("utf8"));

    if (changeEventTypes.includes(msg.type)) {
      const res = await fetch(feed);
      const { items } = await res.json();
      const removeItems = current.filter(
        ({ id }) => !items.some((n) => makeId(n) === id)
      );
      const addItems = items.filter(
        (item) => !current.some(({ id }) => id === makeId(item))
      );

      for (const item of removeItems) {
        await uninstallPlugin(item);
      }

      for (const item of addItems) {
        await installPlugin(item);
      }
    }
  });
}

For this code to run we rely on the ws library. So we'll need to install it first:

npm i ws

With ws installed we can have a look at what the code does:

  1. We define some events we want to listen to. These events are sent via the WebSocket channel whenever the contained modules change.
  2. Once a message is received we transform it (from raw bytes to a JSON object) and check if the type matches one of the events defined in (1).
  3. We could now use the information from the event, but some events (e.g., update-pilet) don't tell us exactly what changed. To stay on the safe side we simply reload the full list of modules.
  4. Finally, we compare the current information from the discovery service with the stored / previous information. Modules that were removed or updated are dropped using the uninstallPlugin function; all modules that are not yet instantiated are added using the installPlugin function. Note that an updated module carries a new version, so it appears in both lists and is effectively reinstalled.
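The diffing in step 4 can be illustrated in isolation (all sample data here is invented):

```javascript
// Stand-alone illustration of the add/remove diff used in watchPlugins.
const makeId = (item) => `${item.name}@${item.version}`;

// Previously installed plugins - what the `current` array holds.
const current = [
  { id: "app1@1.0.0", name: "app1" },
  { id: "app2@1.0.0", name: "app2" },
];

// Fresh snapshot from the discovery service: app2 was updated, app3 added.
const items = [
  { name: "app1", version: "1.0.0" },
  { name: "app2", version: "1.1.0" },
  { name: "app3", version: "1.0.0" },
];

const removeItems = current.filter(
  ({ id }) => !items.some((n) => makeId(n) === id)
);
const addItems = items.filter(
  (item) => !current.some(({ id }) => id === makeId(item))
);

// removeItems holds app2@1.0.0; addItems holds app2@1.1.0 and app3@1.0.0 -
// an updated plugin shows up in both lists and is therefore reinstalled.
```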

The uninstallPlugin function looks as follows:

async function uninstallPlugin(item) {
  delete routers[item.name];
  // remove just this one entry from the book-keeping array
  current.splice(current.indexOf(item), 1);
}

That was easy, right? But so far we avoided the most complicated topic. How are we evaluating those modules?

Evaluating Modules from URLs

In the browser you can always evaluate ESMs - they come from a URL by default. The same is true in runtimes such as Deno. But in Node.js this is not possible out of the box. So should we give up?

As it turns out, evaluating URLs is actually not so difficult. We need two things:

  1. Code using the vm module. We rely on the SourceTextModule class to create modules on the fly.
  2. The experimental flag --experimental-vm-modules for running Node. This way, the SourceTextModule is available at runtime.

With those two pieces in place we can add the following code to our server:

const vm = require("vm");

async function linkModule(url, ctx) {
  const res = await fetch(url);
  const content = await res.text();
  const mod = new vm.SourceTextModule(content, { context: ctx });
  await mod.link((specifier) => {
    const newUrl = new URL(specifier, url);
    return linkModule(newUrl.href, ctx);
  });
  await mod.evaluate();
  return mod;
}

async function loadModule(url) {
  const ctx = vm.createContext();

  try {
    const res = await linkModule(url, ctx);
    return res.namespace;
  } catch (ex) {
    console.warn(`Failed to evaluate "${url}":`, ex);
    return {};
  }
}

Finally, we have everything together and can just run the server with the flag:

node --experimental-vm-modules index.js

We should see output like this in the console:

Loading plugins ...
Running at http://localhost:3000
Integrating plugin "app1@2.0.0" ...
Integrated plugin "app1@2.0.0"!
Integrating plugin "app2@1.0.4" ...
(node:23052) ExperimentalWarning: VM Modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Integrated plugin "app2@1.0.4"!
Integrating plugin "app3@1.0.0" ...
Integrated plugin "app3@1.0.0"!
Watching plugins ...

Here, we are already including a couple of modules (referred to as plugins). Let's see what we can do here - how we can write, deploy, and update them.

Example Functions

One Module

The first example is a really simple one; just a single function to return a constant string:

export function setup(router) {
  router.get("/foo", (req, res) => {
    res.send("Hello from app1 v2: /foo");
  });
}

Importantly, we initialize a new project and define the package.json like this:

{
  "name": "app1",
  "version": "1.0.0",
  "module": "index.js",
  "scripts": {
    "deploy": "publish-microfrontend --url https://feed.piral.cloud/api/v1/pilet/express-demo --interactive"
  },
  "devDependencies": {
    "publish-microfrontend": "^1.6.2"
  }
}

The publish-microfrontend script is used to deploy the module. This little script just packages the current project and invokes a POST against the provided discovery service URL. The --interactive flag then allows us to dynamically log into the discovery service to deploy our function.
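Conceptually, such a publish is just a multipart POST of the packed tarball against the feed URL. A hypothetical sketch - the field name, file name, and auth header are assumptions for illustration, not the documented protocol:

```javascript
// Hypothetical sketch of what publishing boils down to: POSTing the
// tarball produced by `npm pack` to the discovery service. The field
// name ("file") and the Basic auth header are illustrative assumptions.
function buildPublishRequest(feedUrl, tarballBytes, apiKey) {
  const body = new FormData();
  body.append("file", new Blob([tarballBytes]), "module.tgz");

  return {
    url: feedUrl,
    method: "POST",
    headers: { authorization: `Basic ${apiKey}` },
    body,
  };
}

// Example (not sent anywhere); the real tarball would come from `npm pack`.
const req = buildPublishRequest(
  "https://feed.piral.cloud/api/v1/pilet/express-demo",
  new Uint8Array([1, 2, 3]),
  "<your-key>"
);
// fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
```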

We'll see how this process looks in a second. Let's look at another example.

Two Modules

While publishing a package with a single module works as expected, the question is: does it also work with more modules? The answer is yes.

The definition in the package.json is pretty much the same:

{
  "name": "app2",
  "version": "1.0.0",
  "module": "lib/index.js",
  "scripts": {
    "deploy": "publish-microfrontend --url https://feed.dev.piral.cloud/api/v1/pilet/express-demo --interactive"
  },
  "devDependencies": {
    "publish-microfrontend": "^1.6.2"
  }
}

But now lib/index.js is different:

import { compute } from './other.js';

export function setup(router) {
  router.get("/compute", (req, res) => {
    const { a, b } = req.query;
    const c = compute(+a, +b);

    if (!isNaN(c)) {
      return res.status(200).send(`${c}`);
    }

    return res.status(400).send(`Only numbers allowed.`);
  });
}

Not only does this contain a bit more logic - it also uses a function from another module located at lib/other.js:

export function compute(a, b) {
  if (typeof a === "number" && typeof b === "number") {
    return (a + b) * (a - b);
  }

  return NaN;
}
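Assuming the package was published under the name app2, a request like GET /app2/compute?a=5&b=3 goes through the sub-router above. The arithmetic (and the unary-plus coercion of the query strings) can be checked in isolation:

```javascript
// compute as defined in lib/other.js: (a + b) * (a - b), i.e. a² - b².
const compute = (a, b) =>
  typeof a === "number" && typeof b === "number" ? (a + b) * (a - b) : NaN;

// The route coerces query strings with unary plus before calling compute,
// so GET /app2/compute?a=5&b=3 responds with "16".
const ok = compute(+"5", +"3"); // 16

// Note that +"x" is NaN, which still has typeof "number" - but the
// arithmetic propagates NaN, so the route's isNaN check yields a 400.
const bad = compute(+"x", +"3"); // NaN
```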

As you might have guessed our ESM URL loader just works as it should. No questions asked. Great!


We won't answer the burning question: can it do three modules? I guess we know the answer, but if you want to find out, get active now!

In the meantime, let's move on to the more complex case: modules written in TypeScript - including some dependencies.

TypeScript Modules with Dependencies

Consider the code at src/index.ts:

import cors from 'cors';
import type { Router } from "express-serve-static-core";

export function setup(router: Router) {
  router.use(cors());

  router.get("/", (req, res) => {
    res.status(404).send(`NOT FOUND.`);
  });
}

Here we are using both types and dependencies. As-is, this won't work - our loader does no dependency resolution and no TypeScript transpilation. So should we be worried? Not at all!

Let's bundle it using esbuild:

esbuild --outdir=dist src/index.ts --bundle --platform=node --format=esm

Even better - we can just extend the package.json to take care of this in the deploy task:

{
  "name": "app3",
  "version": "1.0.0",
  "module": "dist/index.js",
  "scripts": {
    "build": "esbuild --outdir=dist src/index.ts --bundle --platform=node --format=esm",
    "deploy": "npm run build && publish-microfrontend --url https://feed.dev.piral.cloud/api/v1/pilet/express-demo --interactive"
  },
  "devDependencies": {
    "@types/cors": "^2.8.17",
    "@types/express": "^5.0.0",
    "esbuild": "^0.24.0",
    "publish-microfrontend": "^1.6.2"
  },
  "dependencies": {
    "cors": "^2.8.5"
  }
}

This way, we can just write anything - bundle it using esbuild and ship it with publish-microfrontend.

Now let's step back a bit and see how we actually deploy the modules:

Deployment

Let's first see how it looks in the discovery service once we've deployed all of our modules:

Overview of modules

As already mentioned, to deploy such a module we can use the available publish-microfrontend npm package:

npx publish-microfrontend --url <your-discovery-service-url> --api-key <your-key>

In cases such as the ones above we might swap --api-key for the --interactive flow, where the browser is used to obtain a user token instead:

npx publish-microfrontend --url <your-discovery-service-url> --interactive

Let's see how this process looks in real life:

The publish process in motion

In the end, having such a service registry between our server and the respective modules is not only a clean separation - it also buys us a lot of flexibility.

Advanced Dynamics

We already saw that partial updates and rollbacks work from the beginning. In general, the service registry's portal gives us the ability to control every bit of the delivery process:

Version selector

The version selector in particular is quite useful. Besides the possibility of making rollbacks, we can also do feature flags, A/B testing, blue-green deployments, or canary releases.

Quite powerful stuff...


This allows us, e.g., to run multiple instances of our Express.js server, where each instance might get a different set of modules. Sounds crazy (how can this be useful?!), but if you consider these instances not as equal replicas (i.e., meant for scaling) but as different sites (e.g., one for the US, one for the EU), then the backend behavior may legitimately differ - at least in some areas.

Previously, that meant a lot of overhead: you had to produce countless variations of the same server. Now you can have just one and let the loaded modules determine the differing behavior.

Conclusion

In this article you've seen how you can easily extend your Express.js server into a FaaS-like platform, allowing you to add, update, and remove endpoints on the fly.

Of course, the provided sample is rather simplistic and leaves several issues open, including proper isolation, memory footprint (unloading and restricting modules), local module development using an emulator, and more. But it's a very convenient start - showing what's possible with just a few lines of code.

Check out the sample code at github.com/piral-samples/piral-cloud-express-plugins-demo.
