Massimo Artizzu
Dynamic image creation with service workers

Service workers are a fantastic technology. You may know them in relation to the term Progressive Web Application (PWA): something that's normally visible in the browser can be "installed" in the OS, opened and uninstalled like a native application, and generally look like a native application all around. But service workers can do much more than that.

Of course, there's an xkcd comic for that.


Service workers are basically shared web workers (which also exist as a separate technology, by the way) with the special ability to intercept all HTTP requests made by the browser to URLs in the same scope (origin + path) the worker has been registered with. They can then be instructed either to respond with a constructed or cached response - actually preventing the browser from hitting the network - or to pass the request to the network, as-is or modified (using fetch).

That said, it's clear why service workers are often associated with the ability to access a web page while offline: the first time, they can download and cache all the static resources (which basically "installs" the page); afterwards, the service worker can respond to the same requests with the cached versions, basically serving the "application resources" like a native app would. dev.to is a great example of that.
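To make that concrete, here's a minimal sketch of the cache-first logic such a service worker applies. The function is factored out so it isn't tied to the Cache API: `cache` can be anything with `match` and `put` methods, and the wiring shown in the comment is an assumption about how you'd plug it in.

```javascript
// Cache-first strategy: serve from cache when possible, otherwise
// fetch from the network and store the response for next time.
// `cache` is assumed to expose Cache-like match(request) and
// put(request, response) methods; `fetchFn` defaults to global fetch.
async function cacheFirst(request, cache, fetchFn = fetch) {
  const cached = await cache.match(request);
  if (cached) return cached; // offline-friendly fast path
  const response = await fetchFn(request);
  await cache.put(request, response.clone());
  return response;
}

// In a real service worker, this would be wired up roughly as:
// self.addEventListener('fetch', event => {
//   event.respondWith(
//     caches.open('v1').then(cache => cacheFirst(event.request, cache))
//   );
// });
```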

This is already a simplification, and talking about cache busting, updates and the rest is out of scope for this article, so I won't indulge in that. What I'll talk about is the ability of service workers to serve constructed responses.

Mocking responses

My team was recently tasked with building a "showcase" application, i.e. a web application that basically does nothing, but serves to show how to use our Web Component UI Kit, following the design system and the coding guidelines.

The application was intended as a purely frontend application (meaning we weren't supposed to develop a backend too), but it should look like one of the many B2B applications that our customer maintains, backend and all. That's where a service worker comes in handy.

Now, responding with a textual response is quite simple. Even a JSON is basically text, so in the end our service worker could be something like this:

self.addEventListener('fetch', event => {
  if (event.request.url.includes('/api/hello')) {
    event.respondWith(new Response(
      JSON.stringify({ message: 'Hello!' }),
      { headers: { 'Content-Type': 'application/json' }}
    ));
  } else {
    event.respondWith(fetch(event.request));
  }
});

I won't bore you with all the ways this snippet could be improved. URL matching could use URLPattern. You could load static data with fetch and store it in IndexedDB. You can go nuts with that.
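For instance - and this is just a sketch, with made-up endpoints and payloads - the matching logic can be factored into a small route table, which scales better than a chain of if/else:

```javascript
// A tiny router: each entry maps a pathname prefix to a factory that
// builds the mock Response. Endpoints and payloads are invented.
const routes = new Map([
  ['/api/hello', () => new Response(
    JSON.stringify({ message: 'Hello!' }),
    { headers: { 'Content-Type': 'application/json' } }
  )],
  ['/api/users', () => new Response(
    '[]',
    { headers: { 'Content-Type': 'application/json' } }
  )],
]);

// Returns the matching factory, or null to let the request hit the network.
function matchRoute(url, table) {
  const { pathname } = new URL(url);
  for (const [prefix, factory] of table) {
    if (pathname.startsWith(prefix)) return factory;
  }
  return null;
}

// In the fetch handler:
// const factory = matchRoute(event.request.url, routes);
// event.respondWith(factory ? factory() : fetch(event.request));
```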

But what about other kind of dynamic responses? Like images?

Generating images: the "easy" way

The easiest way to generate a dynamic image is to create an SVG, which is basically an XML document - meaning it's text. It's a totally feasible task, and you can use libraries like D3.js to generate the SVG elements and paths for you: factories like line() and others return functions that produce what you need to put into the d attribute of <path> elements:

import { pie, arc } from 'd3-shape';

// `data` is an array of numeric values; `colors` an array of fill colors
const pieData = pie().sort(null)(data);
const sectorArc = arc().outerRadius(35).innerRadius(20);

const svg = '<svg viewBox="-40 -40 80 80" xmlns="http://www.w3.org/2000/svg">'
  + pieData.map((sector, index) =>
    `<path d="${sectorArc(sector)}" fill="${colors[index]}"/>`
  ).join('')
  + '</svg>';

event.respondWith(new Response(
  svg, { headers: { 'Content-Type': 'image/svg+xml' }}
));

Dynamically generating SVGs could be great to get the task off the main thread - and the result could even be cached. This is great for charts and infographics, and "easy" enough to accomplish.

Generating other image types

What's trickier is generating a raster image like a PNG or a JPG. "Generation" here means using editing tools to alter a picture or create it from scratch. What we usually do in these cases is use a <canvas> element, get its 2D context and start painting on it with its many drawing directives.

Problem is, service workers don't have access to DOM elements. So, are we out of luck?

Worry not, my friends! Because all workers (including service workers) can create OffscreenCanvas objects. Give the constructor a width and a height in pixels and there you go: a perfectly fine (although invisible) canvas in a service worker:

const canvas = new OffscreenCanvas(800, 600);
const context = canvas.getContext('2d');

For those wondering: yes, you can get a different type of context, although not all of them are available in every browser. You can try using a library like three.js to generate 3D scenes in a service worker (I think I'll try that later).

Now we can do... whatever, basically. Draw lines, arcs, paths, etc. - even modify the geometry of our canvas. It's as simple as drawing on a DOM canvas context, so I won't indulge in this part.

Drawing text

We can indeed write text too. This is important because in other environments - namely, a Paint worklet - we cannot do that:

Note: The PaintRenderingContext2D implements a subset of the CanvasRenderingContext2D API. Specifically it doesn’t implement the CanvasImageData, CanvasUserInterface, CanvasText, or CanvasTextDrawingStyles APIs.

But in a service worker, this is all fine. This means that we have a more powerful (although less performant) environment to generate our background images.

Drawing text is as easy as this:

context.fillStyle = '#222';
context.font = '24px serif';
// (x, y) = (50, 90) will be the *bottom left* corner of the text
context.fillText('Hello, world!', 50, 90);

You can use the font you like here, but I've found that usual standard values like sans-serif, monospace or system-ui don't seem to work, as they all fall back to the default serif font. But you can use font stacks as usual:

context.font = '24px Helvetica, Roboto, Arial, Open Sans';

Moreover, you can use the Font Loading API to load fonts from external resources:

const font = new FontFace('Doto', 'url(./fonts/doto.woff2)');
self.fonts.add(font);
font.load();

self.fonts.ready.then(() => {
  // ...
  context.font = '24px Doto';
});

Sending back to the application

Sending back the response is, again, as easy as calling the convertToBlob method, which returns a promise of - you guessed it - a Blob. And blobs can easily be sent back to the sender.

const blob = await canvas.convertToBlob({ type: 'image/jpeg' });
event.respondWith(new Response(blob));  

The method creates a PNG image by default, but could be instructed to create a JPG file instead, as seen above. 'image/webp' is another common format, but Safari doesn't support it. To be honest, the choice here is a little underwhelming, as newly available and more capable image format decoders aren't reflected in their corresponding encoders. But that's sufficient for most purposes anyway.
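Since convertToBlob silently falls back to PNG when the requested encoder is missing, one way to negotiate a format is to inspect the type of the blob you actually got back. This is only a sketch (the helper name is mine), and it works with anything exposing the convertToBlob contract:

```javascript
// Try each preferred MIME type in order; an unsupported type makes
// convertToBlob fall back to PNG, which we detect via blob.type.
async function convertPreferring(canvas, preferredTypes) {
  for (const type of preferredTypes) {
    const blob = await canvas.convertToBlob({ type });
    if (blob.type === type) return blob; // the encoder is available
  }
  return canvas.convertToBlob(); // last resort: the default PNG
}

// Usage (e.g. WebP where supported, JPEG elsewhere):
// const blob = await convertPreferring(canvas, ['image/webp', 'image/jpeg']);
```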

Fun fact: the method convertToBlob is specific to the OffscreenCanvas class. HTMLCanvasElement has toBlob instead, which takes a callback as its first argument, in the common pre-Promise style of asynchronous task handling.
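If you need the same code to run against both canvas flavors, the callback style is easy to wrap in a promise. The helper name is made up; it mirrors the options of convertToBlob:

```javascript
// Promisified version of HTMLCanvasElement.toBlob, accepting the same
// { type, quality } options object as OffscreenCanvas.convertToBlob.
function toBlobAsync(canvas, { type, quality } = {}) {
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      blob => blob ? resolve(blob) : reject(new Error('Encoding failed')),
      type,
      quality
    );
  });
}

// const blob = await toBlobAsync(canvasElement, { type: 'image/jpeg', quality: 0.8 });
```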

Using a template image

Now, this all works if we want to create a picture from scratch. But what if we want to start from a template image instead of a blank canvas?

If we were working in the main thread, we could slap a picture onto the canvas using the drawImage method of our 2D context, sourcing it e.g. from a readily available <img> element.

Problem is, again, that we can't access the DOM, so we can't reference <img> elements. What we can do, instead, is fetch the picture we need as a background, get its Blob and then convert it to something else that drawImage can digest. Enter createImageBitmap, a global method that's available in service workers too. It returns a promise for an ImageBitmap instance, one of the many lesser-known classes of frontend web development. It's apparently more widely used in WebGL contexts, but drawImage accepts it, so...

const templateBlob = await (await fetch('./img/template.png')).blob();
const template = await self.createImageBitmap(templateBlob);

const canvas = new OffscreenCanvas(template.width, template.height);
const context = canvas.getContext('2d');
context.drawImage(template, 0, 0);

From this point on, we can proceed to draw our scribbles and texts on it, creating a satisfying dynamic image to send back to the user.

Note: this could be more easily solved with an SVG, as you could just use an <image> element to set up a background picture. But that would mean the browser has to load the picture after the generated image has been sent, whereas with this technique it's done before. Something similar applies when picking a font.
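For completeness, the SVG variant of the template approach would look something like this (the helper name and file path are made up):

```javascript
// Builds an SVG string with a background picture set via <image>;
// the browser fetches `href` only when it renders the resulting SVG.
function svgWithBackground(href, width, height, body = '') {
  return `<svg viewBox="0 0 ${width} ${height}" `
    + 'xmlns="http://www.w3.org/2000/svg">'
    + `<image href="${href}" width="${width}" height="${height}"/>`
    + body
    + '</svg>';
}

// const svg = svgWithBackground('./img/template.png', 800, 600,
//   '<text x="20" y="50">Hello!</text>');
```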

Putting it all together

In all these examples, I've used module service workers (i.e. I've used import from other ES modules). Alas, module service workers aren't yet supported by Firefox, but hopefully they will be soon. In the meantime, you might need to adjust your code to use the old importScripts instead.

When importing other scripts into a service worker, either via import or importScripts, remember that the browser will not fire an updatefound event when an imported file changes: it's fired only when the service worker entry script changes.

In a case like ours, where the service worker is needed only to mock the presence of a backend, its life cycle can be short-circuited by calling self.skipWaiting() as soon as the install event is fired, then calling self.clients.claim() on the activate event in order to respond to requests immediately (otherwise, it would start only on the next page refresh).

import { getUser } from './users.js';

self.addEventListener('install', () => self.skipWaiting());

self.addEventListener('activate', event => {
  event.waitUntil(self.clients.claim());
});

const pattern = new URLPattern('/api/docs/mock-:id', location.origin);
self.addEventListener('fetch', event => {
  const match = pattern.exec(event.request.url);
  event.respondWith(match
    ? createDocument(match.pathname.groups.id)
    : fetch(event.request)
  );
});

let templatePromise;
let font;
const createDocument = async userId => {
  const user = await getUser(userId);
  if (!user) return new Response(null, { status: 404 });

  if (!templatePromise) {
    templatePromise = fetch('./img/template.png')
      .then(response => response.blob())
      .then(blob => self.createImageBitmap(blob));
  }
  if (!font) {
    font = new FontFace('Doto', 'url(./fonts/doto.woff2)');
    self.fonts.add(font);
    font.load();
  }

  const [ template ] = await Promise.all([
    templatePromise,
    self.fonts.ready
  ]);

  const canvas = new OffscreenCanvas(template.width, template.height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(template, 0, 0);

  ctx.font = '24px Doto';
  ctx.fillText(`${user.firstName} ${user.lastName}`, 20, 50);

  const blob = await canvas.convertToBlob({ type: 'image/jpeg' });
  return new Response(blob);
};

And this is basically everything, so... have fun with service workers, folks!
