Node.js backend development continues to stand out in 2024 as a powerful and flexible runtime for building scalable and efficient applications, even more so with the rise of other runtimes such as Bun.
In this article, I wanted to provide a lightweight introduction to essential Node.js backend examples that demonstrate the effective use of advanced JavaScript and Node.js features. From harnessing the WHATWG Streams Standard and Web Streams API for efficient data handling to employing the built-in Node.js crypto module for security, working with Buffers for binary data manipulation, leveraging Symbols for encapsulation and namespacing, and utilizing template literals and tagged templates for generating dynamic HTML and SQL queries — each section provides practical, real-world code snippets and insights.
These examples not only showcase the versatility and strength of Node.js in solving backend challenges but also serve as a valuable reference for developers looking to elevate their backend solutions in 2024.
Chapters in this article:
- The WHATWG Streams Standard, Web Streams API, and async iterables
- Working with the Crypto module to validate webhook signatures
- Working with buffers and raw binary data in Node.js
- Using JavaScript Symbol for encapsulation
- Template literals and tagged templates to generate HTML and SQL queries
You can always catch up with the full source-code Node.js backend examples in this GitHub repo.
1. The WHATWG Streams Standard, Web Streams API, and Async Iterables
In 2024, the prevalence of working with streams has significantly increased, especially when dealing with large language models and generative AI (GenAI). The OpenAI SDK for chat completion serves as a prime example of this trend. A typical example of streaming data from the OpenAI SDK API would look like this:
```js
const completion = await openai.chat.completions.create(config);

for await (const chunk of completion) {
  console.log(chunk);
}
```
Here, we use the concept of async iterables, which have simplified asynchronous workflows in Node.js to match the promise-based programming style. The `for await...of` statement creates a loop iterating over async iterable objects as well as sync iterables, including the built-in String, Array, Array-like objects (e.g., `arguments` or `NodeList`), TypedArray, Map, Set, and user-defined async/sync iterables.
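To make the mechanics concrete, here is a minimal sketch of a user-defined async iterable: a hypothetical async generator (the `fakeChunks` name is ours, not from any library) that simulates chunks arriving over time, consumed with `for await...of`:

```js
// Hypothetical async generator that simulates chunks arriving over time,
// e.g., from a network or file source.
async function* fakeChunks() {
  for (const chunk of ["Hello", ", ", "world"]) {
    // Simulate an asynchronous source with a small delay.
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield chunk;
  }
}

async function main() {
  let result = "";
  // for await...of works on any object implementing Symbol.asyncIterator.
  for await (const chunk of fakeChunks()) {
    result += chunk;
  }
  console.log(result); // "Hello, world"
}

main();
```

The same loop shape works unchanged whether the source is this toy generator, an OpenAI completion stream, or a Node.js readable stream.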
Introducing WHATWG Standard Web Streams in Node.js
Web Streams, a part of the WHATWG Streams standard, have been integrated into Node.js, and they provide a robust way of handling streaming data. This standard allows developers to efficiently read, write, and transform streaming data using JavaScript.
Let's look at a complete example of how to use Web Streams in Node.js.
In the following example, we create a function `createReadableStreamFromFile()` that uses the `ReadableStream` class from the Web Streams API to create a stream of data from a file. A second function, `consumeStreamWithAsyncIterator()`, then consumes this stream using an async iterator.
```js
import { ReadableStream } from "node:stream/web";
import fs from "node:fs";

function createReadableStreamFromFile(filePath) {
  const stream = new ReadableStream({
    start(controller) {
      const reader = fs.createReadStream(filePath);
      // Forward each chunk from the Node.js stream into the Web Stream.
      reader.on("data", (chunk) => {
        controller.enqueue(chunk);
      });
      reader.on("end", () => {
        controller.close();
      });
      reader.on("error", (err) => {
        controller.error(err);
      });
    },
  });

  return stream;
}

async function consumeStreamWithAsyncIterator(stream) {
  try {
    for await (const chunk of stream) {
      process.stdout.write(chunk);
    }
  } catch (err) {
    console.error("Error occurred while reading the stream:", err);
  }
}

const filePath = process.argv[2];
const stream = createReadableStreamFromFile(filePath);
consumeStreamWithAsyncIterator(stream);
```
In this code, we're using the Web Streams API to read data from a file in a streaming manner. The function `createReadableStreamFromFile()` returns a `ReadableStream` object from a given file path. This stream is then consumed by `consumeStreamWithAsyncIterator()`, which reads the stream chunk by chunk and writes each chunk to the standard output. If an error occurs during the reading process, it's caught and logged to the console.
By the way, did you notice the security vulnerability in the code above?
If you had the Snyk VS Code extension installed, then you'd get a squiggly linter underline showing you that there's a path traversal vulnerability in the code example.
The Snyk VS Code extension would show you how insecure data can flow into this Node.js backend code example, what this security vulnerability is about, and how to fix it with AI-curated suggestions from live open source projects.
2. Working with the Crypto module to validate webhook signatures
Webhooks have become an integral part of modern backend development, providing a way for different services to communicate with each other in an efficient and real-time manner. However, with the increased use of webhooks comes the need for stronger security measures, one of which is the validation of webhook signatures.
In the realm of webhooks, a signature is a hash that is sent along with the webhook payload, which is calculated using a secret key known only to the sender and the recipient. The recipient can then calculate the hash on their end and compare it with the signature to verify the authenticity of the webhook.
The Node.js Crypto module provides a host of cryptographic functionality, including a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. HMAC (Hash-based Message Authentication Code) is particularly useful for validating webhook signatures.
Let's break down the provided code snippet to understand how it works:
```js
const crypto = require("node:crypto");

const secret = process.env.WEBHOOK_SECRET;
const hmac = crypto.createHmac("sha256", secret);
const digest = Buffer.from(hmac.update(request.rawBody).digest("hex"), "utf8");
const signature = Buffer.from(request.headers["x-signature"] || "", "utf8");

if (!crypto.timingSafeEqual(digest, signature)) {
  throw new Error("Invalid signature.");
}
```
In this snippet, the `crypto.createHmac` method is used to create an HMAC object. This method takes two parameters: the algorithm to be used (in this case, `sha256`) and the secret key.
The HMAC object is then updated with the raw body of the webhook request using the `hmac.update` method. This method can be called multiple times with new data as it is streamed.
The `digest` method is then used to generate the hash. This method can only be called once on the HMAC object, and it returns the calculated hash. The `hex` parameter instructs the method to return the hash in hexadecimal format.
Now, we introduce the concept of secure string comparison. The calculated hash (digest) and the received signature from the webhook request header are then compared using the `crypto.timingSafeEqual` method. This method performs a timing-attack-safe equality comparison between two buffers, making it ideal for comparing cryptographic outputs.
Timing attacks are a type of side-channel attack where an attacker tries to compromise a system by analyzing the time taken to execute cryptographic algorithms. By using a timing-safe method like `crypto.timingSafeEqual`, we protect against these types of attacks. If the digest and signature do not match, an error is thrown, indicating that the webhook request may not be authentic.
3. Working with buffers and raw binary data in Node.js
The Buffer API in Node.js is a powerful tool that allows developers to work directly with binary data. Whether you need to read a file, analyze an image, or process raw data, the Buffer API provides methods to handle such tasks efficiently.
Let's explore some of the common Buffer API methods in Node.js, such as `.from`, `.alloc`, and `.write`, which allow for the creation and manipulation of buffer objects. The `.from` method creates a new buffer using the data passed in as an argument, `.alloc` creates a new buffer of a specified size, and `.write` allows you to write data to a buffer. Another one is the `.concat` method, which is used to concatenate a list of Buffer instances.
```js
let buffer1 = Buffer.from("Hello, ");
let buffer2 = Buffer.from("World!");
let buffer3 = Buffer.concat([buffer1, buffer2]);

console.log(buffer3.toString());
// prints: 'Hello, World!'
```
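The `.alloc` and `.write` methods mentioned above can be sketched in the same way, a short example assuming a UTF-8 string payload:

```js
// Allocate a zero-filled buffer of 12 bytes.
const buf = Buffer.alloc(12);

// Write a string into the buffer; write() returns the number of bytes written.
const bytesWritten = buf.write("Hello, Node");

console.log(bytesWritten); // 11
console.log(buf.toString("utf8", 0, bytesWritten)); // 'Hello, Node'
```

Note that `write()` returns the byte count, not the character count; for multi-byte UTF-8 characters the two differ, which is exactly why working at the byte level matters here.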
Analyzing an image with OpenAI API using Node.js Buffer API
In the next Node.js backend example code, we are using the Buffer API to read an image file and analyze it using the OpenAI API. Firstly, we import the necessary modules and create an instance of the OpenAI API client:
```js
import { readFile } from "node:fs/promises";
import OpenAI from "openai";

const openai = new OpenAI();
```
Next, we read the image file into a buffer:
```js
const imageBuffer = await readFileToBuffer(process.argv[2]);
```
We then validate the image type by checking the file signature against a known PNG image type signature:
```js
function isImageTypeValid(imageBuffer) {
  // PNG files always begin with this 8-byte signature.
  const pngSignature = Buffer.from([
    0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a,
  ]);
  const fileSignature = imageBuffer.subarray(0, 8);
  return pngSignature.equals(fileSignature);
}
```
Finally, we generate a descriptive alt text for the image using the OpenAI API:
```js
async function generateAltTextForImage(imageBuffer) {
  const imageInBase64 = imageBuffer.toString("base64");
  const response = await openai.chat.completions.create({
    model: "gpt-4-vision-preview",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "What's in this image? generate a simple alt text for an image source in an HTML page",
          },
          {
            type: "image_url",
            image_url: {
              url: `data:image/png;base64,${imageInBase64}`,
            },
          },
        ],
      },
    ],
  });

  return response.choices[0];
}
```
Security considerations when using Buffer API
When working with the Buffer API, there are several security considerations to keep in mind. Improper handling of binary data can lead to potential security vulnerabilities such as buffer overflow or underflow errors. Always make sure to validate the input and properly handle the errors.
Also, be mindful of potential security vulnerabilities, such as the `Buffer` constructor, which is now deprecated and should be avoided in favor of safer alternatives like `Buffer.from` or `Buffer.alloc`.
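A short sketch of the safe and unsafe allocation patterns side by side:

```js
// Deprecated and unsafe: new Buffer(n) could expose uninitialized memory
// in older Node.js versions. Avoid it entirely.

// Safe: Buffer.alloc zero-fills the allocated memory.
const safe = Buffer.alloc(8);
console.log(safe.every((byte) => byte === 0)); // true

// Buffer.allocUnsafe is faster but returns uninitialized memory;
// only use it when you immediately overwrite the entire buffer.
const fast = Buffer.allocUnsafe(8);
fast.fill(0);

// Safe: Buffer.from copies data from an existing source.
const copy = Buffer.from("trusted input");
console.log(copy.length); // 13
```

The rule of thumb: default to `Buffer.alloc` and `Buffer.from`, and reach for `Buffer.allocUnsafe` only in hot paths where you control every byte written.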
4. Using JavaScript Symbol for encapsulation
Symbols are a primitive data type introduced in ES6 (ECMAScript 2015) that represent unique and immutable identifiers. They are created with the `Symbol()` function, which optionally accepts a description (a string) that can be used for debugging but does not affect the uniqueness of the symbol. Symbols are primarily used to create unique property keys for objects that do not collide with any other property, including those inherited. This makes them particularly useful for defining private or special properties of objects without risking property name collisions.
Here's a Node.js backend code example of how to use symbols to create a private property in a class as a way to encapsulate data:
```js
const _privateProperty = Symbol("privateProperty");

class MyClass {
  constructor(value) {
    this[_privateProperty] = value;
  }

  getPrivateProperty() {
    return this[_privateProperty];
  }
}

const instance = new MyClass("secret");
console.log(instance.getPrivateProperty());
// Will output 'secret'
// The _privateProperty cannot be directly accessed from outside the class
```
In addition, Symbols are not accessible through object property enumeration (like `for...in` loops or `Object.keys()`), so in a sense, they can be used to simulate private properties and methods for objects. It's important to note, however, that they are not truly private and can still be accessed using reflection methods like `Object.getOwnPropertySymbols()`.
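A short sketch of both sides of that trade-off, hidden from enumeration, yet reachable via reflection:

```js
const hidden = Symbol("hidden");

const obj = {
  visible: 1,
  [hidden]: "not so private",
};

// Symbol keys do not show up in normal property enumeration...
console.log(Object.keys(obj)); // [ 'visible' ]
console.log(JSON.stringify(obj)); // {"visible":1}

// ...but they are still reachable via reflection.
const symbols = Object.getOwnPropertySymbols(obj);
console.log(obj[symbols[0]]); // 'not so private'
```

So treat symbol-keyed properties as a namespacing and encapsulation convention, not a security boundary.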
JavaScript defines a set of well-known symbols that represent internal language behaviors that can be customized by developers. For example, implementing an iterator for a custom object using `Symbol.iterator`:
```js
const iterable = {
  [Symbol.iterator]: function* () {
    yield 1;
    yield 2;
  },
};

for (const value of iterable) {
  // Logs 1 and 2
  console.log(value);
}
```
Fastify’s use of JavaScript's Symbol
Let's look at a more real-world example with the Fastify web application framework and how the project uses Symbols. Specifically, we're going to look at an example from Fastify's plugin architecture. One of the key aspects of Fastify's design is its encapsulation feature, which allows developers to create isolated application contexts using plugins. This is crucial for building large-scale applications where namespace collisions can become a problem.
Fastify uses Symbols to uniquely identify internal properties and methods, ensuring that these do not interfere with user-defined properties or those from other plugins. Here is a simplified example based on the actual use of Symbols in Fastify's source code for encapsulating the plugin's metadata:
```js
const pluginMeta = Symbol("fastify.pluginMeta");

function registerPlugin(instance, plugin, options) {
  if (!plugin[pluginMeta]) {
    plugin[pluginMeta] = { options, name: plugin.name };
  }
  instance.register(plugin, options);
}

function myPlugin(instance, opts, done) {
  done();
}

registerPlugin(fastifyInstance, myPlugin, { prefix: "/api" });
```
In the above, `pluginMeta` is a Symbol used by Fastify to attach metadata to plugin functions. This metadata includes the plugin's options, name, and potentially other necessary information for the framework's internal use. The `registerPlugin` function simplifies the process of attaching metadata to a plugin before registering it with a Fastify instance. This metadata is then accessible within the Fastify framework but remains isolated from the plugin's public interface and the application's global scope.
Using JavaScript's Symbol for internal metadata like this has several advantages in a framework like Fastify:
- Encapsulation: It prevents internal details from leaking into the user space, keeping the public API clean and intuitive.
- Safety: It reduces the risk of accidental interference between plugins or between a plugin and the core framework, as Symbols are not accessible through normal object property enumeration.
- Clarity: It clearly distinguishes between the framework's internal mechanisms and the APIs exposed to developers, making the Fastify codebase easier to maintain and extend.
5. Template literals and tagged templates to generate HTML and SQL queries
Introduced in ES6, template literals offer a more readable and concise syntax for creating strings in JavaScript, and you might have been using them already to write strings and integrate dynamic expressions, such as `` `hello ${name}` ``. No more string concatenation.
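Tagged templates take this one step further: a tag function receives the literal string parts and the interpolated values as separate arguments, which is what lets libraries escape or parameterize user input. A minimal sketch of the mechanics, where the `html` tag and `escapeHtml` helper are ours, not from any library:

```js
// Hypothetical helper that escapes HTML special characters.
function escapeHtml(value) {
  return String(value)
    .replaceAll("&", "&amp;")
    .replaceAll("<", "&lt;")
    .replaceAll(">", "&gt;")
    .replaceAll('"', "&quot;");
}

// The tag receives the literal parts (strings) and interpolated
// values separately, so every value can be escaped before joining.
function html(strings, ...values) {
  return strings.reduce(
    (out, part, i) =>
      out + part + (i < values.length ? escapeHtml(values[i]) : ""),
    ""
  );
}

const name = '<script>alert("xss")</script>';
console.log(html`<p>Hello, ${name}</p>`);
// <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

SQL libraries apply the same idea, but instead of escaping they turn each interpolated value into a bound query parameter.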
We're now seeing a growing trend of using template literals in the form of tagged templates to generate HTML and SQL queries in Node.js backends. Here are two real-world examples in Node.js backend and SSR code:
Tagged templates for generating SQL queries with Vercel's PostgreSQL library
Dealing with SQL queries in Node.js can often lead to verbose and error-prone code, especially when dynamically inserting values into queries. Vercel's PostgreSQL package (`@vercel/postgres`) introduces a safer and more concise way to format SQL queries using template literals:
```js
import { sql } from "@vercel/postgres";

const jediName = "Luke Skywalker";
const { rows } = await sql`SELECT * FROM jedis WHERE name = ${jediName};`;
```
The `sql` tagged template literal function safely interpolates the `jediName` variable into the SQL query, effectively preventing SQL injection attacks.
Tagged templates for generating HTML on Fastify servers
Similar to the SQL example, `fastify-html` is a Fastify plugin that allows developers to use tagged templates to generate HTML content directly in route handlers. This can be particularly useful for server-side rendering (SSR) or generating dynamic HTML content (did someone say htmx?):
```js
import fastify from "fastify";
import fastifyHtml from "fastify-html";

const app = fastify();
await app.register(fastifyHtml);

app.get("/", async (req, reply) => {
  const name = req.query.name || "World";
  return reply.html`Hello ${name}`;
});
```
Closing up
If you liked this article, you might also want to check out best practices for creating a modern npm package with security in mind and 10 best practices to containerize Node.js web applications with Docker.
And if you're into security stories, you’ll want to make sure you're well-equipped to combat supply chain attacks on npm with developer security practices that I mentioned in the article.
Lastly, don't forget to check out Snyk with a free account to start securing your Node.js code, dependencies, and Docker container images.