David Chibueze Ndubuisi

Mastering Docker for Node.js: Advanced Techniques and Best Practices

Introduction

In today's fast-paced software development environment, containerization has become a popular and effective way to package and deploy applications. Docker is one of the most widely used containerization technologies, and it has become an essential tool for Node.js developers looking to build, test, and deploy their applications more efficiently.

The basics of Dockerizing a Node.js application were covered in my previous article, which walked through installing Docker, creating a minimal Dockerfile, and running the commands to build and start the container. However, as your applications become more complex and sophisticated, you’ll need more advanced techniques to make the most of Docker.

In this article, we will explore some advanced Docker techniques and best practices that will help you take your Node.js containerization game to the next level, as we build a simple authentication API. We will discuss how to use multi-stage builds, environment variables, Docker volumes, and other techniques to master Docker for Node.js. These techniques will help you create more secure, scalable, and efficient Docker images that are tailored to the unique requirements of your Node.js application. So buckle up and get ready to take your Docker skills to the next level!

Prerequisite

Before diving into the advanced Docker techniques covered in this article, it is recommended that you have a basic understanding of Docker and its fundamental concepts. Familiarity with Node.js, Express, and MongoDB is also beneficial but not essential. If you are new to Docker or need a refresher, check out my previous article on "How to set up Docker in Node.js" which covers the basics. Additionally, you should have Docker installed on your local machine and a basic understanding of the command-line interface. With these prerequisites in place, you will be ready to follow along and master Docker for Node.js!

Building a simple Authentication API

To better understand the advanced Docker techniques that we will cover in this article, we will create a simple authentication API using Node.js, Express, and MongoDB. Our application will have two endpoints - one for user registration and another for user login.

By containerizing our application with Docker, we will be able to showcase various advanced techniques such as multi-stage builds, environment variables, Docker volumes, and more. These techniques will help you create a more efficient, secure, and scalable Docker image that caters to the unique requirements of your Node.js application. So, let’s get started!

Setting up the Project

To get started, we need to set up our project directory and install the necessary dependencies. We will use the express and mongoose packages for building the application, the bcrypt package for password hashing, and a few other packages explained below. Follow these steps:

a. Create a new directory for your project and navigate to it:

mkdir docker-node-app && cd docker-node-app

b. Initialize your new Node.js project with the following command:

npm init -y

c. Install the required dependencies by running the following commands:

npm install express mongoose bcrypt jsonwebtoken dotenv
npm install --save-dev nodemon

The above commands install the required dependencies for our authentication application; nodemon is saved as a dev dependency since it is only used during development. Let's take a closer look at each of these dependencies:

  • express: A popular Node.js framework for building web applications and APIs.

  • mongoose: A library that provides a simple schema-based solution for modeling MongoDB data.

  • bcrypt: A library used for password hashing and storing passwords securely.

  • jsonwebtoken: A library used for generating and verifying JSON web tokens.

  • dotenv: A zero-dependency module that loads environment variables from a .env file into process.env. We will use this to load sensitive configuration data for our application.

  • nodemon: A tool for Node.js that automatically restarts the application when changes are made to the code, making development easier and more efficient.

By installing these dependencies, we have laid the foundation for our authentication API. In the next section, we will create the basic structure of our application and define the necessary routes.
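
For reference, a minimal .env file for local development could look like this (these values are placeholders, not part of the project; the keys match the process.env lookups we will write shortly):

PORT=8080
MONGO_URI=mongodb://localhost:27017/docker-node-app
JWT_SECRET=replace-me-with-a-long-random-string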

Defining the Application Structure

Now that we have installed all the necessary dependencies, let's move forward and define the structure of our authentication application.

To ensure a more organized and maintainable code base, we will create a "src" directory that will house our application files and folders. Within the "src" directory, we will create a "routes" directory for our registration and login routes, a "models" directory for our database schema and User model, and a "controllers" directory for the request-handling logic behind each route.

Use the following commands to create the necessary directories and files:

mkdir src
cd src
touch server.js
mkdir models routes controllers
touch models/user.js routes/auth.js controllers/authController.js
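
After running these commands (and the earlier npm install, which also creates node_modules and package-lock.json), the project layout should look like this:

docker-node-app/
├── package.json
└── src/
    ├── server.js
    ├── models/
    │   └── user.js
    ├── routes/
    │   └── auth.js
    └── controllers/
        └── authController.js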

In server.js, we will set up the basic Express server and connect to the MongoDB database:

const express = require("express");
const mongoose = require("mongoose");
const dotenv = require("dotenv");
const router = require("./routes/auth");

// load variables from .env into process.env
dotenv.config();

const app = express();
const port = process.env.PORT || 8080;

// connect to MongoDB, falling back to a local instance if MONGO_URI is unset
const connectDB = async () => {
    try {
        await mongoose.connect(
            process.env.MONGO_URI || "mongodb://localhost:27017/docker-node-app"
        );
        console.log("MongoDB connected");
    } catch (error) {
        console.error(error);
    }
};

connectDB();

// parse incoming JSON request bodies
app.use(express.json());
// mount the authentication routes under /api
app.use("/api", router);

app.listen(port, () => {
    console.log(`Server listening at http://localhost:${port}`);
});

In models/user.js, define the User schema and model using Mongoose by copying and pasting the following code:

const mongoose = require("mongoose");
const bcrypt = require("bcrypt");

const userSchema = new mongoose.Schema({
    name: {
        type: String,
        required: true,
    },
    email: {
        type: String,
        required: true,
        unique: true,
        lowercase: true,
    },
    password: {
        type: String,
        required: true,
        minlength: 8,
    },
});

// hash user password before saving into database
userSchema.pre("save", async function (next) {
    // skip re-hashing if the password field hasn't changed
    if (!this.isModified("password")) return next();
    try {
        const salt = await bcrypt.genSalt(10);
        const hashedPassword = await bcrypt.hash(this.password, salt);
        this.password = hashedPassword;
        next();
    } catch (error) {
        next(error);
    }
});

const User = mongoose.model("User", userSchema);

module.exports = User;

In routes/auth.js, copy and paste the following code to define the authentication routes using Express:

const express = require("express");
const authController = require("../controllers/authController");

const router = express.Router();

router.post("/register", authController.register);
router.post("/login", authController.login);

module.exports = router;

Next, let's define the controller functions for user registration and login in controllers/authController.js:

const User = require("../models/user");
const bcrypt = require("bcrypt");
const jwt = require("jsonwebtoken");

const register = async (req, res, next) => {
    try {
        const { name, email, password } = req.body;
        const user = await User.create({ name, email, password });
        res.status(201).json({
            success: true,
            message: "User registered successfully",
            data: user,
        });
    } catch (error) {
        next(error);
    }
};

const login = async (req, res, next) => {
    try {
        const { email, password } = req.body;
        const user = await User.findOne({ email });
        if (!user) {
            return res
                .status(401)
                .json({ success: false, message: "Invalid email or password" });
        }
        const isMatch = await bcrypt.compare(password, user.password);
        if (!isMatch) {
            return res
                .status(401)
                .json({ success: false, message: "Invalid email or password" });
        }
        const token = jwt.sign(
            { userId: user._id },
            process.env.JWT_SECRET || "secret"
        );
        res.json({ success: true, token });
    } catch (error) {
        next(error);
    }
};

module.exports = {
    register,
    login,
};

In the above authController.js file, we have defined two important functions - register and login. Let's start with the register function. Here, we are creating a new user in our database using the User.create() method from the Mongoose library. This method takes the user data we received from the client and saves it to our MongoDB database. Thanks to the pre-save hook we defined in the User model, the user's password is automatically hashed with bcrypt before it is saved, so the plain-text password is never stored and cannot be recovered from the hash.

Moving on to the login function, we first search for the user by their email address using the User.findOne() method from Mongoose. Once we have the user object, we use the bcrypt.compare() method to check if the password provided by the user matches the hashed password in the database. If the password is correct, we generate a JSON Web Token (JWT) using the jwt.sign() method from the jsonwebtoken package. As written, the token's payload contains only the user's ID; it is sent back to the client for use in subsequent API requests.

Overall, these two functions provide the basic functionality required for user authentication in our application. The register function allows new users to create an account with a secure, hashed password, while the login function verifies the user's credentials and generates a secure token for future use.
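
Although we won't add any protected routes in this article, a client would typically send this token back in the Authorization header of later requests. As a rough sketch (this middleware is an illustration, not part of the project's code), verifying the token in Express could look like this:

const jwt = require("jsonwebtoken");

// hypothetical middleware for protected routes - not part of this project
const requireAuth = (req, res, next) => {
    // expect a header of the form: "Authorization: Bearer <token>"
    const header = req.headers.authorization || "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : null;
    if (!token) {
        return res.status(401).json({ success: false, message: "No token provided" });
    }
    try {
        // jwt.verify throws if the token is invalid or signed with another secret
        const payload = jwt.verify(token, process.env.JWT_SECRET || "secret");
        req.userId = payload.userId;
        next();
    } catch (error) {
        return res.status(401).json({ success: false, message: "Invalid token" });
    }
};

module.exports = requireAuth;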

Finally, update your package.json file with the following code (server.js should already match the version we wrote earlier):
package.json:

{
    "name": "docker-node-app",
    "version": "1.0.0",
    "description": "",
    "main": "src/server.js",
    "scripts": {
        "dev": "nodemon src/server.js",
        "build": "NODE_ENV=production node server.js"
    },
    "keywords": [
        "docker",
        "node"
    ],
    "author": "",
    "license": "ISC",
    "dependencies": {
        "bcrypt": "^5.1.0",
        "dotenv": "^16.0.3",
        "express": "^4.18.2",
        "jsonwebtoken": "^9.0.0",
        "mongoose": "^7.0.1"
    },
    "devDependencies": {
        "nodemon": "^2.0.21"
    }
}


Multi-Stage Builds

Now that we have created a basic authentication API, let's containerize it using Docker. The first thing we will do is create a Dockerfile for the application, and along the way we will look at how multi-stage builds can be used to keep the final image size down.

What are Multi-Stage Builds?

Multi-stage builds are an extremely useful Docker feature that lets us optimize our images by breaking the build process into multiple stages. Each stage of the build process is essentially a separate image that has its own base image and set of instructions. This allows us to create a final Docker image that only includes the files needed to run our application, without any of the build tools or dependencies that were used during the build process.

The multi-stage build process is initiated by the use of the FROM instruction in our Dockerfile. Each time we use the FROM instruction, we are essentially starting a new stage in the build process. Each stage can have its own set of instructions, such as installing dependencies or compiling code. Once a stage is complete, we can copy files from that stage to another using the COPY instruction. This is useful for copying over only the necessary files for our application while leaving behind any unnecessary files or dependencies that were used during the build process.

In summary, multi-stage builds allow us to create optimized Docker images that are tailored specifically for running our application, without any unnecessary bloat. By breaking up the build process into multiple stages, we can ensure that each stage is as efficient and optimized as possible, leading to faster build times and smaller images.
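
To make this concrete, here is a rough sketch of what a two-stage build for a Node.js project like ours could look like. This is an illustration of the pattern rather than the exact Dockerfile we will write below, and the --omit=dev flag assumes npm 8 or newer:

# Stage 1: install everything needed to build and test the app
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# a compile or test step would run here, e.g. RUN npm test

# Stage 2: lean runtime image with only production dependencies
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
# copy only the application source from the build stage
COPY --from=build /app/src ./src
EXPOSE 8080
CMD ["node", "src/server.js"]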

Creating a Dockerfile

In this section, we will create a Dockerfile for our authentication API. For local development we will keep it to a single stage that runs the app with nodemon; for a production image, you would apply the multi-stage pattern sketched above to ship only what the application needs at runtime.

# development image
FROM node:18-alpine

# set working directory
WORKDIR /app

# copy package.json and package-lock.json
COPY package*.json ./

# install dependencies
RUN npm install

# copy source code
COPY . .

# expose port 8080
EXPOSE 8080

# start app
CMD ["npm", "run", "dev"]

Let's break down what's happening in this Dockerfile:

The first line of the Dockerfile specifies the base image that we'll use to build our Node.js application. In this case, we're using the node:18-alpine image, which is a lightweight Alpine Linux-based image that includes Node.js 18.

FROM node:18-alpine

Next, we set the working directory for our application inside the Docker container:

WORKDIR /app

We then copy the package.json and package-lock.json files to the working directory:

COPY package*.json ./

This step is important because it allows Docker to cache the installation of our application's dependencies. If these files haven't changed since the last build, Docker can skip the installation step and use the cached dependencies instead.

We then install our application's dependencies using npm install:
RUN npm install

After that, we copy the rest of our application's source code to the Docker container:

COPY . .

This includes all of our application's JavaScript files, as well as any static assets like images or stylesheets.

Next, we declare that the container listens on port 8080. Note that EXPOSE is essentially documentation; the actual host-to-container port mapping happens at run time, as we'll see in docker-compose.yml:
EXPOSE 8080

Finally, we specify the command that will be run when the Docker container starts up. In this case, we're using npm run dev to start our application in development mode:

CMD ["npm", "run", "dev"]

This will start our application using the dev script specified in the package.json file.
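
Before we bring in Docker Compose in the next section, you can already build and run this image on its own (assuming the Dockerfile is in the project root and a MongoDB instance is reachable from the container):

docker build -t docker-node-app .
docker run -p 8080:8080 docker-node-app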

Defining Docker Services with Docker Compose

In the previous section, we created a Dockerfile for our app and optimized the Docker image using multi-stage builds. Now, we will take a step further by defining the Docker services for our application using Docker Compose.

Docker Compose is a tool that allows us to define and run multi-container Docker applications. In this section, we will define the services required for our Node.js authentication API and how to run them using Docker Compose.

version: '3'

services:
  app:
    image: docker-node-app
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: development
      MONGO_URI: mongodb://app-db:27017/docker-node-app
      JWT_SECRET: my-secret # you can use any string
    ports:
      - '8080:8080'
    depends_on:
      - app-db

  app-db:
    image: mongo:5.0
    restart: always
    ports:
      - '27017:27017'
    volumes:
      - app-db-data:/data/db

volumes:
  app-db-data:

If you find the contents of the above docker-compose.yml file strange, don’t worry; here’s a detailed explanation of what’s going on:

Services:

The services section of the docker-compose.yml file defines the different containers that make up our application. In this case, we have two services: app and app-db.

app

The app service is responsible for running our Node.js application. Here are the key details:

  • image: This specifies the name of the Docker image that will be used to run the app service. In this case, we're using the docker-node-app image that we built in the previous steps.

  • build: This section tells Docker Compose how to build the Docker image for the app service. We specify the context as . (the current directory) and the dockerfile as Dockerfile because it’s in the project root directory.

  • restart: This tells Docker Compose to always restart the app service if it fails or is stopped.

  • environment: This specifies the environment variables that will be set in the app service. In this case, we're setting the NODE_ENV variable to development, the MONGO_URI variable to mongodb://app-db:27017/docker-node-app (the hostname app-db is simply the service name, which Docker Compose's internal DNS resolves to the database container), and the JWT_SECRET variable to my-secret (the secret string used to sign JWT tokens).

  • ports: This specifies that we want to expose port 8080 on the host machine and map it to port 8080 in the app container.

  • depends_on: This specifies that the app service depends on the app-db service being started first. Note that depends_on only controls start order; it does not wait for MongoDB to be ready to accept connections (see the sketch below).
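
If you need Compose to wait until MongoDB is actually accepting connections, one option (with a recent version of Docker Compose) is to add a healthcheck to the database service and gate the app on it. A sketch, assuming the mongosh shell bundled in the official mongo:5.0 image:

  app:
    depends_on:
      app-db:
        condition: service_healthy

  app-db:
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5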

app-db

The app-db service is responsible for running our MongoDB database. Here are the key details:

  • image: This specifies the name of the Docker image that will be used to run the app-db service. In this case, we're using the mongo:5.0 image.
  • restart: This tells Docker Compose to always restart the app-db service if it fails or is stopped.
  • ports: This specifies that we want to expose port 27017 on the host machine and map it to port 27017 in the app-db container.
  • volumes: This specifies that we want to use a Docker volume named app-db-data to persist the data for our MongoDB database.

Volumes

The volumes section of the docker-compose.yml file defines the Docker volumes that will be used by our application. In this case, we have one volume named app-db-data that will be used to persist the data for our MongoDB database.
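
Named volumes are managed by Docker itself, so the data survives docker compose down and container rebuilds. You can list and inspect them from the command line; note that Compose prefixes the volume name with the project name, which defaults to the directory name:

docker volume ls
docker volume inspect docker-node-app_app-db-data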

Adding a .dockerignore file for optimization

As we've been building our Docker image for our Node.js application, we've been copying files and directories from our project directory into the image using the COPY command in our Dockerfile. However, not all files and directories in our project directory are necessary or desirable to include in the image. In fact, including unnecessary files can bloat the size of our image and increase build times. This is where the .dockerignore file comes in - it allows us to specify files and directories that we want to exclude from the Docker build context. In this section, we'll take a closer look at how to use the .dockerignore file to ensure that only the necessary files are included in our Docker image.

Create a .dockerignore file in the project root directory and paste in the following:

node_modules
npm-debug.log
.DS_Store
.env
.git
.gitignore
README.md

The node_modules directory contains all the installed packages and modules, which we don't need to include in our image since we can install them using npm install in our Dockerfile. The npm-debug.log file is also not needed and can be ignored.

The .DS_Store file is a hidden file created by macOS Finder that stores folder-specific metadata. We don't need this file either.

The .env file contains environment variables and is not required in the Docker image as we set our environment in our docker-compose.yml file.

The .git directory and .gitignore file are also not needed in our image.

Finally, the README.md file is not needed in the production image, but we may want to keep it for reference during development.

By adding a .dockerignore file with the above contents to our project directory, we can ensure that these files and directories are not included in the Docker build context. This helps to minimize the size of our image and reduce build times.

Now, with everything set we can build and run the entire application stack using a single command:

docker compose up

This will build and start the MongoDB container and the Node.js container for our application.
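
A couple of variations worth knowing: --build forces the image to be rebuilt, -d runs the stack in the background, and docker compose down stops and removes the containers (the named volume keeps our data):

docker compose up --build -d
docker compose logs -f app
docker compose down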

Before we conclude, it's important to test our application to make sure everything is working as expected. You can test the application using an API testing tool like Postman.

To test the endpoints, send a POST request to the register endpoint: http://localhost:8080/api/register with the following JSON payload (you can edit the details):

{
   "name": "Test",
   "email": "test@email.com",
   "password": "password"
}

This should return a JSON response similar to this:

{
   "success": true,
   "message": "User registered successfully",
   "data": {
       "name": "Test",
       "email": "test@email.com",
      ...
   }
}

Similar to the register endpoint, send a POST request to the login endpoint: http://localhost:8080/api/login with the registered user’s details:

{
   "email": "test@email.com",
   "password": "password"
}

This should return a JSON response with a token similar to this:

{
   "success": true,
   "token": "eyJhbGciOiJIUzI1NiIsInR..."
}
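
If you prefer the command line to Postman, the same requests can be made with curl (assuming the stack is running locally on port 8080):

curl -X POST http://localhost:8080/api/register \
  -H "Content-Type: application/json" \
  -d '{"name": "Test", "email": "test@email.com", "password": "password"}'

curl -X POST http://localhost:8080/api/login \
  -H "Content-Type: application/json" \
  -d '{"email": "test@email.com", "password": "password"}'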

Conclusion

So far, we've covered advanced Docker techniques and best practices for building and containerizing a Node.js authentication API. We started by setting up a basic Node.js application and dockerizing it using a Dockerfile. We then explored multi-stage builds, which enabled us to reduce the size of our Docker image and improve performance by separating the build and runtime environments.

We then implemented user authentication functionality using JSON Web Tokens (JWTs) and added the necessary dependencies to our application. We also covered how to configure our Node.js application with environment variables, loading them from a .env file with dotenv during local development and from the environment section of our Compose file when running in Docker.

Next, we looked at how to use Docker Compose to define and orchestrate multi-container applications. We defined two services: one for our Node.js application and one for our MongoDB database. Compose places both services on a shared network so they can reach each other by service name, and we defined a volume to persist our database data.

We then used a single docker compose up command to build and run our application, and tested it using an API testing tool like Postman.

Finally, we discussed the importance of including a .dockerignore file in our project directory to exclude unnecessary files and directories from the Docker build context, and how this can help to reduce the size of our image and improve build times.

The full code for this project is available on GitHub at https://github.com/davydocsurg/docker-node-app. By following these advanced Docker techniques and best practices, we've built a scalable and secure authentication API using Node.js and Docker. I hope that this article has been helpful in advancing your Docker and Node.js knowledge and skills and that you feel confident in applying these techniques to your own projects. So go ahead and try out these techniques in your next Node.js project!

Top comments (17)

Alicia Sykes • Edited

This is a really nicely explained article David :) Take my Unicorn 🦄
I would have loved a guide like this a few years ago, when I was Dockerizing my first application!

One final step that could be good to also mention (but maybe a bit out of scope) is distributing images on a registry like DockerHub / GHCR. Either through the UI, via CLI or using something like GitHub Actions to build, tag and publish images to registries, ready for direct consumption by users.

David Chibueze Ndubuisi

Thank you so much, Alicia. I'm glad you found the article helpful!

And you're absolutely right about the importance of distributing images on a registry like DockerHub or GHCR. That step is definitely worth mentioning, and I appreciate you bringing it up. It can be a bit overwhelming at first, but it's an essential part of the process, especially for teams working on collaborative projects.

Thanks again for reading and commenting on my article. I appreciate your feedback!

Baljitweb • Edited

After docker compose up, I checked in Postman with the mentioned JSON payload.
It is NOT working. Did we miss any step?
Even localhost:8080 is also not working, and Postman sends this error:

Error: Cannot POST /api/auth/register

David Chibueze Ndubuisi

The URL is not correct, it should be: localhost:8080/api/register

Baljitweb • Edited

But you had mentioned in your blog:
"To test the endpoints, send a POST request to register endpoint: localhost:8080/api/auth/register with the following JSON payload."

David Chibueze Ndubuisi • Edited

Did your MongoDB connection establish successfully?
Also, check if you missed any steps.

Baljitweb • Edited

Yes, the console log is:
node-with-docker-app-1 | MongoDB connected

Also, I checked out the GitHub repo [github.com/davydocsurg/docker-node...] and tried docker compose up from this repo.
Same error.

David Chibueze Ndubuisi

Can you provide a screenshot of the error?

Baljitweb

Sure...

[screenshot of the error]

Ashish Binu

Your POST URL is incorrect. You can either change it to localhost:8080/api/register, or change your routes in routes/auth.js to /auth/register and /auth/login.

Wilson Ibekason • Edited

@baljitweb Have a look at your setup as well
[screenshot of the request setup]

Muzzammil Shaikh

Nice content, easy to understand the code line by line. It's great especially for beginners.

Wilson Ibekason

This is really amazing and straightforward. I had issues containerizing my Node.js app with Docker, but your article helped me resolve them. I would also recommend a TypeScript version.

David Chibueze Ndubuisi

Thank you @wilsonibekason. I'm glad you found this helpful.

William Glasse

Really great article! This was a nice refresher for me as I'm getting back into web development after a break.

My only thought was that your demonstration doesn't highlight a multi-stage build. You speak to the benefits of why we'd write a multi-stage build, but you'd need to use multiple images for it to be multi-stage.

# Stage 1: Build the application
FROM node:14 AS build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Copy the built application to a smaller base image
FROM node:14-alpine
WORKDIR /app
COPY --from=build /app/dist /app
CMD ["npm", "start"]

In this example, the first stage builds the application using the official Node.js image, and the second stage copies only the built files to a smaller Node.js Alpine image. The --from=build option in the COPY instruction tells Docker to copy files from the first stage of the build.

Богуслав

Well, I was looking for ways to optimize the build of my app in the cloud, and you said that making the build multi-stage helps with that, but in your example everything was done in just one stage. There were no examples of how one could use multiple FROM directives.

Alejandro Gomez Canal

I would check the Dockerfile: you mention multi-stage builds but only use one stage, so you are not getting the benefits of multi-stage image builds.