Jonas Scholz for Sliplane

Understanding Next.js Docker Images

If you've tried to containerize a Next.js app, you've probably found the official documentation a bit lacking. Especially for beginners, the provided Dockerfile can be confusing. In this post, we'll go through that Dockerfile and explain what exactly is going on!

The Dockerfile

Let's start by looking at the Dockerfile provided by the wonderful Next.js team.

The Dockerfile can be broken down into 4 parts: the base image, the dependency installation, the build, and the runtime.


The Base Image


The first part (and first line!) of the Dockerfile is the base image that the rest of the Docker image is built on top of. Similar to an operating system like Linux or Windows, the base image provides the foundation and structure for everything that follows. In this case, we use node:18-alpine, a small but capable image that ships with Node.js. The 18 refers to the Node.js version. If you want to use a different version, you can replace the 18 with 16 or 20!
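In the official example, that first stage looks roughly like this (AS base just gives the stage a name, so the later stages can build on top of it):


FROM node:18-alpine AS base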

Dependency Installation


As with any Next.js project, we first need to install our dependencies. That happens in the next block. But before the install, we see this line:



RUN apk add --no-cache libc6-compat



Why is this line here? Well, the node:18-alpine image is based on Alpine Linux, a very small Linux distribution. It is so small that it doesn't ship with all the libraries that some Node.js packages need. The libc6-compat package provides some of these libraries, reducing the chance of errors when installing dependencies. This line isn't always needed, but it's a good idea to include it just in case. If you want a more in-depth explanation of why this might be needed, check out this GitHub repo.

Now we can finally start working on our dependencies! To keep everything neat and tidy, we first create a new directory called /app and set it as our working directory. Then, we copy over the package.json, package-lock.json, yarn.lock, and pnpm-lock.yaml files.
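Inside the deps stage (FROM base AS deps), those two steps look roughly like this; the trailing * makes each lockfile optional, so the COPY doesn't fail if your project only has one of them:


WORKDIR /app
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./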

Now you might be looking at your own Next.js app and only see one or two of these files. That's okay! The Dockerfile is designed to work with all three of the most popular Node.js package managers: npm, yarn, and pnpm.

Because this Dockerfile is designed to work with all three package managers, the next part is also a bit more complicated:



RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi



This block of code figures out which package manager you're using by looking at which lockfile exists. If you're using yarn, it will run yarn --frozen-lockfile. If you're using npm, it will run npm ci. If you're using pnpm, it will run yarn global add pnpm && pnpm i --frozen-lockfile. If no lockfile is found, it prints an error message and aborts the build.

If you know which package manager you're using, you can simplify this part of the Dockerfile. For example, if you're using npm, you can drop yarn.lock and pnpm-lock.yaml from the COPY instruction and replace the entire block with:



COPY package.json package-lock.json* ./
RUN npm ci



Try it out and see if it works! If not, feel free to write a comment below and I'll be happy to help you out :)

The Build

The next part of the Dockerfile contains the actual build process, where we compile our Next.js app.


As before, we first start a new stage and set our working directory to /app. Then, we copy the node_modules folder over from the previous deps stage. Only after that do we copy in the rest of our app. This is done in two steps to improve build times: the dependency installation only depends on the lockfiles, so as long as those don't change, Docker can reuse the cached deps stage. If we copied the entire app in before installing, every code change would invalidate that cache and force a full reinstall of the dependencies. Smart, right?

Finally, we run yarn build to execute the build command defined in our package.json file. This command is usually next build, but it can be whatever your project defines. It doesn't really matter whether we use yarn or npm here, because the dependencies are already installed and the choice of package manager has no real effect on the build itself!
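Put together, the builder stage looks roughly like this (matching the official example; swap yarn build for npm run build or pnpm build if that's what your project uses):


FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN yarn build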

The Runtime

And finally, we are done with installing our dependencies and building our app! The last part of the Dockerfile is the runtime, where we actually run our app. This part is a bit longer, so let's go through it step by step.


Again, we start a new stage and set our working directory to /app. We then set the NODE_ENV environment variable to production. This signals to Next.js that we are running in production mode, which improves performance. It can also affect other parts of your app and of Node.js itself. Check out this awesome documentation page for more information!

Next, we create a new group and user. This is done to improve security: without it, our app would run as root, which can be a security issue. Generally, you want to follow the "principle of least privilege", which says you should only give your app the permissions it actually needs. Our app doesn't need root permissions, so we create a dedicated user and group for it and then switch to that user with USER nextjs.
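A minimal sketch of this part of the runner stage, based on the official example (in the official file, the USER nextjs switch actually happens slightly later, after the build artifacts have been copied in):


FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production

# create an unprivileged group and user to run the app
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

USER nextjs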

Now we can finally copy over the build artifacts from the builder stage. We first copy the ./public folder, which contains all of our static assets. Then we copy the ./.next/standalone and ./.next/static folders, which contain our compiled code. This step only works if you set output: 'standalone' in your Next.js config; without it, the standalone folder isn't generated and your dependencies won't be included!
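Those copy steps look roughly like this; the --chown flag makes the nextjs user the owner of the copied files:


COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static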

At this point we have improved security, enabled production mode, and copied over all the build artifacts. The next three lines are all about networking and making our Next.js app reachable from the network.

We first declare port 3000 with EXPOSE 3000. On its own this doesn't actually do anything, but it's good practice to include it so that other developers know which port the Docker container will listen on. Next, we set the PORT environment variable to 3000, which Next.js uses to determine which port to run on. Finally, we set the HOSTNAME environment variable, which tells the standalone server which address to bind to; in the official Dockerfile it is set to "0.0.0.0" so the app is reachable from outside the container.
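In the Dockerfile, those three lines look roughly like this:


EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"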

The last line is the actual command that runs when the Docker container starts; it is not executed during the image build. Since we compiled the Next.js app into a standalone output, we can simply start it with node server.js. That's it! We're done! 🎉🎉🎉
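That last line is the CMD instruction:


CMD ["node", "server.js"]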

Conclusion

That was a long one, good job! I hope this post helped you understand the Next.js Dockerfile a bit better. If you have any questions, feel free to leave a comment below and I'll be happy to help you out! If you have any suggestions for future posts, I'd love to hear them as well. Thanks for reading! 😊

Want to host your next cool dockerized Next.js project? Check out Sliplane!

Top comments (3)

SirMoustache

What is the benefit of having several layers inside one Docker image?

What I mean is, we have a deps layer where we install node modules:

FROM base AS deps

COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

But then we start every new layer by copying from the previous:

# copy node_modules from deps 
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN npm run build
# copy  files from builder 
FROM base AS runner
WORKDIR /app
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

I know that Docker caches per command and not per layer, but are there some additional benefits, or is it just done for structure? Because, for example, copying node_modules can affect build time.

Jonas Scholz

Hi! I think you are mixing two things up here: each instruction roughly translates to one layer (that's what's mainly getting cached and what speeds up build times if done correctly), while FROM base AS builder starts a new stage. When you create a new stage you basically start from scratch, and you need to copy over from the previous stage whatever you want to keep. For example, when we build we need all our dev dependencies and maybe tooling like pnpm or yarn. If we just want to run the compiled JavaScript bundle, we don't need any of that. So we create a new, empty stage (runner) and only copy in the files that we actually need to run.

Yes, this might make the build slightly slower (modern disks and CPUs are incredibly fast!), but it makes the final image considerably smaller. With Node that's often around 1 GB less. It's a tradeoff that makes sense 99% of the time, since bandwidth/storage is more expensive than your CPU :)

Daithi O’Baoill

Excellent, thanks