Alfonso Domenech for One Beyond


Build Faster: Your Guide to a Quick-Start Project Template

Every time I've wanted to embark on a personal project - to learn a new technology or develop an idea - I've found myself redoing the same configurations for code structure and maintenance, not to mention execution and deployment. It can be frustrating, but I've come to realize that these steps are crucial for success. That's why I'm sharing a minimal template with the most common configurations - the essentials that I believe will help you achieve your goals. We are going to introduce and configure the following technologies: NestJS, commitlint, Husky, Docker, and GitHub Actions.

If you want to take a look at it while you read, you can check out the official repo.

NestJS

NestJS is our backend superhero in the template repo. It's like a magic trick for building solid server-side applications effortlessly in TypeScript or JavaScript. Check out more at NestJS Official Website.
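If you'd rather start from scratch than clone the template repo, the Nest CLI can scaffold an equivalent project - a minimal sketch, where the project name is just a placeholder:

npm install -g @nestjs/cli
nest new quick-start-template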

Creating the .nvmrc file

In this part of the process, the whole team must work with the same version of NodeJS to ensure that all changes made to the application are compatible. This can also be achieved with Docker, but we will cover that later.

In the .nvmrc file we have to put the version of node we want to use.



20



Then we have to install nvm.
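A minimal sketch of that flow, assuming you use nvm's install script (check the nvm repo for the current release tag - the one below is only an example):

# Install nvm (the version tag here is an example)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# From the project root, install and activate the version pinned in .nvmrc
nvm install
nvm use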

Configure commitlint

If you've worked on projects with a team, you may have noticed that everyone creates different branches and commits changes with random messages. This can make things confusing. That's where commitlint comes in. It helps make sure that commit messages are more consistent and easier to understand.

💡 With Conventional Commits, every message follows the structure type(scope): subject - for example, feat(auth): add login endpoint - with types such as feat, fix, docs, or chore. Enforcing this with commitlint gives the team a readable, consistent history and makes it possible to automate changelogs and semantic versioning.

Now we have to set up commitlint in our project, following the instructions of the commitlint library.

First, we must install the library and the convention we want to follow. In our case, we are going to use conventional commits.



npm install --save-dev @commitlint/cli @commitlint/config-conventional



Then we have to configure commitlint to use the conventional config. Let's create our commitlint.config.js.



module.exports = {
  extends: ['@commitlint/config-conventional'],
};


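With that config in place, you can sanity-check a message from the command line before wiring up any hooks; for instance:

# A message that follows the convention passes
echo "feat: add health check endpoint" | npx commitlint

# A free-form message is rejected with an explanation
echo "fixed stuff" | npx commitlint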

Set up Husky

Husky is a library that lets developers run commands on the different Git hooks. Why do we want this? As we said, we want to keep the project consistent, and with Husky we can enforce that automatically. Let's see how.

First, we need to install Husky in our project. For this, we are going to follow Husky's automatic installation, which is the recommended approach.



npx husky-init && npm install



This command performs the following actions.

  1. Add the prepare script to package.json.
  2. Create a sample pre-commit hook that you can edit (by default, npm test will run when you commit).
  3. Configure Git hooks path.

If you prefer a manual installation or want to customize the Husky configuration, you can read the corresponding documentation.

Now we have to configure Husky to execute some commands in the Git hooks. We are going to add the three that, from my point of view, are the most important.

To add a new hook, we have to follow this structure.



npx husky add .husky/{gitHook} "{command}"



This command creates a shell script in the .husky directory for each hook you add, named after the corresponding Git hook.

The first one is the commit-msg hook. Here we are going to ensure that our commit messages follow our commitlint configuration.

So, let's add our new hook, following the command structure mentioned above.



npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}'



The second one is the pre-commit hook. With this hook, we lint and format our code, and it runs automatically before our changes are committed.



npx husky add .husky/pre-commit 'npm run lint && npm run format'



Last but not least is the pre-push hook, which runs our tests before pushing our changes.



npx husky add .husky/pre-push 'npm run test'


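For reference, each of these commands just writes a small shell script into the .husky directory. With the Husky version that husky-init installs (v8 at the time of writing), the generated pre-push file should look roughly like this:

#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm run test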

Docker

Now we want to create the development environment for our NestJS project. In this part of the article, we are going to see how to include Docker and Docker Compose.

First, we need to configure our Docker container by creating our Dockerfile.

Our Dockerfile employs multi-stage builds, which let us target both local development and production from a single file. This approach has a clear advantage: we install and compile dependencies in one stage, then keep only the essentials in the final image, which keeps the production image small. To learn more about multi-stage builds, you can check the official documentation.



# BUILD FOR LOCAL DEVELOPMENT

FROM node:20-alpine AS development

# Create app directory
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./

# Install app dependencies using the `npm ci` command instead of `npm install`
RUN npm ci

# Bundle app source
COPY --chown=node:node . .

# Use the node user from the image (instead of the root user)
USER node

# BUILD FOR PRODUCTION

FROM node:20-alpine AS build

WORKDIR /usr/src/app

COPY --chown=node:node package*.json ./

# To run `npm run build` we need access to the Nest CLI which is a dev dependency. In the previous development stage we ran `npm ci` which installed all dependencies, so we can copy over the node_modules directory from the development image
COPY --chown=node:node --from=development /usr/src/app/node_modules ./node_modules

COPY --chown=node:node . .

# Run the build command which creates the production bundle
RUN npm run build

# Set NODE_ENV environment variable
ENV NODE_ENV production

# Remove husky from the production build and install the production dependencies
RUN npm pkg delete scripts.prepare && \
    npm ci --omit=dev

USER node

# PRODUCTION

FROM node:20-alpine AS production

# Copy the bundled code from the build stage to the production image
COPY --chown=node:node --from=build /usr/src/app/node_modules ./node_modules
COPY --chown=node:node --from=build /usr/src/app/dist ./dist

# Start the server using the production build
CMD [ "node", "dist/main.js" ]



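Before moving on, you can verify the production stage in isolation. This is just a local smoke test, and the image name below is a placeholder:

# Build only up to the production stage and run it
docker build --target production -t quick-start-template .
docker run --rm -p 3000:3000 quick-start-template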

Docker-compose configuration

Docker Compose is our dev sidekick, making local coding a cakewalk. One config file and that's it! No more "it works on my machine" drama. Our docker-compose file is located at the root level of our project.



services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    volumes:
      - .:/usr/src/app
    env_file:
      - .env
    # Run a command against the development stage of the image
    command: npm run start:dev
    ports:
      - 3000:3000



Now it's a piece of cake to develop our app with the container setup we have.
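With that file in place, spinning up the whole development environment is a single command; hot reload works because the project directory is mounted as a volume:

docker compose up --build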

GitHub Actions

Continuous Integration (CI) is a software development practice that allows developers to automatically build, test, and validate their code changes in a centralized and consistent environment. GitHub Actions is a powerful CI/CD (Continuous Integration/Continuous Deployment) platform integrated directly into GitHub repositories, enabling developers to automate their workflows seamlessly. Check out their documentation to learn more!

In this section, I'll guide you through the process of setting up a basic CI pipeline using GitHub Actions. This pipeline will automatically build your Docker image and push it to your Docker Hub account when a Pull Request is merged.

Great! So, to push our Docker images, we need to authenticate our GitHub Action. Don't worry, it's quite simple: generate an access token for your Docker Hub username, then store both values as secrets in your GitHub repository settings. You can see how to do this in the image below.

GitHub Privacy settings
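If you prefer the terminal over the web UI, the GitHub CLI can store the same secrets (it prompts for each value; the secret names must match the ones used in the workflow below):

gh secret set DOCKERHUB_USERNAME
gh secret set DOCKERHUB_TOKEN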

Awesome! Let's create our workflow now. We need to place it at .github/workflows/build.yml in the root of the repository. This file defines our CI steps. Don't worry, I'll guide you through the different steps so you can easily follow along!



# This workflow is triggered when a pull request is closed (merged or closed without merging) into the main branch.
on:
  pull_request:
    branches: [main]
    types: [closed]
jobs:
  # This defines a job named "build" that runs on the latest version of the Ubuntu environment.
  # The `if` guard makes sure the job only runs when the pull request was actually merged,
  # not when it was closed without merging.
  build:
    name: Build Docker image
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - # This step checks out your Git repository content
        name: Checkout
        uses: actions/checkout@v4

      - # Uses the docker/login-action to log in to Docker Hub using the provided username and token.
        # The credentials are stored as secrets (DOCKERHUB_USERNAME and DOCKERHUB_TOKEN).
        name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - # Extracts the repository name from the GitHub repository full name and sets it as an environment variable (REPO_NAME). 
        # This information can be useful for later steps.
        name: Get repository name
        run: |
          repo_full_name="$GITHUB_REPOSITORY"
          IFS='/' read -ra repo_parts <<< "$repo_full_name"
          echo "REPO_NAME=${repo_parts[1]}" >> $GITHUB_ENV

      - # Uses the docker/metadata-action to extract metadata such as tags and labels for Docker. 
        # This metadata can be used for versioning and labelling Docker images.
        name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/${{env.REPO_NAME}}
          tags: |
            type=sha,format=short 

      - # Uses the docker/build-push-action to build and push the Docker image. It specifies the context as the current directory (.), the Dockerfile location (./Dockerfile), tags from the metadata, and labels from the metadata. 
        # The push: true indicates that the image should be pushed to the Docker Hub registry.
        name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          push: true





Now, each time we merge a PR, the GitHub Action builds a Docker image and pushes it to our registry on Docker Hub, as you can see in the following images.

GitHub action to create a Docker image

Docker images in our Dockerhub registry
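Because of the type=sha,format=short rule in the metadata step, each merge produces an image tagged with the short commit SHA, so anyone can pull the exact build for a given merge. The username, repository name, and tag below are made-up examples:

docker pull your-dockerhub-user/your-repo:sha-1a2b3c4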

Hope you found it cool and picked up something useful. Stay curious, keep learning, and rock on! Cheers to your next adventure!

