Whether it is a new Ruby project or an existing one, I Dockerize it first. I run pretty much all of my applications as containers locally, as it keeps development setups isolated between projects. In production, some of those get deployed to Kubernetes or ECS and some go to non-container environments. Regardless, my local development environment always runs as containers.
This post will be useful to you if,
- You want to dockerize the local development environment for your Rails application
- Your organization or company is moving toward container deployments and you need to start dockerizing your applications
- You know a little about Docker, or at least the theory, and you would like to dockerize your Rails application but are not sure where to start
If you’re looking to create a new Rails application using Docker, I suggest you read the following blog post first: Create Ruby on Rails application using Docker.
In this article, we’ll take a look at Dockerizing an existing Ruby on Rails application. Let’s get started.
The application we’re going to Dockerize is a simple blog application, written in Rails 7 and it uses MySQL database. Let’s get the code from GitHub.
git clone git@github.com:devteds/example-rails-app.git
cd example-rails-app
Dockerfile
The first thing we need is the Dockerfile. This is the file where we define the OS dependencies, libraries, work directory, and tools necessary to build the Docker image for the application. The Dockerfile is placed at the root of the project folder; in this case, at example-rails-app/Dockerfile.
# example-rails-app/Dockerfile
FROM ruby:3.1.2

# Directory inside the image where the app code will live and commands will run
RUN mkdir /app
WORKDIR /app

# Build tools needed to compile native extensions for some gems
RUN apt-get update -qq && \
    apt-get -y install build-essential

# Copy only the gem manifests first, so the `bundle install` layer is
# cached until Gemfile or Gemfile.lock changes
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install

# Default command: start the Rails server, bound to all interfaces
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0"]
A quick walkthrough of the Dockerfile:
- It uses the base image ruby:3.1.2 from DockerHub. This base image is built on Debian and ships with ruby, bundler, and the basic OS libraries necessary to run a Ruby application
- Creates a directory for the app. This directory, /app, is where the application code will be copied and where rails commands will run when the container starts.
- Installs some build tools. The most commonly needed one is build-essential, which Rails developers often require to install gems with native extensions
- Copies the Gemfile and Gemfile.lock from the code on the host (a Mac in my case) to the app directory inside the container image
- Runs bundle install. Bundler expects Gemfile and Gemfile.lock in the app directory, which is why we copied those files in the previous step. We don't yet need the full source code copied into the image
- Last is the command that runs when a container is created from the image built from this Dockerfile. Here we start the Rails server, listening on the default port 3000. One detail is the address binding option, set to 0.0.0.0, which means that when the Rails server runs inside the container, we can reach the service from the host (a Mac in my case)
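The 0.0.0.0 binding deserves a quick illustration: a server bound only to the loopback interface inside a container cannot be reached through Docker's port mapping, while one bound to 0.0.0.0 accepts connections arriving on any interface. A minimal Ruby sketch of that idea, using only the standard library (the port here is picked by the OS and is unrelated to the app):

```ruby
require "socket"

# A server bound to 0.0.0.0 listens on every network interface,
# so connections arriving via Docker's port mapping are accepted.
server = TCPServer.new("0.0.0.0", 0)   # port 0 asks the OS for a free port
port   = server.addr[1]

# Loopback can reach it, and so could any other interface.
TCPSocket.new("127.0.0.1", port).close

# Had we bound to "127.0.0.1" instead, only connections originating
# inside the same network namespace (the container) would succeed.
server.close
```

Inside a container, binding the Rails server to 127.0.0.1 would make it reachable only from processes in that same container.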
Build Docker Image
Our Dockerfile is ready and we can now build a Docker image. There are a couple of ways to go about it. The first and most common is the docker client, with a command like the one below:
docker build -t example-rails-app .
Instead, I am going to use Docker Compose. I prefer Docker Compose for most local setups: it defines the build inputs, container services, and dependencies in one place, which is especially useful for local development environments.
- Create docker-compose.yml. This file is placed at the root of the application code directory
- The Compose file is YAML and starts with a version line (see the Compose file versioning details)

Besides the version, the section you'll see most often in docker-compose.yml is services. This is where you define all the container services.
# example-rails-app/docker-compose.yml
version: '3.8'
services:
  blog-app:
    build: .
The docker-compose.yml above has just one service, blog-app. The build value is the path, relative to the Compose file, of the directory containing the source code and Dockerfile, and from which you'll run docker or docker-compose commands.
This should be good enough for us to build the docker image.
docker-compose build
The above command builds an image named example-rails-app_blog-app, where example-rails-app is the directory containing the source code and blog-app is the service name defined in the docker-compose.yml file.
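As a rough sketch, Compose (v1-style naming) derives that default image name by joining the project name, which defaults to the directory name, with the service name:

```ruby
# Sketch of Compose v1 default image naming: <project>_<service>,
# where the project name defaults to the source directory name.
project    = File.basename("example-rails-app")
service    = "blog-app"
image_name = "#{project}_#{service}"
# => "example-rails-app_blog-app"
```

Note that newer Docker Compose v2 releases join the parts with hyphens (example-rails-app-blog-app) instead of underscores.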
The docker image we built has all the necessary OS libraries, ruby, directory structure, and bundled gems. What is missing in the image is the application code.
We don't need to copy the source code into the image for running the application locally; instead, we can mount a volume mapping a directory on the host OS (a Mac in my case) into the running container.
And there is one more thing missing: the MySQL database. Can we install MySQL in the same Docker image? Beginners tend to do that, but it's not what you want. Because the database runs on a separate host, independent of your application, in staging and production, let's treat it the same way locally.
We can run MySQL as a container too. We don't want to create a Dockerfile for MySQL or worry about the OS or libraries it requires; instead, we will use the MySQL Docker image from DockerHub.
MySQL as Container
Let’s add a new service in the docker-compose.yml for the database: blog-db
# example-rails-app/docker-compose.yml
version: '3.8'
services:
  blog-app:
    build: .
  blog-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blog
Let’s take a quick look at the blog-db service:
- This service uses the mysql:5.7 image from DockerHub. Unlike the blog-app service, there is nothing to build for this service; it uses a pre-built Docker image
- Sets a few environment variables that the MySQL container reads before booting mysql-server inside the container. Those environment variables, MYSQL_*, are specific to the Docker image we used.
- When the blog-db container starts up, it creates a database named blog (the value of MYSQL_DATABASE) and uses password (the value of MYSQL_ROOT_PASSWORD) as the root password.
That’s all for the blog-db service. If you would like to learn more about running MySQL as container, watch the following screencast blog: MySQL And PostgreSQL With Docker In Development
Runtime Configs for Application
Now that we have the database container service defined, let’s provide the database configuration to the application (blog-app) container or service. Similar to the blog-db service, let’s provide the environment variables for the application inside the container to read from. And then we will make this app read database configs from environment variables.
# example-rails-app/docker-compose.yml
version: '3.8'
services:
  blog-app:
    build: .
    environment:
      DB_USER: root
      DB_PASSWORD: password
      DB_NAME: blog
      DB_HOST: blog-db
      DB_PORT: 3306
  blog-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blog
The application running inside the blog-app container is supplied with a few environment variables for its database configs:

- It uses MySQL's root user and password, as configured for the MySQL container service blog-db
- DB_NAME is blog, as configured in the blog-db service. DB_PORT is 3306, the MySQL default.
- The value of DB_HOST is the name of the database service, blog-db. This is one of the reasons I like docker-compose: it makes service linking easy and works well when containers need to talk to each other. Among the services defined in docker-compose.yml, one service can reach another using the service name as the hostname.
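To make the effect of those variables concrete, the connection the app ends up making is equivalent to the URL below. This is just an illustrative sketch; the app itself reads the values through database.yml rather than building a URL:

```ruby
# The DB_* values docker-compose.yml supplies to the blog-app container.
env = {
  "DB_USER" => "root", "DB_PASSWORD" => "password",
  "DB_NAME" => "blog", "DB_HOST" => "blog-db", "DB_PORT" => "3306"
}

# Inside the Compose network, the hostname "blog-db" resolves to the
# database container, so the app connects to it like any other host.
db_url = "mysql2://#{env['DB_USER']}:#{env['DB_PASSWORD']}@" \
         "#{env['DB_HOST']}:#{env['DB_PORT']}/#{env['DB_NAME']}"
# => "mysql2://root:password@blog-db:3306/blog"
```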
Now that the application is supplied with these environment variables, let's update database.yml to read its configs from them. Note that this is for the local environment; there may be security concerns with passing secrets through environment variables like this in production.
For developers not familiar with Rails: database.yml is the config file a Rails application reads its database configuration from, and it doesn't have to hardcode all the values. Conveniently, database.yml supports embedded Ruby (ERB), which makes this easy to configure.
# config/database.yml
default: &default
  adapter: mysql2
  encoding: utf8mb4
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  database: <%= ENV['DB_NAME'] %>
  username: <%= ENV['DB_USER'] %>
  password: <%= ENV['DB_PASSWORD'] %>
  host: <%= ENV['DB_HOST'] %>
  port: <%= ENV['DB_PORT'] %>

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default
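To see how the ERB template resolves, here is a small standalone sketch (outside Rails, standard library only) that renders a trimmed-down version of the file above against the environment variables docker-compose.yml supplies:

```ruby
require "erb"
require "yaml"

# Simulate the variables docker-compose.yml injects into the blog-app container.
{ "DB_NAME" => "blog", "DB_USER" => "root", "DB_PASSWORD" => "password",
  "DB_HOST" => "blog-db", "DB_PORT" => "3306" }.each { |k, v| ENV[k] = v }

template = <<~YAML
  development:
    adapter: mysql2
    database: <%= ENV['DB_NAME'] %>
    username: <%= ENV['DB_USER'] %>
    password: <%= ENV['DB_PASSWORD'] %>
    host: <%= ENV['DB_HOST'] %>
    port: <%= ENV['DB_PORT'] %>
YAML

# Render the ERB first, then parse the resulting YAML.
config = YAML.safe_load(ERB.new(template).result)
config["development"]["host"]  # => "blog-db"
```

Rails does essentially the same thing when it loads config/database.yml: render the ERB, then parse the YAML.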
Three more things need updating in the docker-compose.yml for the blog-app service:

- Application Code: The Docker image we build will not have the source code, because the Dockerfile has no instruction to copy it into the image. More importantly, for local runs we don't want the image to contain the source code. Instead, we mount our local directory (from the host machine, a Mac in my case) to the work directory inside the container when running the service
- Service Port: When we run the application with docker or docker-compose, it runs inside a container. The Rails server listens on port 3000 by default, but that port is not reachable from the host (a Mac in my case), so I cannot hit http://localhost:3000 from my browser. To allow that, we need a port mapping between the port you want to use on the host and the port the service listens on inside the container. Here I map host port 3300 to container port 3000, so I can access the app at http://localhost:3300
- Service Dependency: We need the database service ready before the Rails container starts; declare that dependency with depends_on. Note: when the MySQL container's status is Ready/Running, the MySQL server inside it may not be fully started yet, so depends_on may not be sufficient in all cases
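Because depends_on only waits for the database container to start, not for mysqld inside it to accept connections, a common workaround is a small wait loop that runs before the app boots. A sketch, assuming nothing beyond the Ruby standard library (the helper name and timeout are my own, not part of the app):

```ruby
require "socket"

# Retry a TCP connection until the given port accepts connections,
# or give up after `timeout` seconds.
def wait_for_port(host, port, timeout: 30)
  deadline = Time.now + timeout
  until Time.now > deadline
    begin
      TCPSocket.new(host, port).close
      return true                       # port is accepting connections
    rescue Errno::ECONNREFUSED, SocketError
      sleep 0.5                         # not up yet; retry shortly
    end
  end
  false                                 # gave up waiting
end
```

You could call this from an entrypoint script, e.g. `wait_for_port(ENV['DB_HOST'], ENV.fetch('DB_PORT', '3306').to_i)`, before starting the Rails server.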
# example-rails-app/docker-compose.yml
version: '3.8'
services:
  blog-app:
    build: .
    volumes:
      - .:/app:rw
    environment:
      DB_USER: root
      DB_PASSWORD: password
      DB_NAME: blog
      DB_HOST: blog-db
      DB_PORT: 3306
    depends_on:
      - blog-db
    ports:
      - 3300:3000
  blog-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blog
All set! Our docker-compose.yml has both the container services (blog-app and blog-db) defined.
docker-compose up
Leave the above command running in the foreground and run the following on another terminal window or tab to verify the container service status. This should list both the services with some details such as the service status and port-mapping information.
docker-compose ps
Now that both services, the application blog-app and the database blog-db, are running, I should be able to hit the application from the browser at http://localhost:3300.
But it will fail with an error when querying tables in blog-db, because we haven't created any tables yet. In Rails, database schema changes are managed in the application code and applied by running Rails migration commands.
The application code already has the necessary schema migrations. Let's run rails db:migrate to apply them and create the tables. We can either run this command inside the already-running container that serves web requests, or spin up a temporary container for the blog-app service just for this command. I always prefer the latter, so let's do that.
Leave the docker-compose up running, and run the database migration and reload the page,
docker-compose run --rm blog-app rails db:migrate
The above command creates a new container for blog-app, using its definition from docker-compose.yml, runs rails db:migrate in it, and exits. The --rm flag removes the container once the command completes.
You should now be able to hit the application from the browser and it should respond with 200 OK.
Use the admin interface to populate some posts and verify the API response for posts.
Conclusion
That's all for containerizing a Rails application and running MySQL as a container alongside it. Again, this setup is intended for running your application locally with Docker.
We have some more work to do in Dockerfile to make the docker image production-ready or even to be able to deploy on non-production environments on platforms such as Kubernetes, Amazon ECS, etc. I will create a separate post to cover some of the basics involved in creating a docker image for production.
I hope you found this post helpful. If you know someone who may find it useful, please share it with them.