
Vladimir Dementyev

Originally published at evilmartians.com

Reusable Docker development environment

So far, I've only been talking about Docker for development in the context of web applications, i.e., something involving multiple services and specific system dependencies. Today, I'd like to look at the other side and discuss how containerization can help when working on libraries.

Once upon a time...

A recent discussion in our corporate (Evil Martians) Slack finally prompted me to write this post: my colleagues were discussing a new tool to manage multiple Node.js versions on the same machine (like nvm but written in Rust—you know, things get cooler when rewritten in Rust 😉).

My first thought was: "Why on Earth, in 2020, do we still need all these version managers, such as rbenv, nvm, asdf, whatever?" I had almost forgotten the pleasure of using them: my computer breathes easier without all the environmental pollution these tools bring.

Let's do a twist and talk about my monolithic personal computer (or laptop).

Mono vs. Compo

Slides from my Between monoliths and microservices RailsConf talk

About a year ago, I switched to a new laptop. Since I'm not a big fan of backups and other time machines, I had to craft a comfortable working environment from scratch. Instead of installing all the runtimes I usually use (Ruby, Golang, Erlang, Node), I decided to experiment and go with Docker for everything: applications and libraries, commercial and open-source projects. In other words, I only installed Git, Docker, and Dip.

I decided to use Docker for everything: applications and libraries, commercial and open-source projects.

Phase 0: Universal docker-compose.yml

You may think that keeping a Docker4Dev configuration (like the one described in the Ruby on Whales post) for every tiny library is a lot of overhead. Yep, that's true. So, I started with a shared docker-compose.yml configuration containing a service per project and sharing volumes between them (thus, I didn't have to install all the dependencies multiple times).
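
Such a shared configuration might have looked like this (a minimal sketch; the service and project names are made up):

# ~/dev/docker-compose.yml
version: '2.4'

services:
  # one service per project, sharing cached dependency volumes
  my_gem:
    image: ruby:2.7
    command: bash
    working_dir: /app
    volumes:
      - ./my_gem:/app:cached
      - bundler_data:/usr/local/bundle
  my_node_lib:
    image: node:14
    command: bash
    working_dir: /app
    volumes:
      - ./my_node_lib:/app:cached
      - npm_data:/usr/local/share/npm

volumes:
  bundler_data:
  npm_data: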

Launching a container looked like this:

docker-compose -f ~/dev/docker-compose.yml \
  run --rm some_service

# we can omit -f if our project is somewhere inside the ~/dev folder:
# docker-compose tries to find the first docker-compose.yml up the tree
docker-compose run --rm some_service

# finally, using an alias
dcr some_service
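
For reference, the dcr alias above is just a thin shell wrapper (a sketch, assuming bash or zsh):

# ~/.bashrc (or ~/.zshrc)
alias dcr='docker-compose run --rm'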

Not so bad. The only problem is that I had to define a service every time I wanted to run a new project within Docker. I continued investigating and came up with the Reusable Docker Environment (RDE) concept.

Phase 1 (discarded): RDE

The idea of RDE was to completely eliminate the need for Dockerfile and docker-compose.yml files and generate them on-the-fly using predefined templates (or executors).

This is how I imagined it would work:

# Open the current folder within a Ruby executor
$ rde -e ruby
root@1123:/app#

# Execute a command within a JRuby executor
$ rde run -e jruby -- ruby -e "puts RUBY_PLATFORM"
java

This idea was left in a gist to gather dust.

Phase 2 (current): Back to docker-compose.yml with Dip

It turned out that the duplication problem could be solved without building yet another tool (even in Rust 😉). After discussing the RDE concept with Mikhail Merkushin, we realized that similar functionality could be achieved with Dip if we added a couple of features:

  • Lookup configurations in parent directories (so, we can use a single ~/dip.yml for all projects).
  • Provide an environment variable containing the relative path to the current directory from the configuration directory (so we can use it as a dynamic working_dir).

These features have been added in v5.0.0 (thanks to Misha), and I started exploring the new possibilities.
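
To illustrate the lookup, here is a sketch of the resulting layout (the project path is hypothetical):

~/
├── dip.yml                   # the single shared Dip config
├── .dip/
│   └── docker-compose.yml    # shared Compose services (see below)
└── dev/
    └── my_ruby_project/      # running `dip ruby` from here makes Dip
                              # walk up the tree and pick up ~/dip.yml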

Let's skip all the intermediate states and finally take a look at the final configuration.

Currently, my ~/dip.yml only contains different Rubies and databases:

version: '5.0'

compose:
  files:
    - ./.dip/docker-compose.yml
  project_name: shared_dip_env

interaction:
  ruby: &ruby
    description: Open Ruby service terminal
    service: ruby
    command: /bin/bash
  jruby:
    <<: *ruby
    service: jruby
  'ruby:latest':
    <<: *ruby
    service: ruby-latest
  psql:
    description: Run psql console
    service: postgres
    command: psql -h postgres -U postgres
  createdb:
    description: Run PostgreSQL createdb command
    service: postgres
    command: createdb -h postgres -U postgres
  'redis-cli':
    description: Run Redis console
    service: redis
    command: redis-cli -h redis
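
As a bonus, dip ls prints the shortcuts defined in this config (the output below is approximate):

$ dip ls
ruby           # Open Ruby service terminal
jruby          # Open Ruby service terminal
ruby:latest    # Open Ruby service terminal
psql           # Run psql console
createdb       # Run PostgreSQL createdb command
redis-cli      # Run Redis console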

Whenever I want to work on a Ruby gem, I just launch dip ruby from the project's directory and run all the commands (e.g., bundle install, rake) within a container:

~ $ cd ~/my_ruby_project
~/my_ruby_project $ dip ruby:latest

[../my_ruby_project] ruby -v
ruby 3.0.0dev (2020-10-20T12:46:54Z master 451836f582) [x86_64-linux]

See, I can run Ruby 3 without any hassle 🙂

There is only one special trick in the docker-compose.yml that allows me to reuse the same container for all projects without manually mounting volumes—PWD! Yes, all you need is PWD, the absolute path to the current working directory on the host machine. Here is how I use this sacred knowledge in my configuration:

version: '2.4'

services:
  ruby: &ruby
    command: bash
    image: ruby:2.7
    volumes:
      # That's all the magic!
      - ${PWD}:/${PWD}:cached
      - bundler_data:/usr/local/bundle
      - history:/usr/local/hist
      # I also mount different configuration files
      # for better DX
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro
    environment:
      DATABASE_URL: postgres://postgres:postgres@postgres:5432
      REDIS_URL: redis://redis:6379/
      HISTFILE: /usr/local/hist/.bash_history
      LANG: C.UTF-8
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
      # Plays nice with gemfiles/*.gemfile files for CI
      BUNDLE_GEMFILE: ${BUNDLE_GEMFILE:-Gemfile}
    # And that's the second part of the spell
    working_dir: ${PWD}
    tmpfs:
      - /tmp
  jruby:
    <<: *ruby
    image: jruby:latest
    volumes:
      - ${PWD}:/${PWD}:cached
      - bundler_jruby:/usr/local/bundle
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro
  ruby-latest:
    <<: *ruby
    image: rubocophq/ruby-snapshot:latest
    volumes:
      - ${PWD}:/${PWD}:cached
      - bundler_data_edge:/usr/local/bundle
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro
  postgres:
    image: postgres:11.7
    volumes:
      - history:/usr/local/hist
      - ./.psqlrc:/root/.psqlrc:ro
      - postgres:/var/lib/postgresql/data
    environment:
      PSQL_HISTFILE: /usr/local/hist/.psql_history
      POSTGRES_PASSWORD: postgres
      PGPASSWORD: postgres
    ports:
      - 5432
  redis:
    image: redis:5-alpine
    volumes:
      - redis:/data
    ports:
      - 6379
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30

volumes:
  postgres:
  redis:
  bundler_data:
  bundler_jruby:
  bundler_data_edge:
  history:

Whenever I need PostgreSQL or Redis to build the library, I do the following:

# Launch PostgreSQL in the background
dip up -d postgres
# Create a database
dip createdb my_library_db
# Run psql
dip psql
# And, for example, run tests
dip ruby -c "bundle exec rspec"

Databases "live" within the same Docker network as the other containers (since we're using the same docker-compose.yml) and are accessible via their service names (postgres and redis). My code only needs to respect DATABASE_URL and REDIS_URL, respectively.
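
For example, inside a dip ruby session, the URLs are already in place (the values come from the compose file above):

[../my_ruby_project] echo $DATABASE_URL
postgres://postgres:postgres@postgres:5432

[../my_ruby_project] echo $REDIS_URL
redis://redis:6379/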

Let's consider a few more examples.

Using with VS Code

If you're a VS Code user and want to use the power of IntelliSense, you can combine this approach with Remote Containers: just run dip up -d ruby and attach to the running container!

Node.js example: Docsify

Let's take a look at a beyond-Ruby example: running Docsify documentation servers.

Docsify is a JavaScript / Node.js documentation site generator. I'm using it for all my open-source projects. It requires Node.js and the docsify-cli package to be installed. But we don't want to install anything, remember? Let's pack it into Docker!

First, we declare a base Node service in our docker-compose.yml:

services:
  # ...
  node: &node
    image: node:14
    volumes:
      - ${PWD}:/${PWD}:cached
      # Where to store global packages
      - npm_data:${NPM_CONFIG_PREFIX}
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
    environment:
      NPM_CONFIG_PREFIX: ${NPM_CONFIG_PREFIX}
      HISTFILE: /usr/local/hist/.bash_history
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
    working_dir: ${PWD}
    tmpfs:
      - /tmp

It's recommended to keep global dependencies in a non-root user directory. Also, we want to make sure we "cache" these packages by putting them into a volume.

We can define the env var (NPM_CONFIG_PREFIX) in the Dip config:

# dip.yml
environment:
  NPM_CONFIG_PREFIX: /home/node/.npm-global

Since we want to run a Docsify server to access a documentation website, we need to expose ports. Let's define a separate service for that and also define a command to run a server:

services:
  # ...
  node: &node
    # ...

  docsify:
    <<: *node
    working_dir: ${NPM_CONFIG_PREFIX}/bin
    command: docsify serve ${PWD}/docs -p 5000 --livereload-port 55729
    ports:
      - 5000:5000
      - 55729:55729

To install the docsify-cli package globally, we should run the following command:

dip compose run node npm i docsify-cli -g

We can simplify the command a bit if we define the node command in the dip.yml:

interaction:
  # ...
  node:
    description: Open Node service terminal
    service: node

Now we can type fewer characters: dip node npm i docsify-cli -g 🙂

Then, to run a Docsify server, we only need to invoke dip up docsify in the project's folder.
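
Putting it all together (assuming a project with a docs/ folder, as expected by the command above):

# install the CLI once; it's cached in the npm_data volume
dip node npm i docsify-cli -g

# serve the docs and open http://localhost:5000
dip up docsify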

Erlang example: keeping build artifacts

The final example I'd like to share is from the world of compiled languages—let's talk some Erlang!

As before, we define a service in the docker-compose.yml and the corresponding shortcut in the dip.yml:

# docker-compose.yml
services:
  # ...
  erlang: &erlang
    image: erlang:23
    volumes:
      - ${PWD}:/${PWD}:cached
      - rebar_cache:/rebar_data
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
    environment:
      REBAR_CACHE_DIR: /rebar_data/.cache
      REBAR_GLOBAL_CONFIG_DIR: /rebar_data/.config
      REBAR_BASE_DIR: /rebar_data/.project-cache${PWD}
      HISTFILE: /usr/local/hist/.bash_history
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
    working_dir: ${PWD}
    tmpfs:
      - /tmp

# dip.yml
interaction:
  # ...
  erl:
    description: Open Erlang service terminal
    service: erlang
    command: /bin/bash

What distinguishes this configuration from the Ruby one is that we use the same PWD trick to store dependencies and build files:

REBAR_BASE_DIR: /rebar_data/.project-cache${PWD}

That changes the default _build location to one within the mounted volume (and ${PWD} ensures we have no collisions with other projects).

This speeds up compilation by avoiding writes to the host filesystem (which is especially useful for macOS users).
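
For example (all paths are hypothetical), with a project checked out at ~/dev/my_erlang_lib on the host:

~/dev/my_erlang_lib $ dip erl
[../my_erlang_lib] rebar3 compile
# build artifacts now live under
# /rebar_data/.project-cache/Users/me/dev/my_erlang_lib
# (inside the rebar_cache volume), not in ./_build on the host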

Bonus: multiple compose files

One benefit of using Dip is the ability to specify multiple compose files to load services from. That allows us to group services by their nature and avoid putting everything into the same docker-compose.yml:

# dip.yml
compose:
  files:
    - ./.dip/docker-compose.base.yml
    - ./.dip/docker-compose.databases.yml
    - ./.dip/docker-compose.ruby.yml
    - ./.dip/docker-compose.node.yml
    - ./.dip/docker-compose.erlang.yml
  project_name: shared_dip_env

That's it! The example setup can be found in a gist. Feel free to use it and share your feedback!


P.S. I should admit that my initial plan of not installing anything on the local machine failed: I gave up and ran brew install ruby (though that was long before Phase 2).


P.P.S. Recently, I got access to GitHub Codespaces. I still haven't figured out all the details, but it looks like it could become my first choice for library development in the future (and the hacks described in this post will no longer be needed 🙂).
