This article was originally published on programmingliftoff.com as Using Docker for Experimenting with Code.
One of the best ways to learn to code is by writing or modifying programs to see what they do. For me, this usually involves creating a file named something like delete.py and writing some code in it to help me understand more about the language. I then inevitably end up with a bunch of files on my desktop named delete1.py, delete2.py, tmp.c, temp.rb, temp1.py, and so on. These are usually mixed in with images and other files I may want to keep on my Desktop, so removing them becomes a semi-time-consuming task. What if there was a better way!?
Using Docker for Code Experiments
Recently I started using Docker for this task. The benefit of using Docker is that containers provide a layer of isolation from your computer, so for the most part you can execute any commands in a container without worrying about messing something up on your machine. If something goes drastically wrong inside the container, no worries! Simply boot up a new container and try again! Cool, so how do I create a container and install the programming languages and tools that I need? I'm glad you asked. In the next section I will explain how to get started with the Dockerfile so you can get up and running. Then I will break down the Dockerfile I use line by line. It looks a little lengthy, but each line is fairly simple.
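To see that isolation in action before building anything custom, here is a minimal sketch (assuming Docker is already installed; the plain ubuntu image is just an example base to play in):
# Start a disposable Ubuntu container; --rm deletes it when you exit
docker run --rm -it ubuntu:latest bash
# ...experiment freely inside the container...
exit
Anything you create or break inside that shell disappears with the container, which is exactly the property the playground image below builds on.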
Using The Dockerfile
The Dockerfile and supporting files can be downloaded from the GitHub repository here: docker-playground.
There is a Readme on the GitHub page to help you get started, but I will provide the instructions here as well.
Getting Started
Prereq: Before you begin, make sure you have installed Git and Docker.
1) Clone the repository with git clone https://github.com/programming-liftoff/docker-playground.git
2) cd into the directory by typing cd docker-playground/
3) Build the image by typing docker build -t playground:latest .
4) Start a bash shell in the container by typing docker run --rm -it playground
That's it! Now you can create files and run them without worrying about cluttering up your desktop! Simply type exit or press Ctrl-d to exit the container, leaving no trace of those files.
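For example, a typical throwaway session might look like this (a sketch; the file name and snippet are just placeholders):
docker run --rm -it playground
# inside the container:
echo 'print("just experimenting")' > delete.py
python3 delete.py
exit   # the container and delete.py are gone
Because the container was started with --rm, nothing from the session lingers on your machine.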
Additional Notes
At the beginning of the Dockerfile, some environment variables are set with the following lines:
ENV username andrew
ENV password pass
ENV rootpassword toor
When the container loads a shell, it will load as user andrew, with the password set to pass. The password for the root user is set to toor.
If you wish to change these values, simply change them in this one place and rebuild the image with docker build -t playground:latest . to update the container's settings.
The Dockerfile
Here's where I break down the Dockerfile and supporting files so you can learn what each line does and modify the code to fit your needs. We'll start with the smallest file first: 'entry_point.sh'.
entry_point.sh
This file is run every time the container starts. In this way, putting code in here is similar to putting code in '.bash_profile' or 'profile', etc.
The full code can be viewed here: entry-point.
1) This section of code checks whether the file 'rvm.sh' exists. If it exists, the file is executed. This is necessary to add Ruby's executables to the PATH variable, among other things. If you do not want to use Ruby, you can remove this from 'entry_point.sh'.
# For RVM - Ruby
if [ -f /etc/profile.d/rvm.sh ]
then
source /etc/profile.d/rvm.sh
fi
2) This section of code starts a bash shell. If you remove it, the container will start and stop without giving you a shell. Feel free to give it a try!
# Get a shell
/bin/bash
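Putting the two pieces together, the whole entry_point.sh is only a few lines. A sketch of the assembled file (the shebang line is my addition, assuming the script is run with bash):
#!/bin/bash
# For RVM - Ruby: load RVM into this shell if it is installed
if [ -f /etc/profile.d/rvm.sh ]
then
  source /etc/profile.d/rvm.sh
fi
# Get a shell: hand control to an interactive bash session
/bin/bash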
Dockerfile
The Dockerfile contains the majority of the code. It is used to build the docker image that a container can be started from. Hang on tight as we explore it!
The full code can be found here: Dockerfile.
1) Docker images can be built starting from a base image. In this case, the new image will start with everything in the base image, plus whatever is added in the Dockerfile. This line tells Docker to start building the image using the latest version of the ubuntu image as a base.
FROM ubuntu:latest
2) This section was shown earlier. It sets some environment variables that are used when building the image. There is a TODO in the comments to let you know that you can set these to whatever values you like. The environment variables can be accessed later on in the Dockerfile, as well as in the container itself after it is built.
#==========================
# TODO: Use desired values
#==========================
ENV username andrew
ENV password pass
ENV rootpassword toor
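As a quick illustration of referencing these values later in the build, an instruction further down could do something like this (a hypothetical line, not part of the actual Dockerfile):
# Print the configured user name during the build (hypothetical example)
RUN echo "Building playground image for user: $username"
The real Dockerfile uses the same $username substitution in the user-creation commands shown next.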
3) This section creates a new user for the container. By default with the ubuntu image, the container would start as the root user. Using a computer as the root user is generally frowned upon because it can be dangerous (easy to mess something up when you have full rights). This may not be as big of a deal since we're using a container, but creating a new user keeps things similar to how you would normally use the command line. The Docker RUN instruction runs a shell command to change something in the image that is being built. In this case, as mentioned, it's simply running bash commands to add a user, set the user's password, and give the user 'sudo' rights.
#============
# Add a user
# 1) First add a user and set user's shell to bash and create a directory for user in the /home folder
# 2) Next set the user's password
# 3) Then add the user to the `sudo` group so the user can use the `sudo` command
#============
RUN useradd -ms /bin/bash $username
RUN echo $username:$password | chpasswd
RUN adduser $username sudo
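Because the new user is in the sudo group, you can install extra packages from inside a running container whenever you need them. A small sketch (htop is just an example package; the password is whatever you set in the password ENV, pass by default):
# Inside the container, as the non-root user
sudo apt-get update
sudo apt-get install -y htop   # prompts for the user's password
Anything installed this way only lives in that container; rebuild the image if you want it available every time.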
4) This line updates the package manager's package lists. This is necessary before using apt-get to install packages.
RUN apt-get update
5) These lines aren't necessary, but they prevent some warnings from being displayed. Docker builds the container without accepting/allowing user input, so the first line tells debconf (the Debian configuration management system) not to worry about displaying dialogs to the user. The second command installs apt-utils to allow package configuration to occur directly after install, rather than delaying it.
#=====================
# Updates for debconf
# Prevent message 'debconf: unable to initialize frontend: Dialog'
# Prevent message 'debconf: delaying package configuration, since apt-utils is not installed'
#=====================
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get install -y --no-install-recommends apt-utils
6) These lines use the apt-get package manager to install some basic tools that you would expect to have available on your command line. These include sudo, curl (used to download files, useful when installing programming languages), the C and C++ compilers (which are also needed for many other language installations), and nano, a simple text editor. You may want to add some more install commands here if you use another text editor like emacs or vim.
#================
# Basic software
# build-essential Includes:
# - gcc (C language compiler)
# - g++ (C++ compiler)
# - make
#================
RUN apt-get install -y sudo
RUN apt-get install -y curl
RUN apt-get install -y build-essential
RUN apt-get install -y nano
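Since build-essential puts gcc on the path, you can compile a quick C experiment directly inside the container. A minimal sketch (hello.c is just a throwaway file name):
# Inside the container
cat > hello.c << 'EOF'
#include <stdio.h>

int main(void) {
    printf("hello from the playground\n");
    return 0;
}
EOF
gcc hello.c -o hello && ./hello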
7) This part installs Python 3 and Pip 3, which is the package manager for Python 3.
#=========
# Python3
#=========
RUN apt-get install -y python3 && \
apt-get install -y python3-pip
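With python3 and pip3 installed, a quick Python experiment inside the container might look like this (the requests package is only an example of something to install):
# Inside the container
python3 --version
pip3 install --user requests          # example third-party package
python3 -c 'import requests; print(requests.__version__)'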
8) This section installs RVM, the Ruby Version Manager, which makes it easier to work with Ruby. It comes with the Ruby language interpreter as well as irb (the interactive Ruby REPL) and some additional software. It first downloads the GPG keys to verify that the RVM install is legitimate. Next it downloads RVM with the latest stable version of Ruby. It then adds the user that was created earlier to the rvm group.
#================
# Ruby (via RVM)
#================
RUN gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB && \
\curl -L https://get.rvm.io | bash -s stable --ruby && \
adduser $username rvm
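Once the container is running (and entry_point.sh has sourced rvm.sh), Ruby is ready to use. A quick sketch of checking that the install works:
# Inside the container
ruby --version
ruby -e 'puts "hello from ruby"'
irb    # interactive Ruby REPL; quit with exit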
9) This section downloads version 1.9.1 of the Go programming language. First it uses curl to download the compressed language files. It then uncompresses the files to the /usr/local directory. Then it removes the compressed files. Finally it updates the PATH environment variable to include the path to the Go binaries (e.g., the go command used for go build and other Go commands).
#====
# Go
#====
RUN curl -o ./go.linux-amd64.tar.gz https://storage.googleapis.com/golang/go1.9.1.linux-amd64.tar.gz && \
tar -C /usr/local -xzf go.linux-amd64.tar.gz && \
rm -f go.linux-amd64.tar.gz
ENV PATH="${PATH}:/usr/local/go/bin"
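With Go on the PATH, you can run a throwaway Go program without any extra setup. A minimal sketch (hello.go is a placeholder name):
# Inside the container
go version
cat > hello.go << 'EOF'
package main

import "fmt"

func main() {
    fmt.Println("hello from go")
}
EOF
go run hello.go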
10) This section sets the password for the root user. It uses the rootpassword environment variable that was set at the beginning of the Dockerfile as the new password for the root user.
#===================
# Set root password
#===================
RUN echo root:$rootpassword | chpasswd
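If you ever need a root shell inside the container, you can switch to root using the password from the rootpassword ENV (toor by default). A quick sketch:
# Inside the container, as the non-root user
su -          # prompts for the root password
whoami        # should print: root
exit          # back to the regular user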
11) These lines copy the entry_point.sh file from your computer into the root directory of the container (/entry_point.sh). The second line changes the file's permissions to make it executable. We need to do this because the file is copied with the same permissions it has on your computer. That means that if it cannot be executed on your computer, Docker won't be able to execute it. You could change the permissions on your computer so that the file is copied with the correct permissions; however, to make it work every time, this Dockerfile doesn't assume that the file has the correct permissions. The third line tells Docker to use that file, /entry_point.sh, as the ENTRYPOINT, meaning that this file will be run every time the container starts.
#================
# Set ENTRYPOINT
# Copy entry_point file and make it executable.
#=================
COPY ./entry_point.sh /
RUN chmod 744 /entry_point.sh
ENTRYPOINT ["/entry_point.sh"]
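If the entry point fails with a "permission denied" error when the container starts (see the comment at the end of this article for a fix inside the Dockerfile), you can also check and fix the file's mode on your machine before building. A sketch, assuming you are in the repository directory:
ls -l entry_point.sh          # look for the x (execute) bit, e.g. -rwxr--r--
chmod +x entry_point.sh       # make it executable if it isn't
docker build -t playground:latest .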
12) The first line here changes the current user for the container build. All commands after this will be executed as this user. This also makes it so that when the container starts, it loads the bash shell as this user. The final command allows us to specify the directory that the shell will be in when the container starts. This is set to load the bash shell in the user's home directory.
#==================
# Set default user
# 1) Set the default user when the container starts
# 2) Set the default directory to load when container starts
#==================
USER $username
WORKDIR /home/$username
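Once the image is rebuilt, you can confirm these defaults from inside a fresh container. A quick sketch (the expected output assumes the default ENV values from earlier):
# Inside the container
whoami          # andrew
pwd             # /home/andrew
echo $username  # andrew (the build-time ENV value is still visible)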
That's it!
You made it! Congratulations! Now not only can you use this Dockerfile to create an image and containers, you can also fork it on GitHub and customize it to exactly suit your needs! Docker is a powerful tool, and this is just one of many uses for it. Learning happens step-by-step though, so master this knowledge and then move on to increase your understanding of what Docker can do for you! Thanks for reading!
Top comments (1)
The whole tutorial above will lead you to an error: standard_init_linux.go:190: exec user process caused "permission denied"
The solution is available in github.com/programming-liftoff/doc...
in Dockerfile, replace
RUN chmod 744 /entry_point.sh
with
RUN ["chmod", "+x", "/entry_point.sh"]