Every developer and every team gets confused about COPY and ADD in the Dockerfile at some point. When I get this question, I usually start with the technical background, which is this:

Both ADD and COPY copy files and directories from the host machine into a Docker image. The difference is that ADD can also extract local tar archives into the image, and it can download files from URLs and copy them into the Docker image. The best practice is to use COPY.

So COPY equals ADD minus the unpacking and URL-fetching features. COPY is the preferred way, the exception being when you unpack a local tar archive into a Docker image and you are certain that the archive has the right format.
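To see the overlap concretely, here is a minimal sketch; app.conf and vendor.tar.gz are placeholder files assumed to exist in the build context:

FROM alpine:3.10
# For a plain file, COPY and ADD behave the same: the file is copied as-is
COPY app.conf /etc/
ADD app.conf /etc/
# For a recognized local tar archive, ADD additionally unpacks it into the destination
ADD vendor.tar.gz /opt/vendor/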
You can understand why this is the case by looking at some background info. Read on...
Why COPY is preferred
The core purpose of ADD and COPY is to let Dockerfile authors copy files and directories from the host machine into the Docker image during the image build. Extracting archives and downloading files from the internet are common use-cases, so these features are built into ADD.
The extraction feature is described in the official documentation as follows:

If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory.
The following note on the same page further explains the behavior:
Note: Whether a file is identified as a recognized compression format is done solely based on the contents of the file, not the name of the file. For example, if an empty file happens to end with .tar.gz this will not be recognized as a compressed file, and will not generate any kind of decompression error message, rather the file will simply be copied to the destination.
This means that your final outcome depends on the contents of the file you intend to copy, and you don't get warnings if something goes wrong. This may make your build pipeline unpredictable.
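To make the note above concrete, here is a hedged sketch. Create an empty placeholder on the host with touch empty.tar.gz, then build an image from this Dockerfile:

FROM alpine:3.10
# empty.tar.gz has no recognizable archive content, so ADD copies it verbatim:
# the build succeeds and the image contains the file /tmp/empty.tar.gz, not an unpacked directory
ADD empty.tar.gz /tmp/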
To make life more reliable, we have the COPY instruction, which is "the same as ADD, but without the tar and URL handling". COPY does one thing and it does it well.
The best practice
Docker's best practices suggest always using COPY when you don't need the extraction functionality, because COPY is more transparent.

In real-life projects COPY is sufficient in most scenarios, mainly because we rarely add tarballs to our applications' source code. The main use-case for tarballs, and thus for ADD, is creating a base image from a tar archive, which doesn't happen very often. In that case ADD is preferred.

For all other use-cases we use COPY:
- We prefer COPY for copying files from the host machine into a Docker image.
- We use RUN with curl or wget to fetch files from URLs. ADD does not unpack files from the web anyway, so we are better off avoiding it entirely.
Let's see how you can accomplish unpacking and URL fetching.
Unpacking local archives
ADD unpacks archives from the host machine; it does not unpack files downloaded from URLs. To unpack an archive, you simply use ADD in its default form: ADD <src>... <dest>. Check out this sample Dockerfile:
FROM alpine:3.10
ADD bigfile.tar.xz /tmp/
When you build the image Docker will unpack the archive.
docker build -t yourname/alpine-bigfile .
Sending build context to Docker daemon 4.096kB
Step 1/2 : FROM alpine:3.10
---> 4d90542f0623
Step 2/2 : ADD bigfile.tar.xz /tmp/
---> 32cfa3eb41f7
Successfully built 32cfa3eb41f7
Successfully tagged yourname/alpine-bigfile:latest
Since ADD looks exactly the same whether you are just copying a file or unpacking an archive, this can get tricky. As we mentioned earlier, if Docker does not recognize the archive format during the build, it will copy the archive as-is into the Docker image without any warning. You can mitigate the risk by adding a check to your build pipeline; see the sketch after the listing below.
Our archive in the example was recognized by Docker, so the file is uncompressed in our image:
docker run --rm -ti yourname/alpine-bigfile /bin/ash
/ # ls -al /tmp
total 12
drwxrwxrwt 1 root root 4096 Jan 31 09:49 .
drwxr-xr-x 1 root root 4096 Jan 31 09:50 ..
-rw-r--r-- 1 501 dialout 29 Jan 31 09:46 bigfile
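One simple way to add such a check, as a hedged sketch: since we know the archive in this example should produce /tmp/bigfile, we can make the build fail when it is missing:

FROM alpine:3.10
ADD bigfile.tar.xz /tmp/
# If the archive was copied verbatim instead of being unpacked, this RUN step fails the build
RUN test -f /tmp/bigfile && ! test -f /tmp/bigfile.tar.xz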
If you need a solution to share your image as an archive, check out our article How to Transfer/Move a Docker Image to Another System?.
Downloading and unpacking archives from a URL
For downloading and unpacking archives from the internet, curl or wget is the better option, because it takes only one image layer to get the result you want. With ADD you would download the archive in one layer and then uncompress it with RUN in another, which is not as efficient. You can build a Dockerfile that uses curl to download an archive and uncompress it as shown below.
FROM alpine:3.10
RUN apk add --no-cache curl && \
curl -SL https://github.com/yikaus/docker-alpine-base/raw/master/rootfs.tar.xz | tar -xJC /tmp
This takes one image layer and you have full control over the process.
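If you prefer wget over curl, a roughly equivalent sketch is below; it assumes BusyBox's built-in wget in the alpine image can fetch this URL, otherwise install wget with apk first:

FROM alpine:3.10
# -O - writes the download to stdout so tar can unpack it in the same layer
RUN wget -O - https://github.com/yikaus/docker-alpine-base/raw/master/rootfs.tar.xz | tar -xJC /tmp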
One more thing
One more noteworthy difference between ADD and COPY is that COPY has the --from=<name|index> flag, which lets you copy files from a previous build stage in a multi-stage build. ADD does not have this option.
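Here is a minimal multi-stage sketch; the builder stage and the artifact are made up purely for illustration:

FROM alpine:3.10 AS builder
RUN echo "built in the first stage" > /artifact.txt

FROM alpine:3.10
# --from copies from the builder stage's filesystem, not from the host machine
COPY --from=builder /artifact.txt /tmp/artifact.txt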
This is another reason to use COPY as your preferred option.
Top comments (1)

ADD can be used to pull a file uncompressed from a stable known URL; it has its uses!