Summary
In this post we'll build a dockerized Nextcloud instance from scratch. Nextcloud is a powerful self-hosted collaborative cloud platform.
Key features
- Nextcloud 24.0.7
- PostgreSQL for the database
- Redis for cache
- Nginx as proxy
- Out of the box A+ SSL grade
- Easy backup
We'll use a PostgreSQL 14 database container in our stack, plus another container running a Redis instance for caching.
Since Nextcloud is a web app, the frontend will be served by an Nginx container acting as a reverse proxy to PHP-FPM. We also need another container, connected to the same volume, to run Nextcloud commands and cron jobs.
Finally, we'll need a few more containers to generate and renew the SSL certificate using Let's Encrypt.
Legggo!
Docker compose
To build this complex stack, let's leverage the power of docker-compose and start writing the docker-compose YAML file.
version: '3'

services:
  db:
    container_name: nc-db
    hostname: nc_db
    image: postgres:14
    restart: always
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_PASSWORD: ${DATABASE_PWD}
    volumes:
      - db:/var/lib/postgresql/data
    ports:
      - "15432:5432"

  app:
    container_name: nc-app
    build: ./
    volumes:
      - nextcloud:/usr/src/nextcloud
    environment:
      - POSTGRES_HOST=${DATABASE_HOST}
      - POSTGRES_DB=${DATABASE_NAME}
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PWD}
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_HOST_PASSWORD=${REDIS_PASSWORD}
      - DEFAULT_PHONE_REGION=FR
    depends_on:
      - db
      - redis

  cron:
    container_name: nc-cron
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

  nc-web:
    container_name: nc-web
    build:
      context: ./nginx
    restart: always
    environment:
      - VIRTUAL_HOST=${CLOUD_HOST}
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./certbot/conf/:/etc/nginx/ssl/:ro
      - ./certbot/www:/var/www/certbot/:ro
      - nextcloud:/var/www/html
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/logs:/var/log/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - app
    networks:
      - proxy-tier
      - default
    ports:
      - "8888:80"
      - "443:443"

  redis:
    container_name: nc-redis
    image: "redis:alpine"
    command: redis-server --requirepass ${REDIS_PASSWORD}
    ports:
      - "6379:6379"
    volumes:
      - ./redis/data:/var/lib/redis
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    environment:
      - REDIS_REPLICATION_MODE=master

  certbot:
    container_name: nc-certbot
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    networks:
      - proxy-tier
    depends_on:
      - nc-web

volumes:
  db:
  nextcloud:

networks:
  proxy-tier:
Nothing fancy here: we configure all the needed containers and mount a handful of volumes for configuration and logs directly on the host file system.
Environment variables like DATABASE_HOST will be resolved through dotenv, from a .env file at the project root, for example:
CLOUD_HOST=cloud.example.localhost
DATABASE_HOST=nc_db
DATABASE_NAME=nextcloud
DATABASE_USER=postgres
DATABASE_PWD=example!
REDIS_HOST=redis
REDIS_PASSWORD=example!
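Before starting anything, you can check that the variables from .env are resolved as expected by asking docker-compose to print the fully interpolated configuration (a quick sanity check, using the v1 `docker-compose` syntax used throughout this post):

```shell
# Print the compose file with all ${VAR} placeholders substituted from .env
docker-compose config

# Or verify a single value, e.g. the database password wiring
docker-compose config | grep POSTGRES_PASSWORD
```

If a variable is missing from .env, docker-compose will warn about it and substitute an empty string, which is much easier to catch here than at container runtime.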
Let's break this into smaller parts.
Nextcloud
First, let's talk about the Nextcloud container. Here's its Dockerfile, largely inspired by the official documentation.
# DO NOT EDIT: created by update.sh from Dockerfile-alpine.template
FROM php:8.0-fpm-alpine3.16
# entrypoint.sh and cron.sh dependencies
RUN set -ex; \
\
apk add --no-cache \
rsync \
; \
\
rm /var/spool/cron/crontabs/root; \
echo '*/5 * * * * php -f /var/www/html/cron.php' > /var/spool/cron/crontabs/www-data
# install the PHP extensions we need
# see https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html
RUN set -ex; \
\
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
autoconf \
freetype-dev \
icu-dev \
libevent-dev \
libjpeg-turbo-dev \
libmcrypt-dev \
libpng-dev \
libmemcached-dev \
libxml2-dev \
libzip-dev \
openldap-dev \
pcre-dev \
postgresql-dev \
imagemagick-dev \
libwebp-dev \
gmp-dev \
; \
\
docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp; \
docker-php-ext-configure ldap; \
docker-php-ext-install -j "$(nproc)" \
bcmath \
exif \
gd \
intl \
ldap \
opcache \
pcntl \
pdo_mysql \
pdo_pgsql \
zip \
gmp \
; \
\
# pecl will claim success even if one install fails, so we need to perform each install separately
pecl install APCu-5.1.21; \
pecl install memcached-3.2.0; \
pecl install redis-5.3.7; \
pecl install imagick-3.7.0; \
\
docker-php-ext-enable \
apcu \
memcached \
redis \
imagick \
; \
rm -r /tmp/pear; \
\
runDeps="$( \
scanelf --needed --nobanner --format '%n#p' --recursive /usr/local/lib/php/extensions \
| tr ',' '\n' \
| sort -u \
| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
)"; \
apk add --virtual .nextcloud-phpext-rundeps $runDeps; \
apk del .build-deps
# set recommended PHP.ini settings
# see https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#enable-php-opcache
ENV PHP_MEMORY_LIMIT 512M
ENV PHP_UPLOAD_LIMIT 10G
RUN { \
echo 'opcache.enable=1'; \
echo 'opcache.interned_strings_buffer=16'; \
echo 'opcache.max_accelerated_files=10000'; \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.save_comments=1'; \
echo 'opcache.revalidate_freq=60'; \
} > "${PHP_INI_DIR}/conf.d/opcache-recommended.ini"; \
\
echo 'apc.enable_cli=1' >> "${PHP_INI_DIR}/conf.d/docker-php-ext-apcu.ini"; \
\
{ \
echo 'memory_limit=${PHP_MEMORY_LIMIT}'; \
echo 'upload_max_filesize=${PHP_UPLOAD_LIMIT}'; \
echo 'post_max_size=${PHP_UPLOAD_LIMIT}'; \
} > "${PHP_INI_DIR}/conf.d/nextcloud.ini"; \
\
mkdir /var/www/data; \
chown -R www-data:root /var/www; \
chmod -R g=u /var/www
VOLUME /var/www/html
ENV NEXTCLOUD_VERSION 24.0.7
RUN set -ex; \
apk add --no-cache --virtual .fetch-deps \
bzip2 \
gnupg \
; \
\
curl -fsSL -o nextcloud.tar.bz2 \
"https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2"; \
curl -fsSL -o nextcloud.tar.bz2.asc \
"https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
# gpg key from https://nextcloud.com/nextcloud.asc
gpg --batch --keyserver keyserver.ubuntu.com --recv-keys 28806A878AE423A28372792ED75899B9A724937A; \
gpg --batch --verify nextcloud.tar.bz2.asc nextcloud.tar.bz2; \
tar -xjf nextcloud.tar.bz2 -C /usr/src/; \
gpgconf --kill all; \
rm nextcloud.tar.bz2.asc nextcloud.tar.bz2; \
rm -rf "$GNUPGHOME" /usr/src/nextcloud/updater; \
mkdir -p /usr/src/nextcloud/data; \
mkdir -p /usr/src/nextcloud/custom_apps; \
chmod +x /usr/src/nextcloud/occ; \
apk del .fetch-deps
COPY *.sh upgrade.exclude /
COPY config/* /usr/src/nextcloud/config/
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
EXPOSE 9000
Here we use Nextcloud version 24.0.7 to build a PHP-FPM container from php:8.0-fpm-alpine3.16, install all the needed dependencies, configure PHP, then run the process on port 9000.
As I write these lines, Nextcloud still doesn't support PHP 8.1.
We also configure PHP to accept uploads of up to 10 GB, in line with the Nginx configuration.
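You can verify the resulting PHP limits from inside the running container (a quick check, assuming the stack is up and the service is named `app` as in the compose file above):

```shell
# Inspect the effective PHP limits inside the nc-app container
docker-compose exec app php -r 'echo ini_get("upload_max_filesize"), "\n", ini_get("memory_limit"), "\n";'
```

This should print the values set through PHP_UPLOAD_LIMIT and PHP_MEMORY_LIMIT in the Dockerfile.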
Nginx
Nginx's main role is to serve HTTP(S) requests from the browser, then forward them to our PHP-FPM nc-app container on port 9000.
I chose to mount the local folder nginx/logs to collect the access and error logs; the other volumes are useful for tweaking the web server configuration.
Here is the dead-simple container Dockerfile, where the Nginx instance exposes the HTTP ports.
FROM nginx:alpine
WORKDIR /var/www
CMD ["nginx"]
EXPOSE 80 443
Then here's the nginx.conf, with the global nginx configuration.
user nginx;
worker_processes auto;
daemon off;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
#access_log /dev/stdout;
#error_log /dev/stderr;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*.conf;
}
Finally, the default Nextcloud Nginx server block and its upstream pointing at the nc-app container.
# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
"" "";
default "immutable";
}
upstream php-handler {
server app:9000;
}
# server {
# listen 80;
# listen [::]:80;
# server_name cloud.example.localhost;
# server_tokens off;
# location /.well-known/acme-challenge/ {
# root /var/www/certbot;
# }
# location / {
# return 301 https://cloud.example.localhost$request_uri;
# }
# }
server {
# listen 443 default_server ssl http2;
# listen [::]:443 ssl http2;
listen 80;
server_name cloud.example.localhost;
# Path to the root of your installation
root /var/www/html;
# Use Mozilla's guidelines for SSL/TLS settings
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# ssl_certificate /etc/nginx/ssl/live/cloud.example.localhost/fullchain.pem;
# ssl_certificate_key /etc/nginx/ssl/live/cloud.example.localhost/privkey.pem;
# ssl_protocols TLSv1.2 TLSv1.3;
# Prevent nginx HTTP Server Detection
server_tokens off;
# ECDHE forward secrecy
# ssl_ciphers "HIGH:!aNULL:!MD5:!ADH:!RC4:!DH";
# ssl_prefer_server_ciphers on;
# HSTS settings
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
# add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
# set max upload size and increase upload timeout:
client_max_body_size 10G;
client_body_timeout 300s;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Pagespeed is not supported by Nextcloud, so if your server is built
# with the `ngx_pagespeed` module, uncomment this line to disable it.
#pagespeed off;
# These settings allow you to optimize the HTTP/2 bandwidth.
# See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
# for tuning hints
client_body_buffer_size 512k;
# HTTP response headers borrowed from Nextcloud `.htaccess`
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# Specify how to handle directories -- specifying `/index.php$request_uri`
# here as the fallback means that Nginx always exhibits the desired behaviour
# when a client requests a path that corresponds to a directory that exists
# on the server. In particular, if that directory contains an index.php file,
# that file is correctly served; if it doesn't, then the request is passed to
# the front-end controller. This consistent behaviour means that we don't need
# to specify custom rules for certain paths (e.g. images and other assets,
# `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
# `try_files $uri $uri/ /index.php$request_uri`
# always provides the desired behaviour.
index index.php index.html /index.php$request_uri;
# Rule borrowed from `.htaccess` to handle Microsoft DAV clients
location = / {
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
# Rules borrowed from `.htaccess` to hide certain paths from clients
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
# Ensure this block, which passes PHP files to the PHP process, is above the blocks
# which handle static assets (as seen below). If this block is not declared first,
# then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
# to the URI, resulting in a HTTP 500 error response.
location ~ \.php(?:$|/) {
# Required for legacy support
rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
fastcgi_param front_controller_active true; # Enable pretty urls
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_max_temp_file_size 0;
}
location ~ \.(?:css|js|svg|gif|png|jpg|ico|wasm|tflite|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463, $asset_immutable";
access_log off; # Optional: Don't log access to assets
location ~ \.wasm$ {
default_type application/wasm;
}
}
location ~ \.woff2?$ {
try_files $uri /index.php$request_uri;
expires 7d; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
}
# Rule borrowed from `.htaccess`
location /remote {
return 301 /remote.php$request_uri;
}
location / {
try_files $uri $uri/ /index.php$request_uri;
}
}
Redis
We just secure the instance with a password and use the latest redis:alpine image.
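You can quickly check that the password protection is effective (a sketch, assuming the stack is up and REDIS_PASSWORD is exported in your shell):

```shell
# Without auth, Redis should refuse the command with a NOAUTH error
docker-compose exec redis redis-cli ping

# With the password, it should answer PONG
docker-compose exec redis redis-cli -a "$REDIS_PASSWORD" ping
```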
Certbot
We use the latest certbot image to easily and freely generate an SSL certificate for our cloud. The important point here: the certificate is generated under certbot/conf, and this same folder is mapped to the /etc/nginx/ssl/ path of the Nginx nc-web container.
To solve the Let's Encrypt host validation challenge, we also need to serve /var/www/certbot from the certbot container through the Nginx nc-web container, using the commented server block in nginx/sites/default.conf.
Time to launch
At this point we already have a fully functional dockerized Nextcloud instance; let's use docker-compose to build it and run it in the background.
docker-compose up -d --build
That's it! You can now install and configure your own Nextcloud instance on http://localhost:8888, or on http://cloud.example.localhost:8888 if you added the cloud.example.localhost hostname to your local /etc/hosts.
Et voila! Keep in mind that this instance runs without SSL, so some errors can occur; for instance, during setup you may be redirected to an https:// URL scheme, in which case just remove the "s".
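Once the setup wizard is done, Nextcloud's occ command line tool is handy for administration. A minimal sketch, using the cron container since it mounts the Nextcloud volume at /var/www/html as configured above:

```shell
# Run occ as the www-data user from the cron container
docker-compose exec -u www-data cron php /var/www/html/occ status

# Example: list installed apps
docker-compose exec -u www-data cron php /var/www/html/occ app:list
```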
Security consideration
Nextcloud offers a built-in security scan, very useful for keeping your instance secure.
SSL
First we need to generate an SSL certificate using Let's Encrypt, then configure the Nginx instance to listen only on 443, the default HTTPS port. To do so we need a real domain name (or at least a subdomain), since the .localhost TLD is not valid.
You need to replace every occurrence of cloud.example.localhost in the project with your own domain name.
Generate your SSL certificate
To use the certbot container and generate the SSL certificate for your domain name, you first need to uncomment these lines in nginx/sites/default.conf:
server {
    listen 80;
    listen [::]:80;

    server_name cloud.example.localhost;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://cloud.example.localhost$request_uri;
    }
}
Don't forget to replace cloud.example.localhost with your real domain name here too. This way we can validate our host via the HTTP challenge.
We also need to use the default HTTP port 80, rather than our previous custom 8888, in our Nginx proxy container nc-web.
In docker-compose.yml, replace
ports:
  - "8888:80"
  - "443:443"
with
ports:
  - "80:80"
  - "443:443"
Don't forget to recreate the stack so the new port mapping is applied (a plain docker-compose restart won't pick up changes to docker-compose.yml):
docker-compose up -d
Now we can generate the certificate. First, let's try in dry-run mode:
docker-compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ --dry-run -d yourdomain.com
If everything went smoothly, generate the real certificate by removing the --dry-run flag:
docker-compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d yourdomain.com
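Let's Encrypt certificates expire after 90 days, so you'll want to renew them periodically. A possible sketch, reusing the same certbot service and reloading Nginx afterwards (the nc-web container name comes from the compose file above; wire this into cron or a systemd timer as you see fit):

```shell
# Renew any certificate that is close to expiry
docker-compose run --rm certbot renew

# Reload Nginx so it picks up the renewed certificate
docker exec nc-web nginx -s reload
```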
Finally, we need to update nginx/sites/default.conf to listen on the default SSL port and load the SSL certificates, by uncommenting these lines:
server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;

    server_name yourdomain.com;

    # Path to the root of your installation
    root /var/www/html;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    ssl_certificate /etc/nginx/ssl/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Prevent nginx HTTP Server Detection
    server_tokens off;

    # ECDHE forward secrecy
    ssl_ciphers "HIGH:!aNULL:!MD5:!ADH:!RC4:!DH";
    ssl_prefer_server_ciphers on;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
This way you'll end up with an A+ SSL grade according to SSL Labs.
Backup
We created two Docker volumes: one for the PostgreSQL database and another for the actual Nextcloud installation path, which also contains the data folder. So to back up the Nextcloud instance, all you have to do is dump the database and back up the cloud files in the data folder.
To backup or restore those two volumes, just refer to the official docker documentation about volumes.
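As an illustration, a minimal backup run could look like this. This is a sketch, assuming the service and variable names used throughout this post; note that docker-compose prefixes the nextcloud volume with the project name, so check the exact name with `docker volume ls` first:

```shell
#!/bin/sh
# Dump the PostgreSQL database from the db container
docker-compose exec -T db pg_dump -U postgres "$DATABASE_NAME" > "nextcloud-db-$(date +%F).sql"

# Archive the Nextcloud volume (code + data folder) through a throwaway container;
# replace nextcloud_nextcloud with your actual volume name
docker run --rm -v nextcloud_nextcloud:/var/www/html:ro -v "$PWD":/backup alpine \
  tar czf "/backup/nextcloud-files-$(date +%F).tar.gz" -C /var/www/html .
```

For a consistent snapshot, consider putting the instance in maintenance mode while the backup runs.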
Conclusion
We built a cutting edge dockerized Nextcloud instance.
You can find all the related code on this Github repository, feel free to contribute.
Thank you for reading.