When deploying a Laravel application, the goal is a deployment process that is as fast and secure as possible. A big part of achieving this is choosing the right base Linux image for the container image in which the application will run and later be deployed.
Alpine Linux has earned a reputation as one of the smallest and fastest base distros for containers in any language. Since the early days of Docker, the popularity of the Alpine distro has grown and keeps growing because it is a tiny, container-focused, and security-focused distro.
To run the application, PHP and Composer alone aren't enough: NGINX and Supervisor are also required, and this is where a little complexity comes in. Don't worry, though; the Dockerfile will be dissected piece by piece, and you will get to understand why things are the way they are.
Contents
- Dockerfile
- Defining image bases
- Software installation
- Software configuration
- Build process
- Container execution
Dockerfile
Down below, there is an entire Dockerfile used locally and in production to serve a Laravel application. Notice that it's not optimized to have a minimal number of layers, and that is on purpose, since we will grab small pieces of the file and understand what each part does.
FROM alpine:latest
WORKDIR /var/www/html/
# Essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor
# Installing bash
RUN apk add --no-cache bash
RUN sed -i 's/bin\/ash/bin\/bash/g' /etc/passwd
# Installing PHP
RUN apk add --no-cache php82 \
php82-common \
php82-fpm \
php82-pdo \
php82-opcache \
php82-zip \
php82-phar \
php82-iconv \
php82-cli \
php82-curl \
php82-openssl \
php82-mbstring \
php82-tokenizer \
php82-fileinfo \
php82-json \
php82-xml \
php82-xmlwriter \
php82-simplexml \
php82-dom \
php82-pdo_mysql \
php82-pdo_sqlite \
php82-pecl-redis
RUN ln -s /usr/bin/php82 /usr/bin/php
# Installing composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php
# Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini
# Configure PHP
RUN mkdir -p /run/php/
RUN touch /run/php/php8.2-fpm.pid
COPY .docker/php-fpm.conf /etc/php82/php-fpm.conf
COPY .docker/php.ini-production /etc/php82/php.ini
# Configure nginx
COPY .docker/nginx.conf /etc/nginx/
COPY .docker/nginx-laravel.conf /etc/nginx/http.d/
RUN mkdir -p /run/nginx/
RUN touch /run/nginx/nginx.pid
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
# Building process
COPY . .
RUN composer install --no-dev
RUN chown -R nobody:nobody /var/www/html/storage
EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
Defining image bases
The first step towards the construction of a Dockerfile is to create the file itself and define a Linux distribution and its version. Once that is done, you can start composing your Dockerfile with the instructions needed to build your container image.
FROM alpine:latest
WORKDIR /var/www/html/
The FROM instruction sets the base image for subsequent instructions. Notice that alpine:latest gets defined, which sets the base Linux image. After the distro name, there is a : used to specify a tag or version, so when the instruction FROM alpine:latest gets interpreted, it sets alpine at the latest version as the base image.
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. Once the instruction WORKDIR /var/www/html/ is interpreted, every command execution in the Dockerfile takes place in /var/www/html/.
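One caveat worth hedging: alpine:latest always resolves to the newest release, so package names and versions can shift underneath you between builds (a failure mode discussed in the comments below). If you need reproducible builds, pinning the tag is the usual alternative; the version below is illustrative only:
# Pin a specific release instead of latest (3.18 is a hypothetical choice; pick the
# release your packages, such as php82, were tested against)
FROM alpine:3.18
WORKDIR /var/www/html/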
Software installation
Now that the container image base has been defined, it's time to look into the software we need to install to run the application. As mentioned, PHP, Composer, NGINX, and Supervisor have to be installed, but that's not all: since these pieces of software have dependencies, those have to be installed too. Here is the installation process broken down into understandable pieces:
Install essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor
The first RUN instruction executes a command in a new layer on top of the current image and commits the result. Hence, when RUN echo "UTC" > /etc/timezone is interpreted, the echo command writes the string UTC into the /etc/timezone file. As a result of the command's execution, UTC becomes the standard timezone.
In the second RUN instruction, an apk command appears; apk is the Alpine package manager, analogous to apt on Ubuntu. With that said, when RUN apk add --no-cache zip unzip curl sqlite nginx supervisor is processed, it installs those packages on top of the base image.
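The --no-cache flag deserves a note: it makes apk fetch the package index on the fly instead of persisting it in the image layer, keeping the image small. It is roughly equivalent to the more verbose pattern below, shown only for illustration:
# Roughly equivalent, more verbose form (illustration only; keep --no-cache in the real file)
RUN apk update \
    && apk add zip unzip curl sqlite nginx supervisor \
    && rm -rf /var/cache/apk/*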
Install bash
RUN apk add bash
RUN sed -i 's/bin\/ash/bin\/bash/g' /etc/passwd
The first RUN instruction installs bash. The second sets it as the standard shell by replacing the string /bin/ash with /bin/bash in the /etc/passwd file. This change is made because the Alpine standard shell, ash, works differently, and those differences can get in your way when you or your team need to execute a shell script in the container.
Install PHP
RUN apk add --no-cache php82 \
php82-common \
php82-fpm \
php82-pdo \
php82-opcache \
php82-zip \
php82-phar \
php82-iconv \
php82-cli \
php82-curl \
php82-openssl \
php82-mbstring \
php82-tokenizer \
php82-fileinfo \
php82-json \
php82-xml \
php82-xmlwriter \
php82-simplexml \
php82-dom \
php82-pdo_mysql \
php82-pdo_sqlite \
php82-pecl-redis
RUN ln -s /usr/bin/php82 /usr/bin/php
The first RUN instruction installs PHP and all the listed extensions. As mentioned before, this Dockerfile gets used to serve Laravel applications, so the set of PHP extensions is opinionated and may change depending on the framework or application you are trying to run.
The second RUN instruction creates a symbolic link named php that points to the php82 binary in the /usr/bin directory, so the interpreter can be invoked as plain php.
Lastly, you can find out what each PHP extension does by searching for it in the PHP extensions documentation and on the pages of PECL, the PHP extension community library.
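If you want to confirm which extensions actually made it into the image, a quick check (assuming the image gets tagged laravel-alpine:latest, as in the build command at the end of the article) could look like this:
docker run --rm laravel-alpine:latest php -m                 # list every compiled-in module
docker run --rm laravel-alpine:latest php -m | grep -i pdo   # narrow down to the PDO drivers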
Install Composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php
In the first RUN instruction, the installer script composer-setup.php gets downloaded from Composer's official page. In the second instruction, the script installs Composer into the /usr/local/bin directory. Lastly, the script gets removed after the installation, since the system no longer has any use for it.
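The installer can also be verified before it runs. The sketch below checks the script against the SHA-384 signature published at composer.github.io/installer.sig; treat it as an optional hardening step and confirm the current procedure on getcomposer.org before adopting it:
# Sketch: download, verify, install, and clean up in one layer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php \
    && curl -sS https://composer.github.io/installer.sig -o composer-setup.sig \
    && php -r "exit(hash_file('sha384', 'composer-setup.php') === trim(file_get_contents('composer-setup.sig')) ? 0 : 1);" \
    && php composer-setup.php --install-dir=/usr/local/bin --filename=composer \
    && rm -f composer-setup.php composer-setup.sig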
Software configuration
Now that all the needed software is installed, it has to be configured and tied together to make serving a Laravel application work as expected.
Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini
In the RUN instruction, the Dockerfile specifies that the directory supervisor.d has to be created inside the /etc/ directory. This directory will hold initializer files with the instructions Supervisor runs when the container starts (which, inside a container, is also when the OS starts, since these two events cannot happen without each other).
In the COPY instruction, the supervisord.ini file gets copied from a local .docker folder into the /etc/supervisor.d/ container folder. As mentioned above, this file contains the instructions that Supervisor will act upon, and these instructions are:
[supervisord]
nodaemon=true
[program:nginx]
command=nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:php-fpm]
command=php-fpm82
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Explaining supervisord.ini
- nodaemon=true
Start Supervisor in the foreground instead of daemonizing.
- command=nginx
The command that will run when Supervisor starts.
- stdout_logfile=/dev/stdout
Redirect all output to the container's standard output device, allowing us to see Supervisor logs about NGINX execution when running docker logs MY_CONTAINER or docker-compose up to start the container stack.
- stdout_logfile_maxbytes=0
The maximum number of bytes stdout_logfile may consume before being rotated; since /dev/stdout is a device rather than a regular file, rotation has to be deactivated by setting maxbytes to 0.
- stderr_logfile=/dev/stderr
Redirect all errors to the container's standard error device, allowing us to see Supervisor logs about NGINX execution the same way.
- stderr_logfile_maxbytes=0
Likewise, since /dev/stderr is a device rather than a regular file, rotation has to be deactivated by setting maxbytes to 0.
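With both programs writing to the container's standard streams, their output surfaces through Docker's regular tooling; the container name below is illustrative:
docker logs -f my-laravel-container   # follow NGINX and PHP-FPM output managed by Supervisor
docker-compose up                     # the same logs appear when starting the container stack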
Configure PHP
RUN mkdir -p /run/php/
RUN touch /run/php/php8.2-fpm.pid
COPY .docker/php-fpm.conf /etc/php82/php-fpm.conf
COPY .docker/php.ini-production /etc/php82/php.ini
In the first RUN statement, the Dockerfile specifies that the directory php has to be created inside the /run/ directory. This directory will hold .pid files that contain the process ID specific to the software.
The second statement creates the file php8.2-fpm.pid inside the /run/php/ directory. Now the Alpine distro has a file to store the process ID that will be created when PHP-FPM starts.
The third statement copies a php-fpm.conf file from a local .docker folder into the /etc/php82/ container folder; this file contains all the settings PHP-FPM will run under, and its content is shown further below.
The fourth statement copies a php.ini-production file from a local .docker folder into the /etc/php82/ container folder as php.ini. This file contains all the settings PHP will run under; its content was copied from PHP's official repository on GitHub. As for php-fpm.conf, these are its configurations:
;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;
; All relative paths in this configuration file are relative to PHP's install
; prefix (/usr). This prefix can be dynamically changed by using the
; '-p' argument from the command line.
;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;
[global]
; Pid file
; Note: the default prefix is /var
; Default Value: none
pid = /run/php/php8.0-fpm.pid
; Error log file
; If it's set to "syslog", log is sent to syslogd instead of being written
; in a local file.
; Note: the default prefix is /var
; Default Value: log/php-fpm.log
error_log = /proc/self/fd/2
; syslog_facility is used to specify what type of program is logging the
; message. This lets syslogd specify that messages from different facilities
; will be handled differently.
; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)
; Default Value: daemon
;syslog.facility = daemon
; syslog_ident is prepended to every message. If you have multiple FPM
; instances running on the same server, you can change the default value
; which must suit common needs.
; Default Value: php-fpm
;syslog.ident = php-fpm
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
;log_level = notice
; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
;emergency_restart_threshold = 0
; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated. This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;emergency_restart_interval = 0
; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0
; The maximum number of processes FPM will fork. This has been design to control
; the global number of processes when using dynamic PM within a lot of pools.
; Use it with caution.
; Note: A value of 0 indicates no limit
; Default Value: 0
; process.max = 128
; Specify the nice(2) priority to apply to the master process (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
; - The pool process will inherit the master process priority
; unless it specified otherwise
; Default Value: no set
; process.priority = -19
; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
daemonize = no
; Set open file descriptor rlimit for the master process.
; Default Value: system defined value
;rlimit_files = 1024
; Set max core size rlimit for the master process.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0
; Specify the event mechanism FPM will use. The following is available:
; - select (any POSIX os)
; - poll (any POSIX os)
; - epoll (linux >= 2.5.44)
; - kqueue (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)
; - /dev/poll (Solaris >= 7)
; - port (Solaris >= 10)
; Default Value: not set (auto detection)
;events.mechanism = epoll
; When FPM is build with systemd integration, specify the interval,
; in second, between health report notification to systemd.
; Set to 0 to disable.
; Available Units: s(econds), m(inutes), h(ours)
; Default Unit: seconds
; Default value: 10
;systemd_interval = 10
;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;
; Multiple pools of child processes may be started with different listening
; ports and different management options. The name of the pool will be
; used in logs and stats. There is no limitation on the number of pools which
; FPM can handle. Your system will tell you anyway :)
; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
; Relative path can also be used. They will be prefixed by:
; - the global prefix if it's been set (-p argument)
; - /usr otherwise
include=/etc/php82/php-fpm.d/*.conf
Notice that php-fpm.conf doesn't have any custom configuration or optimization; feel free to configure this file according to your needs.
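As an example of the kind of tuning you might add, the include on the last line pulls in pool files from /etc/php82/php-fpm.d/, where the standard process-manager directives live. The values below are purely illustrative and should be sized against your own traffic and memory budget:
; Hypothetical tuning in /etc/php82/php-fpm.d/www.conf
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4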
Configure NGINX
COPY .docker/nginx.conf /etc/nginx/
COPY .docker/nginx-laravel.conf /etc/nginx/http.d/
RUN mkdir -p /run/nginx/
RUN touch /run/nginx/nginx.pid
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
In the first statement, nginx.conf gets copied from a local .docker folder into the /etc/nginx/ container folder. This file contains the global configuration NGINX runs under, and down below you can check the file content:
# /etc/nginx/nginx.conf
user nobody;
# NGINX will run in the foreground
daemon off;
# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;
# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;
# Configures default error logger.
error_log /var/log/nginx/error.log warn;
# Uncomment to include files with config snippets into the root context.
# NOTE: This will be enabled by default in Alpine 3.15.
# include /etc/nginx/conf.d/*.conf;
events {
# The maximum number of simultaneous connections that can be opened by
# a worker process.
worker_connections 1024;
}
http {
# Includes mapping of file name extensions to MIME types of responses
# and defines the default type.
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;
# Name servers used to resolve names of upstream servers into addresses.
# It's also needed when using tcpsocket and udpsocket in Lua modules.
#resolver 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001;
# Don't tell nginx version to the clients. Default is 'on'.
server_tokens off;
# Specifies the maximum accepted body size of a client request, as
# indicated by the request header Content-Length. If the stated content
# length is greater than this size, then the client receives the HTTP
# error code 413. Set to 0 to disable. Default is '1m'.
client_max_body_size 1m;
# Sendfile copies data between one FD and other from within the kernel,
# which is more efficient than read() + write(). Default is off.
sendfile on;
# Causes nginx to attempt to send its HTTP response head in one packet,
# instead of using partial frames. Default is 'off'.
tcp_nopush on;
# Enables the specified protocols. Default is TLSv1 TLSv1.1 TLSv1.2.
# TIP: If you're not obligated to support ancient clients, remove TLSv1.1.
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
# Path of the file with Diffie-Hellman parameters for EDH ciphers.
# TIP: Generate with: `openssl dhparam -out /etc/ssl/nginx/dh2048.pem 2048`
#ssl_dhparam /etc/ssl/nginx/dh2048.pem;
# Specifies that our cipher suits should be preferred over client ciphers.
# Default is 'off'.
ssl_prefer_server_ciphers on;
# Enables a shared SSL cache with size that can hold around 8000 sessions.
# Default is 'none'.
ssl_session_cache shared:SSL:2m;
# Specifies a time during which a client may reuse the session parameters.
# Default is '5m'.
ssl_session_timeout 1h;
# Disable TLS session tickets (they are insecure). Default is 'on'.
ssl_session_tickets off;
# Enable gzipping of responses.
#gzip on;
# Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
gzip_vary on;
# Helper variable for proxying websockets.
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Specifies the main log format.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Sets the path, format, and configuration for a buffered log write.
access_log /var/log/nginx/access.log main;
# Includes virtual hosts configs.
include /etc/nginx/http.d/*.conf;
}
# TIP: Uncomment if you use stream module.
#include /etc/nginx/stream.conf;
The second statement copies nginx-laravel.conf from a local .docker folder into the /etc/nginx/http.d/ container folder, where the include /etc/nginx/http.d/*.conf; line of nginx.conf picks it up as a virtual host. This file contains all the configuration NGINX needs to serve Laravel correctly, and down below you can check the file content:
server {
    listen 80;
    server_name localhost;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
The third statement specifies that the directory nginx has to be created inside the /run/ directory. As mentioned in the PHP-FPM configuration section, the run directory holds .pid files where the process ID of a specific piece of software gets written.
The fourth statement creates the file nginx.pid inside the /run/nginx/ directory. Now the Alpine distro has a file to store the process ID that will be created when NGINX starts.
The fifth statement instructs that a symbolic link to the Alpine standard output be created at /var/log/nginx/access.log. This configuration, as mentioned in the Supervisor section, is what allows us to see NGINX logs from containers.
Lastly, the sixth statement instructs that a symbolic link to the Alpine standard error be created at /var/log/nginx/error.log. This configuration, as mentioned in the Supervisor section, is what allows us to see NGINX errors from containers.
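A quick sanity check inside a running container (the container name is illustrative) confirms both links point at the container's streams:
docker exec my-laravel-container readlink /var/log/nginx/access.log   # expected: /dev/stdout
docker exec my-laravel-container readlink /var/log/nginx/error.log    # expected: /dev/stderr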
Build process
The build process is where the application gets copied into the container and its dependencies get installed, leaving the Laravel application ready to be served by NGINX, PHP-FPM, and Supervisor.
COPY . .
RUN composer install --no-dev
At the COPY statement, all Laravel files and folders from the directory where the Dockerfile lives get copied into the working directory specified by the WORKDIR instruction.
At the RUN statement, production dependencies from the Laravel application get installed, making the application ready to be served by Supervisor, NGINX, and PHP-FPM.
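For production images, Composer offers a couple of extra flags worth considering; a hedged variant of the same step (the flags are standard Composer options, not something this article's setup requires) could look like this:
# Sketch: skip dev packages, build an optimized autoloader, and never prompt
RUN composer install --no-dev --optimize-autoloader --no-interaction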
Container execution
Now that everything is installed and properly configured, we need to know how this container image will start serving the application once the container starts and what TCP port to use.
EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime; it documents the port rather than publishing it. The CMD instruction, in turn, provides the default command for an executing Docker container.
Now your Dockerfile is finally done, and you can build a container image from it by executing docker build -t laravel-alpine:latest . --no-cache in your terminal.
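Since EXPOSE only documents the port, it still has to be published when the container starts. A minimal run command, with the host port mapping as an illustrative choice, looks like this:
docker run -d -p 80:80 laravel-alpine:latest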
Happy coding!
Top comments (41)
Thank you for sharing this great and detailed post, it really helped me.
I have just a question, as I am a bloody beginner here: you are combining PHP and NGINX into one container. Until now, I have always tried to avoid this by following the one-function-per-container principle.
Especially if you want to scale the application, you will scale NGINX as well, which might not be necessary. Is there a special reason why you go this way and do not use a separate container for NGINX?
Hi @myfriendlyusername,
I use both approaches, but here is what I usually consider before choosing one over the other. I like to analyze the context of the teams and projects. Decoupling NGINX can make your container smaller or use a little less hardware, but it's not necessarily the simplest, easiest, or safest way to go, especially if you have only a few projects.
In my opinion, decoupling NGINX from PHP makes sense when your team has a good understanding of containers and at least minimal knowledge of server architecture, or when you have a significant number of microservices and explicitly want to use NGINX as a reverse proxy to upstream requests to the containers.
But if your reverse proxy runs in a single container, you will be creating a single point of failure: if that container goes offline even for a few seconds, all of your containers will be unreachable, and this may cause problems, especially if you have a high volume of requests/transactions. With NGINX embedded in the application container, you don't have a proxy of your own making in the middle of the request process, which removes this single point of failure.
Also, the technology used to manage the containers will influence your decision. Let's take AWS ECS and AWS EKS as examples:
If you choose to run your containers using ECS, it doesn't matter that much whether NGINX is embedded in the container image, because ECS is a simpler cluster abstraction that accepts both ways, and you could suffer from the problems I've described above.
But if you are running your containers on EKS, it may be preferable not to embed NGINX, because Kubernetes has the ingress controller component tied to the cluster. The ingress controller is a special implementation of NGINX, so you could just configure the upstream and your proxy would be running. In this case, there is no single point of failure of your own making: if the ingress controller stops working, the entire cluster goes offline, so it's not a problem with how you architected your infrastructure but a problem with a piece of the infrastructure itself.
Finally, I would like to reinforce that I don't discourage anyone from trying to remove NGINX completely; this is just me sharing the way I do things. I don't know other people's contexts, so feel free to adapt anything you saw in the article to your context, and if there is anything I can help with, let me know.
You don't sound like much of a beginner to me! I agree 100% on separating your web server from your PHP app container, especially if you're using NGINX as a reverse proxy to php-fpm. It's super easy to set up using docker-compose and the base NGINX container. You can still use Alpine as the base for your app container.
I also don't see the need for some of the additions. You don't need or want Composer in your app image; you can use the Composer Docker image as part of the build process to install app dependencies. Of course, you can use a docker-compose override to add it locally for dev purposes as well. The same goes for node/npm, Supervisor, and bash.
Hi James ... sorry for a question years later ... but just starting my "docker containers" journey. I began with separate nginx and php-fpm containers. I pass php requests via the internal network - whatever:9000
This works fine, all good. However, I mount the same (host) web dir into both containers (using volumes, since I would prefer not to have the same codebase copied to 2 containers). I'm building an API using Laravel/Breeze; there are no static files other than project docs, which I can handle in the nginx config.
That said, this is not 'reverse proxying' ... which I cannot understand since php-fpm is of course not handling http requests. So, if I start another webserver as either a separate container or integrated in one with php-fpm, have I not just added complexity? and pretty much recreated the author's solution?
What am I missing in this "super easy to setup" reverse proxy in container land ?
Any advice or directions-to-docs etc would be appreciated.
Thanks for this! Any idea how to resolve this error? I maybe missing some steps
2021/03/17 09:49:01 [crit] 12#12: *6 connect() to unix:/run/php/php7.4-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 172.29.0.1, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:"
Hi ! I am getting the same error. Did you find any solution to this?
Hello. Sorry for the late reply. Do you still need help with this?
@atreya Hi, I have the same problem. Could you share the solution if you managed to solve this problem?
This is happening because the nginx process, running as the nginx user, cannot access the Laravel files: they were copied into the container as the root user. One way to solve this is to set the following in the www.conf file used for php-fpm (the section will already be present, if I am not mistaken):
user = nginx
And then, when copying the Laravel files in the Dockerfile, change the owner to nginx, so that the nginx process can access the Laravel files via the php-fpm process, with the php-fpm process also running as the nginx user because of the above setting in the www.conf file.
Let me know if this solved your problem
@dziurka @lvidalio
@atreya can you show how to edit the conf, or edit your original post? I'm getting the same error.
Sorry for the late reply. This is the complete conf file.
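The complete file itself wasn't reproduced in the thread; as a hedged sketch of the relevant section based on the fix described above (the path and values are assumptions, not the commenter's exact file):
; Sketch of /etc/php82/php-fpm.d/www.conf
[www]
user = nginx
group = nginx
# And in the Dockerfile, copy the application with nginx as the owner:
COPY --chown=nginx:nginx . .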
If you are getting the error "ModuleNotFoundError: No module named 'pkg_resources'", you need to install python3:
RUN apk add --no-cache zip unzip curl sqlite nginx python3-dev python3 supervisor \
&& curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py
Thanks for that!
I have successfully built this Dockerfile and my local web app runs successfully, but when I push the same code to the development stage, the AWS pipeline shows an error while installing packages from the Dockerfile, like this:
ERROR: unable to select packages:
php8-pdo_pgsql (no such package):
required by: world[php8-pdo_pgsql]
php8-pdo_sqlite (no such package):
required by: world[php8-pdo_sqlite]
php8-pecl-redis (no such package):
required by: world[php8-pecl-redis]
php8-phar (no such package):
required by: world[php8-phar]
php8-simplexml (no such package):
required by: world[php8-simplexml]
php8-tokenizer (no such package):
required by: world[php8-tokenizer]
php8-xml (no such package):
required by: world[php8-xml]
php8-xmlreader (no such package):
required by: world[php8-xmlreader]
php8-xmlwriter (no such package):
required by: world[php8-xmlwriter]
php8-zip (no such package):
required by: world[php8-zip]
The command '/bin/sh -c apk add --no-cache php8 php8-common php8-fpm php8-pdo php8-opcache php8-zip php8-phar php8-iconv php8-cli php8-curl php8-openssl php8-mbstring php8-tokenizer php8-fileinfo php8-json php8-xml php8-xmlwriter php8-simplexml php8-dom php8-pdo_mysql php8-pdo_sqlite php8-tokenizer php8-pecl-redis php8-gd php8-pdo_pgsql php8-xmlreader' returned a non-zero code: 25
please help
@akbarsyidiqi try to check whether this is happening because locally you have an alpine:latest version that's different from the alpine:latest that the AWS pipeline is pulling.
As you can check here, a new version of Alpine was released last month, and it is not unusual for packages to have their names changed or adjusted after a version release.
The way I keep Dockerfiles is a double-edged sword: tagging the latest version of a base image like this exposes you to these errors sooner, and as a result you will always have your Dockerfile updated.
If you would rather not keep living through this experience, or your project has constraints where this approach would cause too many problems, I would recommend pinning your Alpine version to something static like alpine:3.14.
You're right. I changed my Dockerfile image to FROM alpine:3.16 because I am using PHP 8.0, and then it was successfully built.
But during the deploy process in the AWS pipeline, it shows a different error: 404 Not Found, "Task failed container health checks".
The expected health check is api/health with a 200 status code response (the API route exists in my code), but it returns 404 Not Found during the deploy process.
I don't understand: the build succeeds, but the health check fails on deploy. Do you know anything about this?
I previously had a Dockerfile with FROM php:8.0-fpm (it works, the health check works, api/health responds), but after AWS scanned the image it showed many vulnerabilities, so I decided to change my Dockerfile to an Alpine image (FROM alpine:3.16) to decrease the vulnerabilities.
@akbarsyidiqi, could you share more information about this health check? Once your AWS pipeline finishes building the Docker image, where is it trying to deploy it? EC2, ECS, or EKS? Was this api/health endpoint defined by you in your Laravel app? Have you made sure that you can get the expected result of this health check locally before deploying?
It's trying to deploy to ECS. The api/health endpoint is already defined in my Laravel code (AWS expects a 200 status code response but receives Not Found).
The log in AWS:
service v2-stg-myrepo (port 80) is unhealthy in target-group city-v2-staging-myrepo due to (reason Health checks failed with these codes: [404]).
[28/Dec/2022:08:38:19 +0000] "GET /api/health HTTP/1.1" 404 146 "-" "ELB-HealthChecker/2.0" "-" ecs/fe-myrepo/b6e0e08ea84e43e7b50454fd2c2db
The api/health route goes to a function like this:
public function health()
{
    return response(["status" => 200], 200)
        ->header('Content-Type', 'application/json');
}
In my local env, everything works: api/health works and the web app runs smoothly.
Very useful and detailed, thanks!
One question: if both nginx and php-fpm are in the same container, shouldn't it be faster to use a socket instead of TCP?
Have you ever tried Swoole, in order to drop nginx entirely?
Thanks and keep up
Hey Fabio,
Even though sockets may be faster, it seems simpler to use TCP over a socket in this scenario, because the socket file is not created automatically.
About Swoole, I've never tried it, and being completely honest, I had never heard about it until now. But I took a quick look into the documentation, and under the HTTP Server section they mention the use of NGINX.
I guess you can drop NGINX; even with Laravel you can avoid using NGINX by running php artisan serve as the CMD of the container, but you will lose the ability to do the fine-tuning of request handling that NGINX provides.
I don't discourage anyone from trying to remove NGINX completely; this is just me sharing the way I do things in production. I don't know other people's contexts, so I'm not going to say much more than that there is fine-tuning you can do in NGINX that may improve your app's performance.
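For reference, switching from TCP to a Unix socket would touch two files; the lines below are a hedged sketch (the socket path is an assumption, and the ownership directives matter, this being exactly the permission pitfall discussed in an earlier thread):
; In the php-fpm pool file (e.g., /etc/php82/php-fpm.d/www.conf)
listen = /run/php/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
# In the NGINX server block, instead of fastcgi_pass localhost:9000;
fastcgi_pass unix:/run/php/php-fpm.sock;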
Thanks for taking the time to answer me :)
Yes, I pointed out both topics because I guess that using a "micro" distro such as Alpine is almost mandatory if you have to deploy containers in a serverless/managed/whatever context, where the size of the artifacts (builds, images, registries, etc.) is very important, along with "internal" optimizations.
IMHO, at least in my experience, the setup and tuning of these containers is quite different between local development, production with all the features that Laravel brings so well, and production for services or "microservices", especially if you have to deploy them, for example, on Google Cloud Run or similar;
Swoole itself contains a full HTTP(S)/UDP/SOCKET server (cfr: swoole.co.uk/docs/modules/swoole-h...) with async support (and many other features);
as you can see (and I tell this as a true PHP/Nginx/Laravel lover), configuring a proper "env" for PHP and all the dependencies required by Laravel is not so "simple and clean" compared to other solutions such as Node, Python, and Golang (especially for services that do not require a "full" HTTP server);
I think Nginx is just another "dependency" to install, maintain, and configure "properly", but I guess it is mandatory if you have to serve static files or other stuff related to a full, powerful HTTP server;
Swoole has nothing to do with "php artisan serve" (which is very slow and should never be used in production), so the "best fit" is for "services", and so should be the use of "Alpine" and "micro" distros in general;
quoting the man page:
"Compare with PHP-FPM, the default Golang HTTP server, the default Node.js HTTP server, Swoole HTTP server performs much better. It has the similar performance compare with the Nginx static files server."
that - at least for me - is very exciting, and with the upcoming release of PHP 8 and its JIT compiler I think it is actually possible to write great applications and/or services with Docker/PHP/Laravel/Lumen, even if the "PHP haters" are not so convinced :D
Thanks
Notes from a Beginner:
1) Make sure you are in the directory containing the "app" directory (not your project directory) and that your docker file is located in that directory before executing the docker build command. Then run the container with the next step.
2) Probably second nature to all Docker Pro's but I needed a little reminder...
docker run -d -p 80:80 laravel-alpine:latest
Other than that, this image built and ran with no issues for me as published. It took me 3 days to make it work, but that's on me for not being in the proper directory when the COPY . . instruction was executed.
I would like to change this to a docker-compose.yml setup; that will be my next step so I can build the SQL, Redis, MailHog, etc. containers to interact.
Thank you so much for this. I learned a lot from your example as a beginner. It was just what I was looking for. There is a lot I don't understand regarding the Supervisor and PID aspects. I will delve into that on my own and try to understand it and why you used it.
Hi @timhuey,
I'm super glad that this post has helped you! I'm reaching out to let you know that I also have a post about how to extend the Dockerfile of this post into a docker-compose.yml; you can read it here: dev.to/jackmiras/docker-compose-fo....
About the Supervisor and PID aspects, it would be my pleasure to help you understand their role in the Dockerfile. You can find my email on my profile; feel free to mail me with any doubts you have.
Thanks for sharing; it is useful for everyone who uses Laravel or any PHP framework.
In our case it is more useful to have NGINX and PHP together like that, because we are planning to use Amazon ECS with Fargate for a smaller project.
And a sidecar container for each PHP task would make the project too expensive, making it unfeasible.
Amazing post, thank you very much for the writeup!
Suggestion: to set up Composer you can use the same logic of a base file and simply do:
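The snippet itself wasn't captured here; a common pattern matching this suggestion (an assumption on my part, not necessarily the commenter's exact code) is to copy the binary straight from the official Composer image:
# Sketch: reuse the official Composer image as a build-time source for the binary
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer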
Appreciate this post, but as written it doesn't work. There's the socket issue mentioned earlier, though it looks to me like the solution in the comments is incorrect: if you're using a socket, you need to set php-fpm to listen on that socket.
Also, for some reason tinker connects to my database, but the app does not. I'm assuming this is some sort of permissions issue (will report back when/if I figure it out).
I ended up rewriting a lot; best I can figure, making the socket and the php-fpm pid is the culprit. Go with the default for the socket. Also, change ownership of anything created to nobody and use that user to start things up.
and use that user to start things up.Hey @jonnylink ,
I've just updated the article because I noticed it was missing some configs related to NGINX, and after updating I saw your comments; they seem related to my current update of the article.
In case you have the time, I would appreciate a review of the article to double-check that everything works the way it is supposed to. If you run into any problems, I would be happy to help.
Hi! Thank you for the wonderful tutorial! It appears to be the closest one that meets my requirements. However, would it be possible to upload the source code to a repository? I'm encountering some issues with PHP 8.3 (as we need to stay up to date). Additionally, in the FPM section you have the line pid = /run/php/php8.0-fpm.pid.
This seems to refer to an older version, whereas you use PHP 8.2 elsewhere in the tutorial.