When deploying a Laravel application, the goal is to make sure that the deployment process is as fast and secure as possible. A big part of achievi...
Thank you for sharing this great and detailed post; it really helped me.
I have just a question, as I am a bloody beginner here: you are combining PHP and NGINX into one container. Until now, I always tried to avoid this by following the one-function-per-container principle.
Especially if you want to scale the application, you will scale NGINX as well, which might not be necessary. Is there a special reason why you go this way and do not use a separate container for NGINX?
Hi @myfriendlyusername,
I use both ways, but here is what I usually consider before going one way or the other. I like to analyze the context of the teams and projects. Decoupling NGINX can make your container smaller or use a little less hardware, but it's not necessarily the simplest, easiest, or safest way to go, especially if you have only a few projects.
In my opinion, decoupling NGINX from PHP makes sense when your team has a good understanding of containers and at least a minimum knowledge of server architecture, or if you have a significant number of microservices and explicitly want to use NGINX as a reverse proxy to upstream requests to the containers.
But if your reverse proxy runs in a single container, you will be creating a single point of failure: if that container goes offline even for a few seconds, all of your containers will be unreachable, which may cause problems, especially if you have a high volume of requests/transactions. With NGINX embedded in the application container, you don't have a proxy of your own in the middle of the request path, removing this single point of failure.
Also, the technology being used to manage the containers will influence your decision. Let's take AWS ECS and AWS EKS as examples:
If you choose to run your containers using ECS, it doesn't matter that much whether NGINX is embedded in the container image or not, because ECS is a simpler cluster abstraction that accepts both ways, and either way you could suffer from the problems I've described above.
But if you are running your containers on EKS, it may be preferable not to embed NGINX, because Kubernetes ties the ingress controller "component" to the cluster. The ingress controller is commonly an NGINX implementation, so you could just configure the upstream and your proxy would be running. In this case there is no single point of failure of your own making: if the ingress controller stops working, the entire cluster goes offline, so it's not a problem with how you architected your infrastructure but a problem with a piece of the infrastructure itself.
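As an illustration of that EKS setup (my sketch, not from the original comment; all names are hypothetical), a minimal Ingress resource that hands the reverse-proxy role to the cluster's ingress controller could look like this:

# Minimal Ingress sketch: the ingress controller (often NGINX-based)
# proxies traffic to the app's Service, so no NGINX needs to live
# inside the application container.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: laravel-app
                port:
                  number: 80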
Finally, I would like to reinforce that I don't discourage anyone from trying to remove NGINX completely; this is just me sharing the way I do things. Not only that, but I don't know other people's context, so feel free to adapt anything that you saw in the article to your context, and if there is anything I can help with, let me know.
You don't sound like much of a beginner to me! I agree 100% on separating your web server from your PHP app container, especially if you're using nginx as a reverse proxy to php-fpm. It's super easy to set up using docker-compose and the base nginx container. You can still use alpine as the base for your app container.
I also don't see the need for some of the additions. You don't need or want composer in your app image: you can use the composer docker image as part of the build process to install app dependencies (see the sketch below). Of course, you can use a docker compose override to add it to your local setup for dev purposes as well. The same goes for node/npm, supervisor, and bash.
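As a rough sketch of that idea (not the commenter's original code; tags and paths are assumptions), a multi-stage build keeps composer out of the final image:

# Stage 1: install dependencies with the official composer image.
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-interaction --prefer-dist --no-scripts

# Stage 2: the runtime image never contains composer itself.
FROM alpine:3.16
# ...install php-fpm, nginx, etc. as in the article...
WORKDIR /var/www/html
COPY . .
COPY --from=vendor /app/vendor/ ./vendor/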
Hi James... sorry for a question years later, but I'm just starting my "docker containers" journey. I began with separate nginx and php-fpm containers, passing PHP requests via the internal network (whatever:9000).
This works fine, all good. However, I mount the same (host) web dir into both containers (using volumes, and I would prefer not to have the same codebase copied into two containers). I'm building an API using Laravel/Breeze; there are no static files other than project docs, which I can handle in the nginx config.
That said, this is not "reverse proxying", which confuses me, since php-fpm is of course not handling HTTP requests. So, if I start another web server, either as a separate container or integrated into one with php-fpm, haven't I just added complexity and pretty much recreated the author's solution?
What am I missing in this "super easy to set up" reverse proxy in container land?
Any advice or directions to docs etc. would be appreciated.
Thanks for this! Any idea how to resolve this error? I may be missing some steps.
2021/03/17 09:49:01 [crit] 12#12: *6 connect() to unix:/run/php/php7.4-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 172.29.0.1, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:"
Hi! I am getting the same error. Did you find any solution to this?
Hello. Sorry for the late reply. Do you still need help with this?
@atreya Hi, I have the same problem. Could you share the solution if you managed to solve this problem?
This is happening because the nginx process, running as the nginx user, cannot access the Laravel files: they were copied into the container as the root user. One way to solve this is to adjust the www.conf file used by php-fpm. The pool section will already be present, if I am not mistaken; you just have to set user = nginx.
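The original snippet was lost from this comment; a plausible reconstruction of the relevant www.conf pool section (exact socket path and defaults may differ per Alpine/PHP release) is:

; php-fpm pool running as the same user as nginx, so the worker
; processes can read the application files.
[www]
user = nginx
group = nginx
listen = /run/php/php7.4-fpm.sock
listen.owner = nginx
listen.group = nginx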
And then, when you are copying the Laravel files in the Dockerfile, do it with the owner set to nginx (see the sketch below). Basically, you are changing the owner to nginx when copying the files so that the nginx process can access the Laravel files via the php-fpm process, and the php-fpm process is also running as the nginx user because of the setting above in the www.conf file. Let me know if this solved your problem.
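The exact line from the original comment was lost; the described change maps to a COPY with --chown (the target path is an assumption):

# Copy the application files already owned by nginx, so the
# nginx/php-fpm processes can read them at runtime.
COPY --chown=nginx:nginx . /var/www/html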
@dziurka @lvidalio
@atreya can you show how to edit the conf, or edit your original post? I'm getting the same error.
Sorry for the late reply. This is the complete conf file.
If you are getting the error "ModuleNotFoundError: No module named 'pkg_resources'", you need to install python3:
RUN apk add --no-cache zip unzip curl sqlite nginx python3-dev python3 supervisor \
&& curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py
Thanks for that!
I have successfully built this Dockerfile and my local web app runs successfully, but when I push the same code to the development stage, the AWS pipeline shows an error when installing packages from the Dockerfile, like this:
ERROR: unable to select packages:
  php8-pdo_pgsql (no such package):
    required by: world[php8-pdo_pgsql]
  php8-pdo_sqlite (no such package):
    required by: world[php8-pdo_sqlite]
  php8-pecl-redis (no such package):
    required by: world[php8-pecl-redis]
  php8-phar (no such package):
    required by: world[php8-phar]
  php8-simplexml (no such package):
    required by: world[php8-simplexml]
  php8-tokenizer (no such package):
    required by: world[php8-tokenizer]
  php8-xml (no such package):
    required by: world[php8-xml]
  php8-xmlreader (no such package):
    required by: world[php8-xmlreader]
  php8-xmlwriter (no such package):
    required by: world[php8-xmlwriter]
  php8-zip (no such package):
    required by: world[php8-zip]
The command '/bin/sh -c apk add --no-cache php8 php8-common php8-fpm php8-pdo php8-opcache php8-zip php8-phar php8-iconv php8-cli php8-curl php8-openssl php8-mbstring php8-tokenizer php8-fileinfo php8-json php8-xml php8-xmlwriter php8-simplexml php8-dom php8-pdo_mysql php8-pdo_sqlite php8-tokenizer php8-pecl-redis php8-gd php8-pdo_pgsql php8-xmlreader' returned a non-zero code: 25
please help
@akbarsyidiqi try to check whether this is happening because locally you have an alpine:latest version that's different from the alpine:latest that the AWS pipeline is pulling. As you can check here, a new version of Alpine was released last month, and it is not unusual for packages to have their names changed or adjusted after a release.
The way I keep Dockerfiles is a double-edged sword: tagging the latest version of a base image like this will expose you to these errors sooner, but as a result you will always have your Dockerfile up to date.
If you would rather not keep living this experience, or your project has constraints where this approach would cause too many problems, I would recommend you pin your Alpine version to something static like alpine:3.14.
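For illustration (my addition, not from the original reply), the pin is a one-line change at the top of the Dockerfile:

# Pinning the base image avoids surprise package renames on new
# Alpine releases; bump the tag deliberately when you are ready.
FROM alpine:3.14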
You're right. I changed my Dockerfile image to FROM alpine:3.16 because I am using PHP 8.0, and then it was built successfully.
But during the deploy process in the AWS pipeline, it shows a different error: 404 Not Found, "Task failed container health checks".
The expected health check is "api/health" with a 200 status code response (the API route exists in my code), but it shows 404 Not Found during the deploy process.
I don't understand: the build succeeds, but the health check fails on deploy. Do you know anything about this?
I previously had a Dockerfile with FROM php:8.0-fpm (it works, the health check works, api/health responds), but after AWS scanned the image it showed many vulnerabilities, so I decided to change my Dockerfile to an Alpine image (FROM alpine:3.16) to decrease the vulnerabilities.
@akbarsyidiqi, could you share more information about this health check? Once your AWS pipeline finishes building the Docker image, where is it trying to deploy this image? EC2, ECS or EKS? Was this api/health endpoint defined by you in your Laravel app? Have you made sure that you can get the expected result of this health check locally before deploying?
It's trying to deploy to ECS; the api/health endpoint is already defined in my Laravel code (AWS expected a 200 status code response but received Not Found).
The log in AWS:
service v2-stg-myrepo (port 80) is unhealthy in target-group city-v2-staging-myrepo due to (reason Health checks failed with these codes: [404]).
[28/Dec/2022:08:38:19 +0000] "GET /api/health HTTP/1.1" 404 146 "-" "ELB-HealthChecker/2.0" "-" ecs/fe-myrepo/b6e0e08ea84e43e7b50454fd2c2db
response api/health
this api/health route goes to a function like this:
public function health()
{
    return response(["status" => 200], 200)
        ->header('Content-Type', 'application/json');
}
In my local env everything works: api/health works, and the web app runs smoothly.
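One way to narrow this down (a suggestion of mine, not from the thread) is to exec into the running ECS task and hit the endpoint directly, which separates Laravel routing problems from load balancer or target group configuration:

# From inside the container (assumes curl is installed in the image):
curl -i http://localhost/api/health
# A 200 here but a 404 from the ELB points at the target group or
# nginx vhost configuration rather than the Laravel route itself.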
Very useful and detailed, thanks!
One question: if both nginx and php-fpm are in the same container, wouldn't it be faster to use a socket instead of TCP?
Have you ever tried Swoole, in order to drop nginx entirely?
Thanks, and keep it up!
Hey Fabio,
Even though sockets may be faster, it seems simpler to use TCP instead of a socket in this scenario, because the socket file is not automatically created (see the sketch below for both options).
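For reference (my illustration, not from the original reply; paths are assumptions), the two options differ only in the fastcgi_pass line, and the socket variant also requires php-fpm's listen directive to point at the same path:

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;                 # TCP, as in the article
    # fastcgi_pass unix:/run/php/php-fpm.sock;   # socket alternative
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}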
About Swoole, I've never tried it, and being completely honest, I had never heard about it until now. But I took a quick look at the documentation, and under the HTTP Server section they mention the use of NGINX.
I guess you can go without NGINX; even with Laravel you can avoid it by just running php artisan serve as the CMD of the container, but you will lose the ability to do some of the fine-tuning of request handling that NGINX provides.
I don't discourage anyone from trying to remove NGINX completely; this is just me sharing the way I do things in production. Not only that, but I don't know other people's context, so I'm not going to say much more than that there is fine-tuning you can do in NGINX that may improve your app's performance.
Thanks for taking the time to answer me :)
Yes, I pointed both topics out because I guess that using a "micro" distro such as Alpine is almost mandatory if you have to deploy containers in a serverless/managed/whatever context, where the size of the artifacts (builds, images, registries, etc.) is as important as "internal" optimizations.
IMHO, at least in my experience, the setup and tuning of these containers is quite different between local development, production with all the features that Laravel brings so well, and production for services or "microservices", especially if you have to deploy them, for example, on Google Cloud Run or similar.
Swoole itself contains a full HTTP(S)/UDP/socket server (cfr: swoole.co.uk/docs/modules/swoole-h...) with async support (and many other features).
As you can see (and I tell you this as a true PHP/Nginx/Laravel lover), configuring a proper "env" for PHP and all the dependencies required by Laravel is not so "simple and clean" compared to other solutions such as Node, Python and Golang (especially for services that do not require a "full" HTTP server).
I think Nginx is just another "dependency" to install, maintain and configure "properly", but I guess it is mandatory if you have to serve static files or other stuff that calls for a full, powerful HTTP server.
Swoole has nothing to do with "php artisan serve" (which is very slow and should never be used in production), so the "best fit" is for "services", and so should be the use of Alpine and "micro" distros in general.
Quoting the man page:
"Compare with PHP-FPM, the default Golang HTTP server, the default Node.js HTTP server, Swoole HTTP server performs much better. It has the similar performance compare with the Nginx static files server."
That, at least for me, is very exciting, and with the upcoming release of PHP 8 and its JIT compiler I think it is actually possible to write great applications and/or services with Docker/PHP/Laravel/Lumen, even if the "PHP haters" are not so convinced :D
Thanks
Notes from a Beginner:
1) Make sure you are in the directory containing the "app" directory (not your project directory) and that your Dockerfile is located in that directory before executing the docker build command. Then run the container with the next step.
2) Probably second nature to all Docker pros, but I needed a little reminder...
docker run -d -p 80:80 lavarel-alpine:latest
Other than that, this image built and ran with no issues for me as published. It took me 3 days to make it work, but that's on me for not being in the proper directory when the COPY . . command was executed.
I would like to change this to a docker-compose.yml setup; that will be my next step, so I can build the SQL, Redis, MailHog, etc. containers to interact.
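As a sketch of that next step (my illustration, not from the comment; service names, tags, and the password are assumptions):

# docker-compose.yml wiring the image built above to supporting services.
services:
  app:
    image: lavarel-alpine:latest
    ports:
      - "80:80"
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
  redis:
    image: redis:alpine
  mailhog:
    image: mailhog/mailhog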
Thank you so much for this. I learned a lot from your example as a beginner. It was just what I was looking for. There is a lot I don't understand regarding the supervisor and PID aspects; I will delve into that on my own and try to understand it and why you used it.
Hi @timhuey,
I'm super glad that this post has helped you! I'm reaching out to let you know that I also have a post about how to extend the Dockerfile of this post into a docker-compose.yml; you can read it here: dev.to/jackmiras/docker-compose-fo....
About the supervisor and PID aspects, it would be my pleasure to help you understand their role in the Dockerfile. You can find my email on my profile; feel free to mail me with any doubts you have.
Thanks for sharing; it is useful for everyone who uses Laravel or any PHP framework.
In our case it is more useful to have NGINX and PHP together like this, because we are planning to use Amazon ECS with Fargate for a smaller project, and a sidecar container for each PHP task would make the project too expensive, making it unfeasible.
Amazing post, thank you very much for the writeup!
Suggestion: to set up composer you can use the same logic of a base image and simply do:
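The commenter's snippet was lost in extraction; a common pattern matching the description is copying the composer binary from the official image:

# Pull the composer binary straight from the official image instead
# of installing it with apk/curl.
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer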
Appreciate this post, but as written it doesn't work. There's the socket issue mentioned earlier... though it looks to me like the solution in the comments is incorrect: if you're using a socket, you need to set php-fpm to listen on that socket.
Also, for some reason tinker connects to my database, but the app does not. I'm assuming this is some sort of permissions issue (will report back when/if I figure it out).
I ended up rewriting a lot; best I can figure, creating the socket and the php-fpm PID file is the culprit. Go with the default for the socket. Also, change ownership of anything created to nobody and use that user to start things up.
Hey @jonnylink,
I've just updated the article because I noticed that it was missing some configs related to NGINX, and after updating I saw your comments; they seem related to my current update of the article.
In case you have the time, I would appreciate your reviewing the article to double-check that everything works the way it is supposed to. If you run into any problems, I would be happy to help.
Hi! Thank you for the wonderful tutorial! It appears to be the closest one that meets my requirements. However, would it be possible to upload the source code to any repository? I'm encountering some issues with PHP 8.3 (as we need to stay up to date). Additionally, in the FPM section, you have the line:
This seems to refer to an older version, whereas you use PHP 8.2 elsewhere in the tutorial.
If you are a macOS user, ServBay.dev is worth trying. You don't need to spend hours or a couple of days setting anything up; just download it and you can use it immediately. You can run multiple PHP versions simultaneously and switch between them effortlessly.
Honestly, this tool has greatly simplified my PHP development and is definitely worth trying!
Love this article! But it seems like php-fpm8 runs without creating the php-fpm8.pid file; I see no PID or any content in the file. I tried removing the command that touches the php-fpm8.pid file, and I did not see any php-fpm8.pid file in /run/php. A little difference between Alpine and Ubuntu, right?
Nice article! Thanks
Where would I run the queue? Shall I add it inside the supervisord file?
@bhaidar I didn't come across this use case after I started using containers, but I've given it some thought, and to me it makes sense to start the queue from the supervisord file, mainly because it's a way to centralize everything you want to start with the container (see the sketch below).
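For illustration (a hypothetical entry of mine, not from the article; the path and options are assumptions), such a program block could look like:

; Laravel queue worker managed by supervisord alongside nginx/php-fpm.
[program:laravel-queue]
command=php /var/www/html/artisan queue:work --tries=3
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0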
I have followed the instructions, but I'm facing this nginx issue. Can anyone help?
@pranta-saha would you try again? I noticed that with the most recent Alpine release a few paths and binaries became php82 instead of just php8, especially in the php-fpm.conf and supervisord.ini files.
@jackmiras Thanks for your quick response. Really appreciate it.
I have edited the php-fpm.conf and supervisord.ini files accordingly but am getting the same error. I have installed Sail in an existing Laravel web app which uses nginx by default; maybe something is conflicting with that.
Hello @pranta-saha,
Maybe the way you are running the container image is influencing the result? I've tried running it locally using docker run -p 80:80 laravel-scaffold:latest, then accessed http://localhost:80 in the browser, and the application loaded successfully.
How are you running the container you've built?
I have fixed the problem. I had to comment out "include /etc/nginx/http.d/*.conf" in the nginx.conf file. Now it's working.
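As a sketch of the described change (my reconstruction; the server block shown is an assumption based on the article's setup):

http {
    # Commented out so the distro's default vhosts in http.d no
    # longer conflict with the server block defined in nginx.conf.
    # include /etc/nginx/http.d/*.conf;

    server {
        listen 80;
        root /var/www/html/public;
        index index.php;
    }
}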
Does this version include the nginx rewrite module?