First Step
Let's say we have one nginx container and one PHP container, and we want to deploy new code to the PHP container without any downtime.
Our project structure looks like this:
.
├── app
│   └── index.php
├── docker-compose.yml
├── nginx.Dockerfile
├── nginx.conf
└── php.Dockerfile
This is our docker-compose.yml:
version: "3.7"
services:
  cicd-nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    ports:
      - "88:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  cicd-php:
    build:
      context: .
      dockerfile: php.Dockerfile
This is our nginx.Dockerfile. Note that we copy the app into the nginx image too, so try_files can check that the requested files exist; the PHP code itself runs in the php-fpm container, which gets its own copy.
FROM nginx:1.17-alpine
WORKDIR /app
COPY ./app ./
This is our nginx.conf:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /app;
    server_name localhost;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass cicd-php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
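If you want to sanity-check this configuration, one option (a small sketch; it has to run inside the nginx container once the stack below is up, because the cicd-php hostname only resolves on the compose network) is nginx's built-in config test:
docker-compose exec cicd-nginx nginx -t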
This is our simple php.Dockerfile:
FROM php:8.1-fpm-alpine
WORKDIR /app
COPY ./app ./
And finally, this is our simple index.php:
<?php
echo "hi"
?>
Let's move on to the interesting part.
First, we replace build with image, because we want to build the image first and then simply swap the old one out for the new one, so our docker-compose.yml becomes:
version: "3.7"
services:
  cicd-nginx:
    image: ${NGINX_IMAGE_TAG}
    ports:
      - "88:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  cicd-php:
    image: ${PHP_IMAGE_TAG}
First, set the variables and build the images:
export PHP_IMAGE_TAG=phptest:1
export NGINX_IMAGE_TAG=nginxtest:1
docker build -f php.Dockerfile -t $PHP_IMAGE_TAG .
docker build -f nginx.Dockerfile -t $NGINX_IMAGE_TAG .
Then we just run docker-compose up -d.
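A minimal sketch of bringing the stack up and checking that it responds (the port and the expected "hi" come from the files above; the two exports from the previous step must still be set in this shell so compose can resolve the image names):
docker-compose up -d
docker-compose ps        # both services should be "Up"
curl 127.0.0.1:88        # should print "hi"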
Now I want to show you what happens when you deploy new code the naive way.
First, we prepare a small bash script that polls the endpoint (./script.sh):
#!/bin/bash
while :
do
echo "Your response : "
curl 127.0.0.1:88
echo " "
date
sleep 2
done
Now we change the PHP file to something like this:
<?php
echo "hi hi :)"
?>
Then we build it with a new tag:
export PHP_IMAGE_TAG=phptest:2
docker build -f php.Dockerfile -t $PHP_IMAGE_TAG .
Now we run the script in one terminal:
sh ./script.sh
And in another terminal, we update the containers:
docker-compose down
docker-compose up -d
Result
You will see something like this, and that is the problem:
...
Your response :
hi
Mon Oct 31 00:31:23 UTC 2022
Your response :
hi
Mon Oct 31 00:31:25 UTC 2022
Your response :
hi
Mon Oct 31 00:31:27 UTC 2022
Your response :
hi
Mon Oct 31 00:31:30 UTC 2022
Your response :
curl: (7) Failed to connect to localhost port 88: Connection refused
Mon Oct 31 00:31:32 UTC 2022
Your response :
curl: (7) Failed to connect to localhost port 88: Connection refused
Mon Oct 31 00:31:34 UTC 2022
Your response :
curl: (7) Failed to connect to localhost port 88: Connection refused
Mon Oct 31 00:31:36 UTC 2022
Your response :
curl: (7) Failed to connect to localhost port 88: Connection refused
Mon Oct 31 00:31:38 UTC 2022
Your response :
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
Mon Oct 31 00:31:40 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:31:42 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:31:44 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:31:46 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:31:48 UTC 2022
...
You can see the problem: while docker-compose down and up recreate the containers, the site is unreachable for several seconds. Let's fix it.
We are going to use Docker Swarm.
First, initialize it:
docker swarm init
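If you want to confirm the node is now a swarm manager, an optional quick check is:
docker info --format '{{.Swarm.LocalNodeState}}'   # should print "active"
docker node ls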
Then update your docker-compose.yml like this:
version: "3.7"
services:
  cicd-nginx:
    image: ${NGINX_IMAGE_TAG}
    ports:
      - "88:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
  cicd-php:
    image: ${PHP_IMAGE_TAG}
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        order: start-first
        failure_action: rollback
        delay: 5s
Then run this command and your stack is up:
docker stack deploy -c docker-compose.yml <stack_name>
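Besides docker ps, you can also inspect the stack with the swarm-level commands (replace <stack_name> with whatever name you chose, e.g. website):
docker stack services <stack_name>   # one line per service, with replica counts
docker stack ps <stack_name>         # one line per running task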
Let's look at the containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6cc2b3516a3a phptest:2 "docker-php-entrypoi…" About a minute ago Up About a minute 9000/tcp website_cicd-php.1.ijd0lpj995318hpffx6gx5t44
7cffa4115598 phptest:2 "docker-php-entrypoi…" About a minute ago Up About a minute 9000/tcp website_cicd-php.2.jbd2sg2qm6ounji76q99fk2ql
fcedfb42b44a nginxtest:1 "nginx -g 'daemon of…" About a minute ago Up About a minute 80/tcp website_cicd-nginx.2.8z4spuco9rtdxcpc8y0fux79p
99e1642cf461 nginxtest:1 "nginx -g 'daemon of…" About a minute ago Up About a minute 80/tcp website_cicd-nginx.1.eo6erxose7asze8ref2poracc
We have 2 PHP containers and 2 nginx containers :)
Let's update the PHP service and see the result.
Change the PHP file and add one more "hi":
<?php
echo "hi hi hi :)"
?>
Then we build the PHP image with a new tag:
export PHP_IMAGE_TAG=phptest:3
docker build -f php.Dockerfile -t $PHP_IMAGE_TAG .
Now we run the script in one terminal:
sh ./script.sh
And in another terminal, we redeploy the stack:
docker stack deploy -c docker-compose.yml <stack_name>
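If you want to watch the rolling update as it happens, keep an eye on the PHP service's tasks in a third terminal (the service name is <stack_name>_cicd-php, so with a stack called website):
docker service ps website_cicd-php
During the update you should briefly see tasks for both phptest:2 and phptest:3, because order: start-first starts the new task before shutting the old one down.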
Result
...
Your response :
hi hi :)
Mon Oct 31 00:53:53 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:53:55 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:53:57 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:53:59 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:54:01 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:03 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:54:05 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:07 UTC 2022
Your response :
hi hi :)
Mon Oct 31 00:54:09 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:11 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:13 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:15 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:17 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:19 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:21 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:23 UTC 2022
Your response :
hi hi hi:)
Mon Oct 31 00:54:25 UTC 2022
...
And that is the magic: there is no downtime at all, and Docker Swarm handles it completely for us. Because of order: start-first, Swarm starts the new task before stopping the old one, so nginx always has a healthy PHP backend to send requests to, and failure_action: rollback reverts the update if the new tasks fail to start.
Now let's create a .gitlab-ci.yml for it.
Step one
Create a project in GitLab.
Step two
Initialize git in the project, then push it (the CI below deploys from the master branch):
git init
git add .
git commit -m "init"
git remote add origin git@gitlab.com:azibom/cicd.git
git push -u origin master
Step three
Add a .gitlab-ci.yml to the project:
image: docker:20.10.16

stages:
  - publish
  - deploy

variables:
  PHP_IMAGE_TAG: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:php-$CI_COMMIT_SHORT_SHA
  NGINX_IMAGE_TAG: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:nginx-$CI_COMMIT_SHORT_SHA
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:20.10.16-dind

before_script:
  - 'command -v ssh-agent >/dev/null || ( apk add --update openssh )'
  - eval $(ssh-agent -s)
  - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh

publish:
  stage: publish
  script:
    - docker build -f php.Dockerfile -t $PHP_IMAGE_TAG .
    - docker build -f nginx.Dockerfile -t $NGINX_IMAGE_TAG .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $PHP_IMAGE_TAG
    - docker push $NGINX_IMAGE_TAG

deploy:
  image: alpine:latest
  stage: deploy
  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "cd $PROJECT_DIR && docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY && docker pull $NGINX_IMAGE_TAG && docker pull $PHP_IMAGE_TAG && export PHP_IMAGE_TAG=$PHP_IMAGE_TAG && export NGINX_IMAGE_TAG=$NGINX_IMAGE_TAG && docker stack deploy --with-registry-auth -c docker-compose.yml website"
  only:
    - master
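Before wiring this into CI, it can help to verify the SSH step by hand. A rough sketch from your own machine, using the same user and IP you will later put into the CI variables:
ssh $SERVER_USER@$SERVER_IP "docker info --format '{{.Swarm.LocalNodeState}}'"
If this prints "active", the deploy job should be able to reach the swarm manager the same way.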
Step four
Add some variables to GitLab CI from the project settings (Settings > CI/CD > Variables):
PROJECT_DIR, SERVER_IP, SERVER_USER, SSH_PRIVATE_KEY
You can get the value for SSH_PRIVATE_KEY by running this command:
cat ~/.ssh/id_rsa
Also run this on the server, so the matching public key is authorized (>> appends instead of overwriting any existing keys):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
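Instead of reusing your personal ~/.ssh/id_rsa, you may prefer to generate a dedicated key pair just for deployments (a sketch; the file name gitlab_deploy is arbitrary):
ssh-keygen -t ed25519 -f ~/.ssh/gitlab_deploy -N ""
cat ~/.ssh/gitlab_deploy.pub >> ~/.ssh/authorized_keys   # on the server
cat ~/.ssh/gitlab_deploy                                 # this is the value for SSH_PRIVATE_KEY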
Step five
Set up your own runner:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh > script.deb.sh
sudo bash script.deb.sh
sudo apt install gitlab-runner
systemctl status gitlab-runner
sudo gitlab-runner register -n \
--url https://gitlab.com/ \
--registration-token REGISTRATION_TOKEN \
--executor docker \
--description "My Docker Runner" \
--docker-image "docker:20.10.16" \
--docker-privileged \
--docker-volumes "/certs/client"
(You can get REGISTRATION_TOKEN from your project's Settings > CI/CD > Runners page on gitlab.com.)
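After registering, you can check that the runner is connected with the standard gitlab-runner subcommands:
sudo gitlab-runner list     # shows the runners registered on this machine
sudo gitlab-runner verify   # checks that they can authenticate with GitLab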
Step six
Push your code to GitLab and watch the pipeline.
Step seven: Done!
Best wishes
Comments
Nice article. Swarm is a very simple orchestrator and people overlook it. For smaller deployments I would prefer Swarm over Kubernetes. At Convenia we have a microservices application built in PHP and running 100% on Swarm.
Thank you for your comment,
we share the same opinion :)
That is a nice write-up.
Since this is a PHP application, let me introduce you to "git pull":
stackoverflow.com/questions/231360...
[Note: to keep it simple, keep the credentials in the code, have it run in a non-web-accessible directory, and at the end add another one-line command to copy the files from there to the live directory.]
I'm being sarcastic, just poking a bit of fun at the complexity of some deployment processes.
There's nothing wrong with what you're doing. I just prefer to opt for simpler processes when it comes to PHP applications.
Simpler is always better, but container deployment brings a bunch of possibilities. For a simple blog I would consider a git pull; for a larger system built in PHP I would consider the Swarm approach.
Thank you for your comment.
But I have a question: does the method you are describing work for big projects?
There's no reason it can't. I'd actually opt to use GitHub Actions so you can push to a production branch to deploy to the live instance.
If you need to do it on a very large system with load balancing and auto scaling, you could set it up on AWS, deploy to EFS (Elastic File System), and have that mounted onto the auto-scaled instances so they all use the same files.
If it's an application that builds into static files, you could again use GitHub Actions to run the prod build and deploy it to a static host like DigitalOcean Apps or Netlify.
I've been doing web dev for over 16 years and have never once dealt with a project where container instances solved anything. At first they were nice on some projects, but long term they were a pain to work with, especially when something breaks, as you have no visibility inside the container to go in and debug it.
You essentially have to debug locally, blindly in a way, do a new build and hope it works on prod too.
Some will disagree, but that's their choice. I just know what I've experienced, and I see zero need for what I call over-automation.