There are many tools and services on the internet for building this kind of server. Almost all of them are easy to use with front-end frameworks such as React, Angular, and Astro, and there are others for back-end frameworks as well. However, this tutorial is useful for learning how a server works, or for practicing your DevOps skills.
Besides that, these tools are fine when you are starting out or practicing. But if you are working on many projects, they quickly become expensive. In those cases, it is better to build your own cloud and configure it to fit your own needs.
The small server’s specifications:
- A small cloud server for deploying different sites or services: APIs, websites, commands, etc.
- Projects can be developed using any technology and language (PHP, Python, JavaScript, HTML) and any framework (React, Laravel, Django, WordPress, etc.).
- With or without any kind of database.
- With multiple domains or subdomains.
- Connecting to any other AWS service like S3, RDS (database), CloudFront (to make a CDN), SES (to send notifications by email), etc.
This cloud will be able to host from 5 to 10 different projects, depending on the projects, of course.
Requirements:
- An AWS account.
- At least one purchased domain.
- At least one developed project.
- Docker, and an account on GitHub, Bitbucket, or GitLab.
Step by step:
First: Dockerize the project.
Docker containers can be used for any project. It doesn’t matter what language or framework is used. However, each framework or project has a different method to dockerize it.
For example, a React project needs a web server to run it, because when the project is built (npm run build), it becomes a static HTML site. That means the whole project will be one index.html file with a bunch of images, CSS, and two or three JS bundles. This is how ReactJS works, and that is why you need a web server to serve this HTML when the browser asks for it.
Therefore, in the case of a ReactJS project, you need to add an Apache or NGINX server to the container.
To dockerize any project, only a Dockerfile is needed. This is a file named “Dockerfile” (without an extension) which contains a list of instructions. It tells Docker how to build the image and what to do when the container runs.
For example: take a Node.js image, copy “package.json”, install dependencies, build the React app, etc., then configure the NGINX server and expose it on port 80.
The Dockerfile must be placed in the root of the project, next to “package.json”.
This is an example of a Dockerfile with Nginx and ReactJS.
### STAGE 1: Build ###
FROM node:20 AS build
WORKDIR /usr/src/app
# make local binaries (react-scripts, etc.) available on the PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# copy package.json and install dependencies first, so Docker can cache this layer
COPY package.json /usr/src/app/package.json
RUN npm install --silent
# copy the source code and build the static site
COPY . /usr/src/app
RUN npm run build
### STAGE 2: Production Environment ###
FROM nginx:1.21.0-alpine
# copy only the built files from the previous stage
COPY --from=build /usr/src/app/build /usr/share/nginx/html
# replace the default NGINX config with our own
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Besides this Dockerfile, NGINX requires a file named “nginx.conf” for its configuration. Create a folder named “nginx” and put this file inside:
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        # to redirect all the requests to index.html,
        # useful when you are using react-router
        try_files $uri /index.html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Before going to AWS or any server, it is good practice to test everything locally. It is possible to build the image and run the container on a local machine:
#building the docker image
docker build -t myimage .
#running the container
docker run -t -d -i --name=mycontainer -p 8000:80 myimage
If everything goes well, the site will run on http://localhost:8000 in any web browser.
Every time you run a container, you must specify which port it will run on. Therefore, it is possible to have many containers running, each of them on a different port: 8000, 8001, 8002, etc. This is useful because it allows the same server to host many projects.
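To see which ports are already taken, docker ps lists the running containers with their port mappings; a quick sketch (the format string just trims the output to the relevant columns):
#list running containers and their port mappings
docker ps --format "table {{.Names}}\t{{.Ports}}"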
Second: Build and configure the machine.
The next step is to create an EC2 machine in AWS.
AWS has many tools, which it calls services. An EC2 instance is a virtual machine created with the hardware specifications you want and the software you need. AWS has many pre-built images: Ubuntu, CentOS, Alpine, or even Windows.
Note: Be careful, because the hardware and software you choose determine the price you pay. My advice is to start with the smallest one and go higher only if you need it.
For example, I am going to use a small one (a nano instance); it will handle 5 to 10 small sites without any problem and is enough for this tutorial’s purpose.
I found this good step-by-step tutorial written by Abhishek Pathak which guides you on how to run and connect your first EC2 if you don’t know how to do it: https://dev.to/scorcism/create-ec2-instance-1d59
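If you prefer the command line over the console, here is a minimal sketch using the AWS CLI; the AMI ID, key pair name, and security group below are placeholders you must replace with your own values:
#launch a nano instance (all the IDs below are placeholders)
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3.nano \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx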
Third: Deploy the code into the EC2 machine.
There are three ways to deploy the project code to any EC2 machine:
- Uploading files over SSH, if the EC2 machine has those connections open.
- Pulling a Docker image already built.
- Cloning and pulling a git repository to the EC2 machine.
Uploading files over SSH is the first choice, but it is not very comfortable. For pulling and pushing Docker images, a Docker registry is needed. A registry is an online repository like GitHub, but for Docker images. AWS has a registry service called ECR, which can be used to upload images and pull them onto your running EC2.
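As a hedged sketch, pushing an image to ECR looks roughly like this; the account ID, region, and repository name are placeholders, and the ECR repository must already exist:
#authenticate Docker against ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
#tag the local image with the repository URI and push it
docker tag myimage 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest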
However, it is also possible to clone a git repository on the EC2 and then build the Docker image to run the container. This is the easiest way, and it is useful as well, because sometimes the developer working on the code is not the same person deploying the site. Besides, using GitHub, Bitbucket, or GitLab is regular practice, so the code is usually already uploaded there.
It is as easy as it sounds:
- Clone the repository
- Build the Docker image
- Run the Docker container
However, the EC2 machine needs to be prepared first, because it doesn’t come with Docker, Git, or NGINX installed.
Connecting to your EC2:
#connecting to EC2
ssh -i /path/of/key/downloaded ubuntu@public.ip.address
# this command gives you full server access
sudo su
Installing Docker:
#update your repositories
apt-get update
#install Docker
apt-get install docker.io -y
#start the Docker daemon (note the service name is lowercase)
systemctl start docker
#check if everything is fine
docker --version
#enable Docker to start automatically when the instance starts
systemctl enable docker
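Optionally, a quick sanity check that the daemon can pull and run images:
#run Docker's official test image
docker run hello-world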
And installing git:
# install git
apt install git -y
#check if everything is fine
git --version
#configure git
git config --global user.name "Your Name"
git config --global user.email "youremail@domain.com"
With Git and Docker installed, the EC2 machine is ready to receive the repositories.
#clone repository
git clone myrepositoryurl
cd myrepositoryname
#build the image
docker build -t myimage .
#running the container
docker run -t -d -i --name=mycontainer -p 8000:80 myimage
Great!! The first container is now running, but it is possible to run more containers from the same repository, or from any other one as well.
#running another container (each container needs a unique name)
docker run -t -d -i --name=mycontainer2 -p 8001:80 myimage
#running another container
docker run -t -d -i --name=mycontainer3 -p 8002:80 myimage
#running another container
docker run -t -d -i --name=mycontainer4 -p 8003:80 myimage
Notice that the first container is running on port 8000, and the next ones on ports 8001, 8002, and 8003. In this case, all of them run the same image. However, this same pattern is what lets you have a different project on each port.
Look at the running command:
docker run -t -d -i --name=mycontainer4 -p 8003:80 myimage
The “-p” flag connects a port on the EC2 machine with the port exposed by the container. The number on the right is the container port, that is, the one exposed in the Docker image; the number on the left is the port the EC2 machine will use.
#testing the container
curl localhost:8001
This must respond with the index.html file of the ReactJS project created in step one.
The containers were created, and the projects are running. However, if anybody tries to connect from a browser, they will not see anything yet. The next step is to connect domains with each container.
Fourth: Create a balancer.
In this EC2 there will be many projects. Each project is a container running on a different port: the first one on port 8000, the next one on 8001, 8002, etc. Each project can have a different domain (myproject1.com, myproject2.com, etc.) or a different subdomain (subdomain1.projects.com, subdomain2.projects.com).
In both cases, the domains or subdomains must be pointed to the EC2 public IP.
Check again here https://dev.to/scorcism/create-ec2-instance-1d59 (Point 10).
It will be a simple A record pointing to the IP address. However, this alone does not work either, because the containers are not running on port 80 or 443.
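You can check that the record is in place from any machine that has dig installed (replace the domain with your own):
#this should print the EC2 public IP
dig +short myproject1.com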
So, the EC2 needs a balancer that receives the browser requests and redirects each one to the correct running container.
For this purpose, it is possible to install an NGINX server and use it as a reverse proxy. This NGINX will receive all the requests arriving at the EC2 machine and forward them to the correct container.
#installing nginx
apt update
apt install nginx
If everything is fine, opening the public IP in a web browser shows the Welcome-to-Nginx page.
Any domain or subdomain redirected to this IP must show this page as well.
Any time a new project is added, NGINX must be configured to know what to do with it. NGINX has a config file for each project, and it must exist in two places: “/etc/nginx/sites-available” and “/etc/nginx/sites-enabled/” (a copy, or more commonly a symbolic link, as shown in the sketch after the example below). The name of the file is the name of the domain or subdomain, for example: “subdomain.mydomain.com”.
This is a basic example of the file.
server {
    listen 80;
    root /var/www/subdomain.mydomain.com/html;
    index index.html index.htm index.nginx-debian.html;
    server_name subdomain.mydomain.com;
    location / {
        # forward everything to the container running on port 8000
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
    }
}
Look at the file above: the server_name directive tells NGINX that when a request comes for subdomain.mydomain.com, it must be forwarded to http://127.0.0.1:8000, that is, to the container running on port 8000. Each domain or subdomain only needs to be matched with the port of the container running that project.
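As a sketch, creating the file in sites-available and enabling it with a symbolic link (assuming the filename from the example) looks like this:
#create the config file
nano /etc/nginx/sites-available/subdomain.mydomain.com
#enable it with a symlink instead of a second copy
ln -s /etc/nginx/sites-available/subdomain.mydomain.com /etc/nginx/sites-enabled/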
After creating and enabling the files, NGINX needs to be restarted. First, test that the files are valid:
#test the configuration
nginx -t
#restart nginx
systemctl restart nginx
#or reload it without dropping connections
systemctl reload nginx
After restarting NGINX, the project will be exposed to the public, and anybody will be able to reach it from a web browser.
This cloud server can host the 5 to 10 projects mentioned earlier. If a project needs a database, it is possible to connect to an AWS service such as RDS or DocumentDB. However, it is also possible to run another container with the database on the same EC2 machine. The EC2 can also connect with S3 or any other AWS service needed.
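For example, a minimal sketch of running a database container next to the projects, assuming PostgreSQL (the container name and password below are placeholders):
#run a PostgreSQL container on its default port
docker run -d --name mydatabase -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres:16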
The same steps must be repeated to add any new project. The EC2 also has a monitoring section in the AWS console that tells you whether the server is saturated or still has room for more projects.
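You can also check resource usage from inside the machine with a few standard commands:
#CPU and memory usage per container
docker stats --no-stream
#free memory and disk space on the EC2 machine
free -h
df -h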
I hope you like this tutorial. The next step is to add an SSL certificate so you can use https!!! Ask me if you need it. :D
Thanks!