This guide describes how to deploy a static website to a $4 Droplet at DigitalOcean. We will be using Nginx to serve our website and Certbot to manage TLS certificates issued by Let's Encrypt. Finally, we set up GitHub Actions to automate the deployment of the website.
Easier alternatives to deploy and host static websites are available of course, most notably Cloudflare Pages, Netlify, Vercel and Render. But sometimes you want to have close control over your web server, or you don't want these parties to manage your DNS, which is usually required. In that case, managing your own server is a great solution. I like DigitalOcean for hosting my virtual machines (called Droplets), but with a little imagination you can apply this guide to any other provider of virtual machines.
Prerequisites
- DigitalOcean account.
- GitHub account, if you want to automatically deploy your website using GitHub Actions.
- SSH client, which is usually already available in your OS.
- Domain name and access to DNS.
Setup Droplet
SSH key pair
Droplets can be accessed using SSH. Before we create a Droplet in the DigitalOcean Control Panel, we need to make sure we have a valid SSH key pair installed on our local machine and upload the public key to DigitalOcean.
It is also possible to access the Droplet using username and password, but this is not recommended; access via SSH is much more secure.
Use this article to create an SSH key-pair if you don't already have one and make sure it is added to your SSH agent (which is described in the same article). For this guide I use an SSH key-pair without a passphrase.
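If you prefer a quick summary, a minimal example of creating a key pair and adding it to the agent looks like this (the key name is just a placeholder, pick your own):
# Create an Ed25519 key pair without a passphrase and load it into the SSH agent
ssh-keygen -t ed25519 -N "" -f ~/.ssh/name-of-your-key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/name-of-your-key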
Log in to the DigitalOcean Control Panel and go to Settings -> Security:
Use the Add SSH Key button:
You need to add the public key of your SSH key-pair. You can get the content of the public key like this in your terminal:
cat ~/.ssh/name-of-your-key.pub
Copy-paste the output into the Public Key field and give the key a proper name, so you can identify it later on.
Create Droplet
Now we are ready to create a new Droplet using the 'Create' button in the header of the Control Panel. Select the following properties:
- Region and datacenter: Select the region and datacenter that's nearest to your customers
- Use the default VPC
- Image: Ubuntu latest version
- Droplet type: Basic
- CPU options:
- Regular, disk type SSD
- $4 per month instance (scroll to the left to make it visible)
- Authentication method: SSH Key
- Pick the SSH key you have just uploaded
- Add improved metrics monitoring and alerting (it's free)
- Advanced options (expand to reveal the IPv6 option)
- Enable IPv6 (doing this later requires manual configuration)
- Quantity: 1 Droplet
- Specify a proper host name so you can easily identify the Droplet in your Control Panel
Use the 'Create Droplet' button to start the provisioning of the Droplet.
Reserved IP
When the Droplet is created, it is assigned an IPv4 address automatically. In case our Droplet becomes unstable and we want to create a new one, we want to have a fixed IPv4 address so we don't have to update our DNS.
DigitalOcean offers this service in the form of Reserved IP addresses. This is a free service as long as your reserved IP is assigned to a Droplet. Due to the shortage of IPv4 addresses, DigitalOcean wants to prevent holding on to unused IPv4 addresses, and makes you pay for a reserved IP when you don't use it.
Visit your Droplet in the Control Panel and select enable now next to the Reserved IP label. Follow the instructions. Once the reserved IP is created, we can use it to update the DNS.
Reserved IPv6 addresses are not supported by DigitalOcean; if the Droplet's IPv6 address changes, we need to update the DNS for IPv6 manually.
Update DNS
Visit your DNS provider and add the following records to the DNS (you can skip the records for the www sub-domain if you want, especially if you are deploying to a sub-domain instead of a root domain):
- A record for your-domain pointing to the reserved IPv4 address
- A record for www.your-domain pointing to the reserved IPv4 address
- AAAA record for your-domain pointing to the IPv6 address
- AAAA record for www.your-domain pointing to the IPv6 address
The A records are for IPv4 traffic, which will be the bulk of your visitors; the AAAA records are for IPv6 traffic.
You could also add CAA records to make sure only Let's Encrypt is allowed to issue certificates for this domain. I'll skip this for now, because the way you add CAA records differs per DNS provider, and I don't want you to get stuck later in this guide when we set up a certificate due to incorrect CAA records.
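Once you have added the records, you can check from your local machine whether they have propagated. dig is available on most systems; replace your-domain with your actual domain:
dig +short A your-domain
dig +short AAAA your-domain
Both commands should print the addresses you configured above.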
Access Droplet using SSH
To access your droplet, you can now use SSH to connect:
ssh root@your-reserved-ip-address
or when the DNS changes are propagated:
ssh root@your-domain
Firewall
After accessing the Droplet, the first thing we need to do is enable the firewall. We could use DigitalOcean's firewall, but we'll be using the UFW firewall that is installed with Ubuntu. It's not recommended to use both.
Before we enable the firewall, we need to allow OpenSSH access, otherwise we lock ourselves out:
ufw allow OpenSSH
Now we can enable the firewall:
ufw enable
Unattended upgrades
Make sure unattended-upgrades is enabled in order to automatically retrieve and install security patches and other essential upgrades for your server. You can check its status like this:
systemctl status unattended-upgrades.service
If you want to allow reboots after updates, you need to edit the configuration file of the unattended-upgrades service to enable reboots when required. This is usually the case when the kernel is updated:
nano /etc/apt/apt.conf.d/50unattended-upgrades
Find the line Unattended-Upgrade::Automatic-Reboot, uncomment it and set it to true:
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";
Save and close the file by pressing Ctrl+X to exit, then Y when prompted to save, and hit Enter.
Restart the unattended-upgrades service:
systemctl restart unattended-upgrades.service
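If you want to see what unattended-upgrades would do without actually changing anything, the package ships with a dry-run mode:
unattended-upgrade --dry-run --debug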
More information on the Unattended Upgrades Service: https://linux-audit.com/using-unattended-upgrades-on-debian-and-ubuntu/
Install and configure Nginx
Install
Make sure you are connected to your Droplet via SSH and use the following commands to install Nginx:
apt update
apt install nginx
Update firewall
We need to allow Nginx to pass through the firewall. Applications can register their profiles with UFW upon installation. These profiles allow UFW to manage these applications by name. Nginx also registers its profile upon installation. You can list the available applications like this:
ufw app list
We will allow the Nginx Full profile, which covers both HTTP and HTTPS connections:
ufw allow 'Nginx Full'
Use the following command to view the current status of the firewall:
ufw status
The status should show that OpenSSH and Nginx Full are allowed on the firewall for both IPv4 and IPv6 addresses.
If you browse to http://your-domain you should see the Nginx welcome message.
Sometimes the browser forces you to use HTTPS, and you cannot view the website yet. Don't worry: once the certificate is installed, it will work. In the meantime you can use curl to view the HTML:
curl your-domain
Configuration
With the Nginx web server, server blocks can be used to host more than one domain from a single server.
We leave the default server block in place to be served if a client request does not match any other site. We will set up a new server block for our domain.
Make sure you are logged in to your server and create a folder for your domain which will contain the website's content:
mkdir -p /var/www/your-domain/html
Create a sample index.html file:
nano /var/www/your-domain/html/index.html
Add some content:
<html>
<head>
<title>Yes</title>
</head>
<body>
<p>It's working</p>
</body>
</html>
Save and close the file by pressing Ctrl+X to exit, then Y when prompted to save, and hit Enter.
In order for Nginx to serve this content, we also need to create a configuration file containing the server block:
nano /etc/nginx/sites-available/your-domain
Configuration is almost the same as the default server block, except for the root directory and the server name:
server {
    listen 80;
    listen [::]:80;

    root /var/www/your-domain/html;
    index index.html;

    server_name your-domain www.your-domain;

    location / {
        try_files $uri $uri/ =404;
    }
}
Don't forget to update the configuration with the correct domain name. I've included a variant with the www subdomain; if you are not using the www subdomain, you can leave it out of the configuration file.
Enable the server block by creating a symlink of the configuration file to the sites-enabled directory, which Nginx reads from during startup:
ln -s /etc/nginx/sites-available/your-domain /etc/nginx/sites-enabled/
Nginx uses a common practice called symbolic links, or symlinks, to track which of your server blocks are enabled. Creating a symlink is like creating a shortcut on disk: you can later delete the symlink from the sites-enabled directory to disable the site, while keeping the server block in sites-available in case you want to enable it again.
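Before restarting, it's a good habit to let Nginx validate the configuration, so a typo doesn't take the web server down:
nginx -t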
Now restart Nginx to apply the new configuration:
systemctl restart nginx
Check http://your-domain with your browser or curl; you should see the 'It's working' message.
Certbot
We haven't installed a TLS certificate yet. We'll be using Certbot to provision a certificate from Let's Encrypt.
Install Certbot
We install Certbot using snap, which is the recommended way; snap automatically updates Certbot. Source: https://certbot.eff.org/.
Log in to your server and install Certbot:
snap install --classic certbot
Execute the following instruction in the terminal of the server to ensure that the certbot command can be run:
ln -s /snap/bin/certbot /usr/bin/certbot
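You can verify the installation by printing the version:
certbot --version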
Provision certificate
Certbot needs to be able to find the correct server block in our Nginx configuration to automatically configure TLS. Specifically, it does this by looking for a server_name directive that matches the domain you request a certificate for. The server_name is already defined in our server configuration file as your-domain and www.your-domain.
Use Certbot to provision the certificate (skip the www domain if you are not using it):
certbot --nginx -d your-domain -d www.your-domain
Enter the email address at which you want to receive urgent renewal and security notices from Let's Encrypt. Accept the terms, and the provisioning of the certificate starts.
Visit https://your-domain in the browser, you should be able to connect via HTTPS now.
You can test the certificate at https://www.ssllabs.com/ssltest/. You should receive an A grade, which is nice for a change.
Auto renew cert
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The Certbot packages on your system come with a cron job or systemd timer that will renew your certificates automatically before they expire. You will not need to run Certbot again, unless you change your configuration. You can test automatic renewal for your certificates by running this command:
certbot renew --dry-run
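If you want to see the renewal schedule for yourself, you can list the systemd timers on the server; with the snap installation a certbot timer should show up (the exact timer name can differ per installation method):
systemctl list-timers | grep certbot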
Security headers
To tighten security we can add some response headers to our server configuration file. We will add the following headers:
- Strict-Transport-Security: Forces a web browser to connect directly via HTTPS when revisiting your website. This helps prevent man-in-the-middle attacks. Also known as HSTS.
- X-Content-Type-Options: Prevents MIME sniffing
- Referrer-Policy: Controls how much referrer information (sent with the Referer header) should be included with requests.
- X-Frame-Options: Used to indicate whether a browser should be allowed to render a page in a frame.
- Content-Security-Policy: Helps to detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks.
Edit the server configuration file for your domain:
nano /etc/nginx/sites-available/your-domain
You may notice the config has been altered by Certbot to allow for HTTPS traffic.
Add the following headers to the server block where the root is specified; you can add them after the location block:
add_header Strict-Transport-Security "max-age=31536000" always;
add_header X-Content-Type-Options "nosniff";
add_header Referrer-Policy "same-origin";
add_header X-Frame-Options "DENY";
add_header Content-Security-Policy "default-src 'self'; base-uri 'none'; frame-ancestors 'none'; form-action 'none';";
Restart Nginx for the changes to take effect:
systemctl restart nginx
Use a tool like internet.nl to test the configuration of your web server for security issues.
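You can also inspect the response headers directly from your local machine with curl; the headers we just added should appear in the output:
curl -I https://your-domain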
With a real website, you will probably have to alter the Content-Security-Policy (CSP) to allow for loading assets from certain external sources.
Content
Now that the web server is up-and-running and correctly configured, you can copy the content of your static site from your local machine. We will automate this later with GitHub Actions, but for now we do it manually.
Let's create an example website on your local machine. Start by creating a folder which will contain the content. Make sure you are working on your local machine; exit the SSH session if you are still logged in to the server.
mkdir my-awesome-website
Create the index.html file that will serve as the welcome page:
touch ./my-awesome-website/index.html
Add the following content to your index.html:
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My awesome website</title>
</head>
<body>
<h1>This is where the magic happens</h1>
</body>
</html>
Our website is ready, we can copy the content from the folder to the server like this (run this command from your local machine, not when logged into your server):
rsync -a ./my-awesome-website/ root@your-domain:/var/www/your-domain/html
If you visit your domain in the browser, you should see your site now.
Instead of using the example website, you can point rsync to the folder with your own content to send it to the server. Make sure you have all the content you want to deploy in one folder. Most static site generators have an output folder which you can use.
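For example, if your static site generator writes its output to a folder called public (this folder name is just an assumption, check your generator's documentation), the command would look like this:
rsync -a ./public/ root@your-domain:/var/www/your-domain/html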
Custom error pages
Nginx includes some default error pages, but usually you want to serve your own. First create a custom error page on your local machine:
touch ./my-awesome-website/404.html
Add some content:
<h1>404 - You are definitely in the wrong place</h1>
Update your website with the same rsync command we used before:
rsync -a ./my-awesome-website/ root@your-domain:/var/www/your-domain/html
For Nginx to use your custom error page, you need to change the server configuration of your domain to point to the 404 page in your content. Make sure you are logged in to the server and edit the configuration:
nano /etc/nginx/sites-available/your-domain
Add the following to the server configuration in the same block as the security headers (don't forget to use your own domain):
error_page 404 /404.html;

location = /404.html {
    root /var/www/your-domain/html;
    internal;
}
Restart Nginx:
systemctl restart nginx
The 404 page should be showing now. You can repeat this process for other status codes, like 500 - Internal Server Error.
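You can verify the custom page by requesting any path that doesn't exist; the response should have status 404 and contain your own HTML:
curl -i https://your-domain/does-not-exist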
GitHub Actions
Instead of copying the content manually every time we want to update the site, it's better to automate the deployment. We can use GitHub Actions for this, if your code is hosted at GitHub.
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate all kinds of software tasks, like building and deploying your website. You can create workflows that are triggered whenever you push a change to your repository.
First, we will add our code to GitHub. If you already host your code at GitHub, you can skip this part. We will add our private SSH key to the repository secrets to enable GitHub Actions to access your server. Finally, we will create a workflow that deploys our website.
Create repository
Create a public or private repository at https://github.com. Decide if you want to add a .gitignore, license or README.md file; it doesn't matter for the example website.
The default branch of the GitHub repository is called main. The name of the default branch in the local repository should be the same. Git uses master by default, but you can override this. From within your website's folder, initialize the local Git repository with the following command to use main as the default branch name:
cd my-awesome-website
git init --initial-branch=main
Now, you can add the remote repository to your local repository:
git remote add origin git@github.com:robinvanderknaap/my-awesome-website.git
Make sure to use the git URL of your own repository; you can't use mine :)
Pull the files from the remote repository if you added a README.md, license or .gitignore file during the creation of your GitHub repository:
git pull origin main
Now commit the changes to the local repository:
git add .
git commit -m "My awesome website"
And push the changes to the remote repository and set the upstream branch:
git push --set-upstream origin main
Add private SSH key as repository secret
For GitHub to deploy your site, it needs access to the private key of the SSH key-pair we created at the beginning of this guide. We will store the key in a repository secret called SSH_KEY_DEPLOY.
Browse to your repository at GitHub and navigate to the repository settings. Select Secrets and variables -> Actions.
Use the New repository secret button to create a new secret called SSH_KEY_DEPLOY. Paste the contents of your private key into the secret field.
You can get the content of your private key like this (don't accidentally use the public key with the .pub extension):
cat ~/.ssh/name-of-your-key
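If the deployment fails later with authentication errors, it helps to first confirm that this key actually grants access to the Droplet:
ssh -i ~/.ssh/name-of-your-key root@your-domain 'echo key works'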
Create workflow file for deployment
GitHub Actions workflows are triggered by workflow files you add to your repository. Create the folder that will contain them:
mkdir -p ./.github/workflows
Workflows are declared using YAML files:
touch ./.github/workflows/deploy.yaml
Add the following workflow:
on:
  push:
    branches:
      - main

name: Deploy website

jobs:
  web-deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Get latest code
        uses: actions/checkout@v4

      # Uncomment if you need Node.js to build your site
      # - name: Use Node.js
      #   uses: actions/setup-node@v2
      #   with:
      #     node-version: '20'

      - name: Build Project
        run: |
          # Add steps here to build your website
          # npm install
          # npm run build
          mkdir ./public
          mv index.html ./public
          mv 404.html ./public

      - name: Rsync
        uses: burnett01/rsync-deployments@7.0.1
        with:
          switches: -avzr --delete
          path: public/
          remote_path: /var/www/your-domain/html
          remote_host: your-domain
          remote_user: root
          remote_key: ${{ secrets.SSH_KEY_DEPLOY }}
This is a pretty straightforward workflow. It is triggered only when commits are pushed to the main branch, and it contains the following steps:
- Pull code from the repository
- Build the website. You can add commands here to build your site. If you need Node.js, uncomment the Use Node.js step. For the example website, we just copy our two HTML files to a public folder.
- A plugin is used to rsync the contents to our server:
  - Make sure to use the correct path from which to deploy content
  - Don't forget to specify the correct domain in the remote_path and remote_host properties
  - Notice that the SSH key is retrieved from the repository secrets
The workflow is triggered every time you push a commit to the main branch of the remote repository, and the website will automatically be deployed.
Commit and push the workflow file:
git add .
git commit -m "Added deploy workflow"
git push
You can view the progress of your deployment in the Actions tab of your repository at GitHub.
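If you prefer the terminal and have the GitHub CLI installed (this assumes gh is set up for your repository), you can follow the runs from there as well:
gh run list --workflow=deploy.yaml
gh run watch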
That's it! Happy coding!