Semaphore 2.0 launched recently with CI/CD pipelines that can be customized for any workflow and a pay-as-you-go pricing model. Whether you've used Semaphore in the past or not, it brings a lot of new things to the table.
In this post we'll get to know the basic features by setting up a build, test and deploy pipeline for a static website. I'd love to hear what kind of project/stack you'd like to see covered in a future article — please let me know in the comments.
What is Semaphore anyway?
Semaphore is a cloud-based automation service for building, testing and deploying software. In other words you use it to implement continuous integration (CI) and continuous delivery (CD) pipelines.
The first version of Semaphore launched back in 2012, dubbed "hosted CI built for speed and simplicity", as an antidote to tools that are complicated to use and require a lot of maintenance. By contrast, Semaphore requires no maintenance and is the fastest cloud-based CI there is.
Semaphore 2.0 takes the concept further by introducing pipelines as code, which make it possible to model any software delivery process without pulling your hair out. It also removes limitations on how many builds you can run and scales automatically to any workload, priced per second of execution, similar to AWS Lambda.
A hello world
Start by signing up with your GitHub account. This will give you access to $20 of free credit every month, which is plenty for small scale projects.
At this point Semaphore will show you three commands to run in your terminal. First, install the sem CLI which you'll use to create projects:
curl https://storage.googleapis.com/sem-cli-releases/get.sh | bash
Connect sem to your fresh organization account:
sem connect ORGANIZATION.semaphoreci.com ACCESS_TOKEN
Finally, run sem init inside a Git repository. The command creates a deploy key and webhook on GitHub, so that Semaphore can access your code as it changes, and generates a pipeline definition file, .semaphore/semaphore.yml.
After you follow the last instruction to git push the file, you should see the pipeline running in your browser. Whew!
Let's unpack what's happening here.
The building blocks of pipelines
In our hello world example, we have one pipeline with four distinct blocks, which run sequentially.
If we wanted to introduce some conditions in the process, for example to run deployment only on the master branch, or to shut down temporary infrastructure if blocks fail, we'd define a promotion. Promotions can be automatic or triggered manually and lead to other pipelines.
In this way, we can chain as many pipelines as we'd like. Generally, each git push triggers a new Semaphore workflow, which contains one or more pipelines.
Now, our code runs inside blocks. A block contains at least one job, which is a sequence of commands. We can define more jobs, and then they'd all run in parallel. Each job is a universe unto itself: a fully isolated VM which spins up in a second and inherits the configuration of its parent block, plus anything extra we feed it. You can run native code, use Docker containers, or change any system package. In our hello world example, the third block contains three parallel jobs.
Defining the build pipeline for our project
The diagram above mentions a few more concepts but we've covered a lot of theoretical ground already, so let's do something hands-on and learn more along the way.
In this demo my goal is to build and deploy a Gatsby.js blog. I'm starting from the gatsby-starter-blog template and the process includes the following:
- Get the code
- Install dependencies
- Build the website
- Deploy the website to S3 on every change in master branch
- Bonus: run some UI tests on live website
Let's open our .semaphore/semaphore.yml and strip it down to the bare bones:
# .semaphore/semaphore.yml
version: v1.0
name: Gatsby build pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: ⏬ Install dependencies
    task:
      jobs:
        - name: npm install
          commands:
            - checkout
            - npm install
Notice the agent property: Semaphore offers several machine types with different amounts of CPU/memory capacity to choose from. This allows you, for example, to use a more powerful machine for heavy integration tests and the smallest machine for deployment work.
At the moment Semaphore provides one VM image, based on Ubuntu 18.04 LTS, with more to come. It'll serve us well for this project.
The first command in our job is checkout. This is a required command whenever you're working with your source code — it downloads the revision associated with the workflow. The command is actually a script which is open source.
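Conceptually, checkout boils down to cloning the repository and switching to the workflow's revision. Here's a simplified shell sketch, not the actual script; the SEMAPHORE_GIT_* variable names reflect my understanding of the environment and should be treated as assumptions:
# Simplified approximation of what checkout does; the real script
# handles more cases (existing directories, retries, etc.):
git clone "$SEMAPHORE_GIT_URL" "$SEMAPHORE_GIT_DIR"
cd "$SEMAPHORE_GIT_DIR"
git checkout "$SEMAPHORE_GIT_SHA"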
Caching dependencies
For demonstration purposes, we'll build our website in a separate block. Let's see something which won't work as intended:
blocks:
  - name: ⏬ Install dependencies
    task:
      jobs:
        - name: npm install
          commands:
            - checkout
            - npm install
  - name: 🧱 Build site
    task:
      jobs:
        - name: build
          commands:
            - checkout
            - npm run build --prefix-paths #EEEK 💥
The npm run command will fail, complaining about missing dependencies. This is because files created in one job, or block, are by default not shared anywhere unless we explicitly make it so.
What we want is to cache the node_modules directory and reuse it across pipelines and blocks. The Semaphore environment provides a cache CLI which can manage shared files on a per-project basis:
blocks:
  - name: ⏬ Install dependencies
    task:
      jobs:
        - name: npm install
          commands:
            - checkout
            # Try to restore node modules from a previous run, first for the
            # current version of package-lock.json, but if that fails get any
            # previous bundle:
            - cache restore node-modules-$(checksum package-lock.json),node-modules-
            - npm install
            # Store new content in cache:
            - cache store node-modules-$(checksum package-lock.json) node_modules
  - name: 🧱 Build site
    task:
      jobs:
        - name: build
          commands:
            - checkout
            - cache restore node-modules-$(checksum package-lock.json),node-modules-
            - npm run build --prefix-paths
With this configuration we dynamically generate a cache key based on the content of package-lock.json. We're also using a fallback key that cache restore can try to partially match. As a result, most workflows will experience a cache hit, which reduces the CI run time by more than 20 seconds.
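To make the key and fallback mechanics concrete, here's a rough illustration of what those two commands do; the hash value below is made up:
# checksum prints a digest of the file, so the key changes whenever
# package-lock.json changes (hash shown here is hypothetical):
checksum package-lock.json    # => 9f86d081884c7d65
# cache restore takes a comma-separated list of keys and uses the first match;
# the bare "node-modules-" prefix matches any previously stored archive:
cache restore node-modules-9f86d081884c7d65,node-modules-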
Note: for npm run build to work without installing gatsby-cli as a global package, I modified package.json to include it in the list of dependencies:
"dependencies": {
"gatsby": "^2.0.19",
"gatsby-cli": "^2.0.19"
}
and linked the binary:
"scripts": {
"gatsby": "./node_modules/.bin/gatsby"
}
Configuring a promotion for continuous deployment
At the end of our build pipeline, as defined in .semaphore/semaphore.yml, we've produced website files in the public/ directory that are ready to be uploaded. It's time to set up a promotion which will trigger a deployment pipeline on the master branch. Our final configuration for the build pipeline is as follows:
# .semaphore/semaphore.yml
version: v1.0
name: Gatsby build pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: ⏬ Install dependencies
    task:
      jobs:
        - name: npm install
          commands:
            - checkout
            - cache restore node-modules-$(checksum package-lock.json),node-modules-
            - npm install
            - cache store node-modules-$(checksum package-lock.json) node_modules
  - name: 🧱 Build site
    task:
      jobs:
        - name: build
          commands:
            - checkout
            - cache restore node-modules-$(checksum package-lock.json),node-modules-
            - npm run build --prefix-paths
            # Store the website files to be reused in the deployment pipeline:
            - cache store website-build public
promotions:
  - name: Deploy to production
    pipeline_file: production-deploy.yml
    auto_promote_on:
      - result: passed
        branch:
          - master
In our case, we used the auto_promote_on property to define a promotion which runs automatically whenever the pipeline passes on the master branch. Many more options are available, as described in Semaphore documentation. We'll define what happens next in a new pipeline configuration file, production-deploy.yml.
The deployment pipeline
We'll deploy the website to AWS S3. The details of the AWS-side setup are outside the scope of this article, but essentially you need to do the following (also sketched with the AWS CLI after the list):
- Create a new bucket;
- Disable all public access restrictions, in Permissions > Public access settings tab;
- Allow everyone to list bucket content, in Permissions > Access Control List tab;
- Enable the "Static website hosting" property.
You can find more details in AWS documentation.
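If you prefer the command line, the bucket creation and static hosting steps can also be done with the AWS CLI. A rough sketch, assuming the CLI is already configured locally; the bucket name is a placeholder, and the public access settings from the list above still need to be adjusted in the console or via aws s3api:
# Create the bucket and enable static website hosting:
aws s3 mb s3://name-of-our-bucket --region us-east-1
aws s3 website s3://name-of-our-bucket \
  --index-document index.html --error-document 404.html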
Once the bucket is ready, we need to execute something like:
aws s3 sync "public" "s3://name-of-our-bucket" --acl "public-read"
aws s3 sync works when you have the AWS CLI installed and connected to a valid account. The Semaphore environment has the AWS CLI preinstalled, so what's left is to provide credentials. A safe way to do that is to create a secret and mount it in our deployment pipeline.
Managing sensitive data with secrets
Private information like API keys or deploy credentials shouldn't be stored in Git. On Semaphore you define these values as secrets, using the sem CLI. Secrets are shared by all projects in the organization.
Assuming that you want to pass your local ~/.aws credentials to Semaphore, execute the following command:
sem create secret aws-credentials \
  --file ~/.aws/config:/home/semaphore/.aws/config \
  --file ~/.aws/credentials:/home/semaphore/.aws/credentials
You can inspect a secret's definition to verify — you'll see the content of files base64-encoded:
$ sem get secret aws-credentials
apiVersion: v1beta
kind: Secret
metadata:
  name: aws-credentials
  id: 5d41c356-7d72-491b-b705-6a86667c50f3
  create_time: "1544622386"
  update_time: "1544622386"
data:
  env_vars: []
  files:
    - path: /home/semaphore/.aws/config
      content: W2RlZmF1bHRdCnJlZ2lvbiA9IHFzLWVhc3QtMQo=
    - path: /home/semaphore/.aws/credentials
      content: W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID...
Looks good — if we include the aws-credentials secret in a pipeline configuration, our ~/.aws files will be available in the home directory of Semaphore's VM, and all aws commands will work as intended.
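If you want to double-check the mounted credentials inside a job, an optional sanity check is to ask AWS who you are; this standard CLI call prints the account and user that the credentials resolve to:
# Optional: verify the credentials picked up from ~/.aws
aws sts get-caller-identity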
The secret is an editable resource: you can run sem edit secret aws-credentials to add environment variables or additional files.
Production deployment pipeline
Finally let's define our production deployment pipeline:
# .semaphore/production-deploy.yml
version: v1.0
name: Deploy website
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: 🏁 Deploy
    task:
      secrets:
        - name: aws-credentials
      jobs:
        - name: Copy to S3
          commands:
            - cache restore website-build
            - aws s3 sync "public" "s3://bucket-name" --acl "public-read"
Because we defined our promotion to run automatically only on the master branch, if you push this file for the first time in a feature branch, the deployment pipeline will not run. However, you can still trigger it manually from the UI by clicking on the "Promote" button.
Note the lack of a checkout command, as we don't need our source code at this point.
The URL of your bucket is http://bucket-name.s3-website-us-east-1.amazonaws.com. Replace bucket-name with the name of your bucket, and us-east-1 with another region's code in case you didn't go for the default. If all went well, you should see your website:
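A quick way to verify the deployment, either from your terminal or as an extra command in the deploy job, is to fetch the page headers; the URL below uses the same placeholders:
# Expect an HTTP 200 if the site is being served:
curl -I "http://bucket-name.s3-website-us-east-1.amazonaws.com"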
So from now on, every change in your blog's source code will be automatically deployed. 🎉
Run post-deploy UI tests
As a bonus, let's extend our deployment pipeline to do one more thing — run tests against the live website.
We'll use Nightwatch.js, so start by adding it to your package.json dependency list:
"dependencies": {
"nightwatch": "^0.9.21"
}
Also in the same file, define a script shortcut:
"scripts": {
"nightwatch": "./node_modules/.bin/nightwatch"
}
Then install the new dependency by running npm install.
The next step is to create a test file. To keep things simple we'll just verify the presence of the expected page title. You can find more information on testing web pages in the Nightwatch documentation.
// tests/postdeploy.js
module.exports = {
  'Test live website' : function (client) {
    client
      .url('http://bucket-name.s3-website-us-east-1.amazonaws.com')
      .waitForElementVisible('body', 1000)
      .assert.title('Gatsby Starter Blog')
      .end();
  }
};
Of course, remember to replace bucket-name with the name of your S3 bucket.
To run tests with Nightwatch, we need Selenium. Download the Selenium standalone server and place it in a new directory inside your project:
mkdir .bin
wget https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar
mv selenium-server-standalone-3.141.59.jar .bin/selenium.jar
Copy the example nightwatch.json file from the Nightwatch Getting Started guide, and modify the selenium settings so that the Selenium server we just downloaded starts automatically when the tests run:
// nightwatch.json
"selenium" : {
  "start_process" : true,
  "server_path" : ".bin/selenium.jar"
}
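Before wiring this into the pipeline, it's worth a local dry run; note that the Selenium server needs a Java runtime on your machine:
npm install          # picks up the new nightwatch dependency
npm run nightwatch   # starts Selenium from .bin/selenium.jar and runs the tests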
Finally, let's extend our deployment pipeline with another block:
# .semaphore/production-deploy.yml
version: v1.0
name: Deploy website
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: 🏁 Deploy
    task:
      secrets:
        - name: aws-credentials
      jobs:
        - name: Copy to S3
          commands:
            - cache restore website-build
            - aws s3 sync "public" "s3://bucket-name" --acl "public-read"
  - name: 🔍 UI tests
    task:
      jobs:
        - name: Check live website
          commands:
            - checkout
            - cache restore node-modules-$(checksum package-lock.json),node-modules-
            - npm run nightwatch
When you commit and push all new files and changes, your deployment pipeline should run tests with the following output:
npm run nightwatch
> gatsby-starter-blog@1.0.0 nightwatch /home/semaphore/gatsby-blog
> nightwatch
Starting selenium server... started - PID: 3239
[Postdeploy] Test Suite
===========================
Running: Test live website
✔ Element <body> was visible after 98 milliseconds.
✔ Testing if the page title equals "Gatsby Starter Blog".
OK. 2 assertions passed. (2.891s)
And your full Semaphore workflow looks like this:
Sweet! We've successfully set up a continuous delivery pipeline, including a safety net of automated tests which will alert us in case we break anything. We deserve a drink. 🍻🥂🥃🥛🥤
All code and configuration is available on GitHub:
markoa/gatsby-blog: an example CI/CD project for a static website, using Semaphore 2.0 and Gatsby.js, deployed to S3.
Thanks for reading! I hope this article helps you get started with CI/CD and Semaphore 2.0. If there's a CI/CD project that you'd like to see covered in a future post, please let me know in the comments. ✌️