Introduction
"why, why, WHY?"
After seeing @bdougieyo build a ProBot app and @blackgirlbytes' fresh take on deploying ProBot to AWS Lambda, I figured I would spice things up a bit by researching the most cost-effective way to run a serverless GitHub application.
Before I go on, you might be thinking things like:
- you only care about money?!
- AWS Lambda is dirt cheap!
- it's all a configuration war you cannot win!
Thinking about these hypothetical objections, my inner dialogue goes on:
Me: "Hold up, whaaaat?!"
Other me: "Yes it's CloudFlare Workers!"
The simple explanation is that I'm proposing we use the Service Worker API. Cloudflare offers a flat, free 100k requests a day if you can keep your code cutting edge (read: browser-compatible), plus local development and testing options with miniflare and a key/value (KV) store.
If you still have your doubts, it might be because you know that the build system uses Webpack 4 out of the box. However, custom builds mean it can do Rollup, and if it can do Rollup, it can do Vite. Yes, @mtfoley, this is shaping up to be another converting to Vite series!
We'll be applying our solution to the catsup-app GitHub application being developed in the Open Sauced org. For each repo that has the application installed, our Discord will be updated when an issue has the good first issue label applied.
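As a rough sketch (not the actual catsup-app code), such a Probot handler might look something like this, with DISCORD_URL standing in for wherever the Discord webhook URL is configured:

// a hypothetical Probot handler, not the actual catsup-app code
module.exports = (app) => {
  app.on("issues.labeled", async (context) => {
    const { label, issue, repository } = context.payload;
    // only react to the "good first issue" label
    if (label?.name !== "good first issue") return;
    // DISCORD_URL is assumed to be configured as an environment variable;
    // fetch is global on workers, plain Node would need a polyfill such as node-fetch
    await fetch(process.env.DISCORD_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        content: `New good first issue in ${repository.full_name}: ${issue.html_url}`,
      }),
    });
  });
};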
Technical part
"how to how-to X!"
Requirements
This is going to hurt:
- make the existing Probot code compatible
- writing less-than-browser-compatible code, since workers support only a subset of the Web APIs
- <10 ms CPU execution time due to the workers' limits
- automated releases over an open-source repository
- secure deployments
Code
Assuming the workers PR will eventually be production ready, the code should be visible over at:
open-sauced / catsup
This app will index your PR and issue data.
📖 Prerequisites
In order to run the project locally we need node>=16 and npm>=8 installed on our development machines.
🖥️ Local development
To run the GitHub App code against a test repository, add TEST_REPOSITORY to your local .env file.
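The authoritative variable list lives in .env.example; a filled-in .env might look roughly like this (all values are placeholders, sourced from the deployment steps below):

# .env (placeholder values; see "Deploy to production" for where each one comes from)
APP_ID=...
CLIENT_ID=...
CLIENT_SECRET=...
WEBHOOK_SECRET=...
APP_PK=...
DISCORD_URL=https://discord.com/api/webhooks/...
# repository the app is run against during local development
TEST_REPOSITORY=your-org/your-test-repo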
To start the server locally at port 8888:
npm start
📦 Deploy to production
Netlify account
Install the Netlify GitHub app in the repository and configure the environment variables listed in .env.example.
GitHub App
Register a new GitHub application with the permissions issues:write and metadata:read. Set the webhook URL to <your netlify domain>/api/github/webhooks.
Once registered, you will be able to obtain all the GITHUB_APP_* credentials from the app settings.
It is advised you generate the WEBHOOK_SECRET using the following command:
# random key strokes can work too if you don't have ruby
ruby -rsecurerandom -e 'puts SecureRandom.hex(20)'
In order for the project to be shipped as a service function, the node environment cannot be used in any of the production code. Reviewing the Probot source, one might see a dead end in that it uses require("dotenv").config(). However, its underlying framework, Octokit, does not come with any opinionated code in this regard.
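As a hypothetical sketch of that idea (modelled loosely on the example repository linked below, not on the final catsup code), the worker can build everything from @octokit/app and the encrypted bindings Cloudflare injects, with no dotenv in sight:

// worker.js sketch; APP_ID, APP_PK and WEBHOOK_SECRET are the encrypted worker secrets
import { App } from "@octokit/app";

const app = new App({
  appId: APP_ID,
  privateKey: APP_PK,
  webhooks: { secret: WEBHOOK_SECRET },
});

// handlers register much like Probot's app.on(...)
app.webhooks.on("issues.labeled", async ({ payload }) => {
  // ...same "good first issue" logic as in the earlier sketch
});

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Octokit verifies the signature and dispatches to the handlers above
  await app.webhooks.verifyAndReceive({
    id: request.headers.get("x-github-delivery"),
    name: request.headers.get("x-github-event"),
    signature: request.headers.get("x-hub-signature-256"),
    payload: await request.text(),
  });
  return new Response("ok");
}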
Simply expanding the script into the Probot equivalent while dodging the node imports was very easy and has been done before. Being able to see existing working code made the process a lot more enjoyable:
gr2m / cloudflare-worker-github-app-example
A Cloudflare Worker + GitHub App Example
The worker.js file is a Cloudflare Worker which is continuously deployed using GitHub Actions (see .github/workflows/deploy.yml).
The worker does 2 things:
- GET requests: respond with an HTML website with links and a live counter of installations.
- POST requests: handle webhook request from GitHub
universal-github-app-jwt. For the time being, you could define a secret path that webhook requests by GitHub are sent to, in order to prevent anyone who knows your workers URL from sending fake webhook requests. See #1
Step-by-step instructions to create your own
Note that you require access to the new GitHub Actions for the automated deployment to work.
- Fork this…
Using the same probot/smee-client that ships with Probot, we divert the webhook URL to one on localhost for the development application; for the production application we will enter a custom route.
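For local development that diversion can be a one-liner; a sketch using smee-client directly, where the smee.io channel URL is a placeholder and the target matches the local server and webhook path mentioned above:

// hypothetical local helper, not part of the repo
const SmeeClient = require("smee-client");

const smee = new SmeeClient({
  source: "https://smee.io/your-channel-id", // placeholder smee channel
  target: "http://localhost:8888/api/github/webhooks", // the local server from above
  logger: console,
});

// forwards GitHub webhook deliveries to localhost; call events.close() to stop
const events = smee.start();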
While it might look like an anti-pattern, setting up a private local-only application in the workers configuration file is perfectly safe and meant as the most basic way to ensure all environment variables are encrypted for the production environment. In fact, a helpful property of workers is that we are unable to deploy the app if the requisite environment variables don't exist, and the only way to add them is by encrypting them as secrets.
The above secrets definition pattern requires that we set up the GitHub application and Discord hooks before trying to deploy the service worker, as it would otherwise fail with unencrypted or loose values.
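Under that constraint, each value can be encrypted from the terminal before the first deploy; assuming the repo proxies wrangler through an npm script (as in the "Local publish" section below), the commands would look roughly like:

# prompts for the value and stores it encrypted on the worker
npm run wrangler -- secret put WEBHOOK_SECRET
# repeat for APP_ID, APP_PK, DISCORD_URL, CLIENT_ID and CLIENT_SECRET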
Setting up the service worker
1. Cloudflare worker
Set up a Cloudflare account and enable workers, then change account_id in wrangler.toml to your account ID.
Go to your workers dashboard and create a new worker; select any template and adjust name in wrangler.toml if the existing one is taken.
Write down the "Routes" URL provided by the worker for the next parts. It will serve as the webhook return URL.
2. GitHub application
Create a new GitHub application with the scopes issues:write and metadata:read, while also subscribing to issue events.
Upon creation you should have plain-text values for APP_ID and CLIENT_ID.
Click the "Generate a new client secret" button and copy the resulting value of CLIENT_SECRET
.
In the webhook URL field, enter the worker route you wrote down in the last step of the Cloudflare setup.
If you have Ruby installed, it is advised you generate the WEBHOOK_SECRET using the following command:
# random key strokes can work too if you don't have ruby
ruby -rsecurerandom -e 'puts SecureRandom.hex(20)'
Now, go to the very bottom and click "Generate a new private key", then open a terminal in the location of the downloaded file.
Rename this file to private-key.pem for the next command to work:
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in private-key.pem -out private-key-pkcs8.key
Copy the contents of private-key-pkcs8.key to APP_PK.
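The PKCS#8 detour exists because the Web Crypto API available inside workers can only import RSA private keys in PKCS#8 (or JWK) format, while GitHub hands out a PKCS#1 PEM. A hypothetical sketch of what the signing library does with the converted key, where pemToBinary is a made-up helper that strips the PEM header/footer and base64-decodes the body:

// sketch only; universal-github-app-jwt does the equivalent of this for us
async function importAppKey(pkcs8Pem) {
  return crypto.subtle.importKey(
    "pkcs8", // the format produced by the openssl command above
    pemToBinary(pkcs8Pem), // hypothetical PEM -> ArrayBuffer helper
    { name: "RSASSA-PKCS1-v1_5", hash: "SHA-256" }, // RS256, as GitHub App JWTs require
    false,
    ["sign"]
  );
}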
3. Discord webhook
Go to your server of choice, click "Settings" and then "Integrations", create a new webhook, and copy its URL into DISCORD_URL.
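A quick way to sanity-check the webhook before wiring it into the worker is to post a throwaway message to it by hand (the message text is just an example):

# replace $DISCORD_URL with the webhook URL you just copied
curl -H "Content-Type: application/json" \
  -d '{"content": "hello from catsup-app"}' \
  "$DISCORD_URL"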
Now you are good to use the wrangler release workflows and deploy to production!
4. Environment variables
Select the "Settings" tab on your newly created worker and click "Variables", add the following variables with the values described in the previous steps:
- APP_ID
- APP_PK
- DISCORD_URL
- CLIENT_ID
- CLIENT_SECRET
- WEBHOOK_SECRET
Encrypt all of them and deployment will start working both locally and in the CI workflows!
Deployment
Neither the PR code nor the maintainers are yet sure of the best way to approach deployment to multiple environments. There is a minor concern that the CI action could leak the target URL, which would open the possibility of service disruption. Making the deployment target fully private, i.e. deploying from wrangler locally, would keep the discovery process only partially visible through application installations and limit outbound attack vectors considerably. Sitting behind 2 of the biggest global CDNs also helps a lot!
Local publish
Log in to Cloudflare with your account credentials (the browser will open an OAuth dialog) using:
npm run wrangler -- login
Now you can test that all the variables are correct by publishing from the terminal:
# npm run wrangler -- publish
npm run publish
Open up a real-time production log using:
npm run wrangler -- tail
GitHub Actions
Create a new GitHub Actions secret named CF_API_TOKEN; to get its value, create a new API token in Cloudflare using the "Edit Cloudflare Workers" template.
Push new code to the repository; after a release, the new code is sent to the worker and propagates instantly.
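The actual release workflow lives in the repository, but a minimal sketch of such a deploy job, assuming the cloudflare/wrangler-action is used for publishing, might look like:

# .github/workflows/deploy.yml (illustrative only)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Publish worker
        uses: cloudflare/wrangler-action@2.0.0
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}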
Conclusion
"easy, what next?!"
Some things come to mind as potential improvements:
- switching the build system to Vite
- implementing testing and coverage commands
- moving secrets to KV namespaces for easier environment deployments
- dockerizing the repository