Webhooks are a way for two systems to communicate over a network. They are often used as notification mechanisms, where one system notifies another about an event; in other words, they are event-driven.
Coding a webhook engine is easy: coding an efficient webhook engine is another story.
I am working on a webhook engine called Sendhooks, written in Golang, and today we will see how to use it so you never have to worry about sending webhooks again.
Prerequisites
The prerequisites for this article are not that high. Some experience with Docker and NGINX is recommended, as we will mostly use them for simplicity. However, I will do my best to introduce those technologies.
Without further ado, let's start with the project.
The pains of webhooks
It is quite simple to build a webhook engine: at its core, you only need to send data to a specific endpoint. To make the process non-blocking, you can use an asynchronous language or spawn a background task (Django + Celery, for example; see the sketch below). However, when you start dealing with millions of webhooks to deliver, you want more efficient technology: a language with better concurrency management, backed by powerful supporting tools.
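For context, here is roughly what that naive background-task approach looks like with Celery. This is a minimal sketch, not production code; the task name, broker URL, and retry policy are my own illustrative choices, not part of Sendhooks.

# naive_webhooks.py - a minimal sketch of the "Django + Celery" approach (illustrative only)
import requests
from celery import Celery

# Assumes a local Redis broker; adjust the URL to your setup.
app = Celery("naive_webhooks", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def deliver_webhook(self, url, payload):
    """Send one webhook in the background and retry a few times on failure."""
    try:
        response = requests.post(url, json=payload, timeout=5)
        response.raise_for_status()
    except requests.RequestException as exc:
        # Re-queue the task; after max_retries the webhook is simply lost.
        raise self.retry(exc=exc)

This works for a modest volume, but retries, rate limits, delivery status, and monitoring quickly become your problem, which is exactly what a dedicated engine like Sendhooks is meant to take off your hands.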
The Sendhooks engine is written in Golang to take advantage of goroutines and interesting concurrency patterns. As the gateway for the data to be sent, it uses Redis, which is very fast, and the Redis Streams feature adds reliability in case the receiver or the Sendhooks engine is down for a few moments.
In the next section, I will introduce Sendhooks by quickly going over its architecture.
Using the Sendhooks Engine to Send Webhooks
Sendhooks uses Redis Streams to read the data that needs to be sent to a specific URL. Redis is a fast and lightweight solution that is easy to set up on your local machine or Docker. One of the main advantages of Redis Streams is that they act as log records, which can be read by specific groups or users, providing a reliable way to manage data.
Typically, Redis keeps data in the machine's memory, and Pub/Sub channels are fire-and-forget: a message published while a consumer is down is simply lost, which makes them less suitable for webhooks when reliability is essential. Redis Streams, in contrast, store entries as part of the dataset, which can be persisted to disk like any other Redis data, and each entry stays available until a consumer reads and acknowledges it. Even if the service goes down for a few moments, the data can still be retrieved and processed, maintaining reliability and continuity in data handling.
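To make that concrete, here is a small redis-py sketch (not Sendhooks code) showing how a stream entry survives until a consumer explicitly acknowledges it. The host, stream name, and group name are arbitrary choices for the example.

# streams_demo.py - a minimal redis-py sketch of producing and consuming a stream with a consumer group
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Producer side: append an entry to the stream (this is what pushes work towards Sendhooks).
r.xadd("hooks", {"data": json.dumps({"url": "http://example.com/webhook"})})

# Consumer side: create the group once, then read and acknowledge entries.
try:
    r.xgroup_create("hooks", "demo-group", id="0", mkstream=True)
except redis.ResponseError:
    pass  # The group already exists.

entries = r.xreadgroup("demo-group", "consumer-1", {"hooks": ">"}, count=10, block=2000)
for stream_name, messages in entries:
    for message_id, fields in messages:
        print(message_id, fields)
        # Until XACK is called, the entry stays in the pending list and can be re-read
        # after a crash, which is what gives streams their reliability.
        r.xack("hooks", "demo-group", message_id)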
The data flow in Sendhooks begins with Redis, which acts as the initial recipient of the data to be sent. Sendhooks listens for incoming entries on the Redis stream, processes each one, and promptly sends it to the specified URL.
This streamlined process keeps data delivery reliable and efficient, combining the speed of Redis with the concurrency of Sendhooks.
Data sent
The data sent to the Sendhooks engine follows a specific structure, ensuring that all necessary information is included for proper processing and delivery. Here is the detailed shape of the data:
{
  "url": "http://example.com/webhook",
  "webhookId": "unique-webhook-id",
  "messageId": "unique-message-id",
  "data": {
    "key1": "value1",
    "key2": "value2"
  },
  "secretHash": "hash-value",
  "metaData": {
    "metaKey1": "metaValue1"
  }
}
Let's describe the shape of the data:

url: A string that specifies the endpoint where the webhook should be sent. For example, "http://example.com/webhook".
webhookId: A unique identifier for the webhook, represented as a string. This ensures that each webhook can be uniquely tracked and referenced.
messageId: A unique identifier for the message being sent, also represented as a string. This helps in tracking individual messages within the webhook system.
data: An object containing the main payload of the webhook. It includes key-value pairs where keys and values are strings. For example, { "key1": "value1", "key2": "value2" }.
secretHash: A string that represents a hash value used for security purposes. This ensures that the webhook data has not been tampered with and can be verified by the receiver.
metaData: An object containing additional metadata about the webhook. It includes key-value pairs for extra information. For example, { "metaKey1": "metaValue1" }.

This structure ensures that all necessary information is included, making the webhook processing efficient, secure, and reliable.
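As an illustration, here is one way to build such a payload in Python. How the secretHash is computed (plain hash, HMAC, shared secret) is up to your integration and the receiver; the SHA-256 scheme, the uuid-based IDs, and the metaData value below are assumptions made for the example only.

# build_payload.py - illustrative construction of a Sendhooks-shaped payload
import hashlib
import json
import uuid

def build_payload(url, data, secret):
    body = json.dumps(data, sort_keys=True)
    return {
        "url": url,
        "webhookId": f"webhook-{uuid.uuid4()}",
        "messageId": f"message-{uuid.uuid4()}",
        "data": data,
        # Example only: hash the secret together with the body so the receiver
        # can verify integrity. Use whatever scheme you and the receiver agree on.
        "secretHash": hashlib.sha256((secret + body).encode()).hexdigest(),
        "metaData": {"source": "docs-example"},
    }

print(json.dumps(build_payload("http://example.com/webhook", {"key1": "value1"}, "my-secret"), indent=2))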
Now that we understand more about how Sendhooks works, let's focus on how to integrate Sendhooks into an application.
Integrating Sendhooks
In this section, we'll create a quick project using Flask and Sendhooks. We'll use Docker to manage the connections between the services and to launch Redis, MongoDB, and the Sendhooks monitoring service.
First, in your working directory, create a new directory called api. Inside this directory, add the following files: requirements.txt, Dockerfile, and app.py.

mkdir api
touch api/requirements.txt api/Dockerfile api/app.py

The requirements.txt file lists the libraries used in the Flask application. The app.py file will contain the code exposing an endpoint called /api/send through Flask, so we can send a request to the API that will then contact the Sendhooks service via Redis. The Dockerfile contains the instructions for building an image to run the Flask service with Docker.
In the next section, let's write the code for the Flask API.
Writing the Flask API
In this section, we are going to write the code for the Flask API. We just need an endpoint accepting POST requests that takes the payload from the request and sends it to Redis.
Let's add the code:
# api/app.py
from flask import Flask, request, jsonify
import redis
import json

app = Flask(__name__)

r = redis.Redis(host='redis', port=6379, db=0)


@app.route('/api/send', methods=['POST'])
def send_data():
    payload = request.json
    # Use xadd to add the message to the Redis Stream named 'hooks'
    r.xadd('hooks', {'data': json.dumps(payload)})
    return jsonify({"status": "sent to stream"})


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Next, let's add the content of the requirements.txt file. This file will be used by the Dockerfile to set up the Flask API service.
Flask
redis
Next, let's add the code of the Dockerfile.
# api/Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
Great! The Flask API is ready. Now we can focus on adding the Sendhooks service, which only takes a few seconds.
Adding Sendhooks
In this section, we will add the Sendhooks service to a docker-compose file. We are now working at the root of the project, the same directory that contains the api directory. Before doing that, we need a configuration file, config.json, for the Sendhooks service, and a .env.local file for the sendhooks-monitoring service.
Let's start with the config.json file.
{
  "redisAddress": "redis:6379",
  "redisPassword": "",
  "redisDb": "0",
  "redisSsl": "false",
  "redisStreamName": "hooks",
  "redisStreamStatusName": "hooks-status"
}
The configuration parameters for Sendhooks are all optional, since default values are provided, but they are worth reviewing. Define them in the config.json file:

redisAddress: Redis server address. Default is 127.0.0.1:6379.
redisDb: Redis database to use. Default is 0.
redisPassword: Optional password for accessing Redis. No default value.
redisSsl: Enables or disables SSL/TLS. Default is false. If this parameter is true, you will need to add more configuration.
redisStreamName: Redis stream for webhook data. Default is hooks.
redisStreamStatusName: Redis stream for status updates. Default is sendhooks-status-updates.
Next, let's create a file called .env.local and add the following content.
BACKEND_PORT=5002
MONGODB_URI=mongodb://mongo:27017/sendhooks
REDIS_HOST=redis
REDIS_PORT=6379
STREAM_KEY=hooks-status
ALLOWED_ORIGINS=http://localhost:3000
Great. With those files ready, we can now write the docker-compose.yaml file.
To set up a complete development environment for your Sendhooks project, you'll need a docker-compose.yaml file that defines and manages all necessary services. This Docker Compose file includes the following services:
Redis: A fast, in-memory data store.
Mongo: A NoSQL database for storing application data.
Sendhooks: The primary service for sending webhooks.
Sendhooks-monitoring: A monitoring service for tracking the status of webhooks.
Flask API: A Flask-based API to interact with Sendhooks.
Here's the content of the docker-compose.yaml file:
version: '3.9'

services:
  redis:
    image: redis:latest
    hostname: redis
    restart: always
    ports:
      - "6379:6379" # Expose Redis on localhost via port 6379

  mongo:
    image: mongo:latest
    container_name: mongo
    restart: always
    volumes:
      - ./mongo:/data/db # Persist Mongo data on the host

  sendhooks:
    image: transfa/sendhooks
    restart: on-failure
    depends_on:
      - redis
    volumes:
      - ./config.json:/root/config.json # Mount config.json from host to container

  flask-api:
    build: ./api/
    restart: on-failure
    ports:
      - "5001:5000" # Expose Flask API on localhost via port 5001, internal port 5000
    depends_on:
      - sendhooks

  sendhooks-monitoring:
    image: transfa/sendhooks-monitoring
    container_name: sendhooks-monitoring
    restart: on-failure
    env_file:
      - .env.local # Load environment variables from .env.local
    ports:
      - "5002:5002"
      - "3000:3000" # Expose monitoring service on ports 5002 and 3000
    depends_on:
      - sendhooks
      - mongo
      - redis
Great! Now run the following command:
docker compose up -d --build
Once the services have started, three web services will be available:
Navigate to http://localhost:3000 in your web browser to access the dashboard of sendhooks-monitoring.
The backend of sendhooks-monitoring is available at http://localhost:5002.
The Flask API is available at http://localhost:5001, and provides a RESTful interface for interacting with the system.
If you need a URL for testing the webhook service, you can get one for free at https://webhook.site. It is limited to 100 requests, but that should be sufficient for testing.
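Alternatively, if you prefer to test against your own endpoint, a tiny Flask receiver such as the sketch below will do. Keep in mind that Sendhooks runs inside Docker, so the URL you put in the payload must be reachable from the container (for example, http://host.docker.internal:9000/webhook on Docker Desktop); the port and route here are arbitrary choices for the example.

# receiver.py - a minimal Flask endpoint that logs incoming webhooks (for local testing only)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def receive_webhook():
    # Print whatever Sendhooks delivers so you can inspect headers and body.
    print("Headers:", dict(request.headers))
    print("Body:", request.get_json(silent=True))
    return jsonify({"received": True}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=9000)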
Whether you are using Postman, cURL, or any HTTP client or script, here is an example payload to use for sending:
{
  "url": "https://webhook.site/4654ee94-5d82-4b56-98fe-6bf1c7a6d735",
  "webhookId": "webhook-12345",
  "messageId": "message-67890",
  "data": {
    "order_id": "abc123",
    "amount": "100.00",
    "currency": "USD",
    "status": "processed"
  },
  "secretHash": "e99a18c428cb38d5f260853678922e03",
  "metaData": {
    "ip_address": "192.168.1.1",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
  }
}
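For example, here is how you could send it with the Python requests library; the webhook.site URL is just the placeholder from the payload above, so replace it with your own test URL.

# send_test.py - post the example payload to the Flask API, which forwards it to Sendhooks via Redis
import requests

payload = {
    "url": "https://webhook.site/4654ee94-5d82-4b56-98fe-6bf1c7a6d735",
    "webhookId": "webhook-12345",
    "messageId": "message-67890",
    "data": {"order_id": "abc123", "amount": "100.00", "currency": "USD", "status": "processed"},
    "secretHash": "e99a18c428cb38d5f260853678922e03",
    "metaData": {"ip_address": "192.168.1.1", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
}

# The Flask API from earlier is exposed on localhost:5001.
response = requests.post("http://localhost:5001/api/send", json=payload, timeout=5)
print(response.status_code, response.json())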
After sending some webhooks, you should see them in the dashboard.
Clicking on the ID of each will give you information about these webhooks.
🚀 You now know how to use the Sendhooks engine! The next section is optional, but it might help you deploy the Sendhooks service with monitoring on a server.
Deploying on a VPS using Docker, NGINX and Let's Encrypt
NGINX is a high-performance web server and reverse proxy known for its stability, rich feature set, simple configuration, and low resource consumption. Let's Encrypt is a free, automated, and open certificate authority that provides SSL/TLS certificates to enable secure HTTPS connections for websites.
In this section, we'll deploy the project we built above on a VPS. We'll keep using Docker to manage the connections between the services and to launch Redis, MongoDB, and the Sendhooks monitoring service. Additionally, we'll configure NGINX to handle incoming requests and secure the connections using Let's Encrypt.
NGINX Configuration
In the root of the project, add the following NGINX configuration file (nginx.conf):
upstream webapp {
    server flask_api:5000; # the Flask container listens on port 5000 inside the Docker network (5001 is only the host-mapped port)
}

upstream sendhooksmonitoring {
    server sendhooks_monitoring:3000;
}

upstream sendhooksbackend {
    server sendhooks_monitoring:5002;
}
server {
    listen 443 ssl;
    server_name API_DOMAIN MONITORING_DOMAIN BACKEND_DOMAIN;
    server_tokens off;
    client_max_body_size 20M;

    # SSL configuration
    ssl_certificate /etc/letsencrypt/live/API_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/API_DOMAIN/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/API_DOMAIN/chain.pem;
    ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem;

    # Location blocks for different domains
    location / {
        if ($host = "API_DOMAIN") {
            proxy_pass http://webapp;
        }
        if ($host = "MONITORING_DOMAIN") {
            proxy_pass http://sendhooksmonitoring;
        }
        if ($host = "BACKEND_DOMAIN") {
            proxy_pass http://sendhooksbackend;
        }
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }
}
Explanation:
upstream blocks: Define backend services (Flask API, Sendhooks Monitoring).
server block: Configures the server to listen on port 443 with SSL enabled and sets up location blocks to handle requests based on the domain name.
Docker Compose Configuration
Next, we'll update the docker-compose.yaml file to include all necessary services:
version: '3.9'

services:
  nginx:
    container_name: nginx
    restart: on-failure
    image: jonasal/nginx-certbot:latest
    environment:
      - CERTBOT_EMAIL=YOUR_MAIL
      - DHPARAM_SIZE=2048
      - RSA_KEY_SIZE=2048
      - ELLIPTIC_CURVE=secp256r1
      - USE_ECDSA=0
      - RENEWAL_INTERVAL=8d
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - ./nginx.conf:/etc/nginx/nginx.conf
      - static_volume:/app/static
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - flask-api
      - sendhooks
      - sendhooks-monitoring

  redis:
    image: redis:latest
    hostname: redis
    restart: always
    ports:
      - "6379:6379" # Expose Redis on localhost via port 6379

  mongo:
    image: mongo:latest
    container_name: mongo
    restart: always
    volumes:
      - ./mongo:/data/db

  sendhooks:
    image: transfa/sendhooks
    restart: on-failure
    depends_on:
      - redis
    volumes:
      - ./config.json:/root/config.json # Mount config.json from host to container

  flask-api:
    build: ./api/
    container_name: flask_api
    restart: on-failure
    ports:
      - "5001:5000" # Expose Flask API on localhost via port 5001, internal port 5000
    depends_on:
      - sendhooks

  sendhooks-monitoring:
    image: transfa/sendhooks-monitoring
    container_name: sendhooks_monitoring
    restart: on-failure
    env_file:
      - .env.local # Load environment variables from .env.local
    ports:
      - "5002:5002"
      - "3000:3000" # Expose monitoring service on ports 5002 and 3000
    depends_on:
      - sendhooks
      - mongo
      - redis

volumes:
  nginx_secrets:
  static_volume:
Explanation:
nginx: Uses the jonasal/nginx-certbot image for NGINX with Let's Encrypt integration. It restarts on failure and depends on the Flask API, Sendhooks, and Sendhooks Monitoring services.
- environment: Sets environment variables for Certbot configuration.
- volumes: Mounts volumes for SSL certificates and the NGINX configuration file.
- ports: Exposes ports 80 and 443 for HTTP and HTTPS traffic.
redis: Runs a Redis server with automatic restart and port exposure.
mongo: Runs a MongoDB server with data persistence.
sendhooks: Runs the Sendhooks service with a mounted configuration file.
flask-api: Builds and runs the Flask API, exposed on port 5001.
sendhooks-monitoring: Runs the Sendhooks Monitoring service with environment variables loaded from .env.local and exposed on ports 5002 and 3000.
Domain Configuration
After configuring the Docker services, link your server to a domain name by adding the necessary entries in your DNS configuration panel (typically A records pointing your domains to the server's IP address).
Once the DNS configuration is done, you can start working on the deployment process.
Then, on your VPS, spin up the services using the command docker compose up -d --build, and your Sendhooks infrastructure is deployed. 🚀
Conclusion
In this guide, we've shown how to set up and deploy a webhook engine using Sendhooks, along with supporting services like Redis, MongoDB, Flask, and Docker. We also covered securing the deployment with NGINX and Let's Encrypt.
By following these steps, you now have a scalable and secure webhook infrastructure in place. For more details and the complete code, visit the repository.