Deploying the Free WAF SafeLine on Kubernetes

As I’ve been learning Kubernetes, I decided to practice by deploying SafeLine Community Edition. Here’s a detailed breakdown of the steps I followed.

Environment Setup

  • OS: Ubuntu 22.04
  • Specs: 2 CPU cores / 8 GB RAM
  • Disk: 40G
  • Tools: Minikube v1.31.1

Configuration Files

The configuration files are based on YAML generated by the kompose tool, with some modifications. They consist of two parts: the WAF's core running modules and the database. The default database setup has no persistent storage configured, so if you have your own database cluster, modify the configuration accordingly (Steps 1 and 2).

First, download the configuration archives, move them to your working directory, and extract them:

tar -xzvf safeline-ce-k8s-configs.tar.gz  
tar -xzvf safeline-ce-k8s-db.tar.gz

Step 1: Upload Images

You need to upload the offline images to your Docker registry and then load them into your Minikube cluster using the following script:

minikube image load chaitin/safeline-tengine \
                    chaitin/safeline-mgt-api \
                    chaitin/safeline-mario \
                    chaitin/safeline-detector

After running the script, you can check the images using minikube image ls.
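For example, to confirm that the four SafeLine images were imported, you can filter the list:

# List the images in the Minikube cache and keep only the SafeLine ones
minikube image ls | grep safeline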

Step 2: Modify Database Information

1. Open the management-deployment.yaml file. Replace safeline-ce:${POSTGRES_PASSWORD} with your database username and password, and update the part after @safeline-postgres with the in-cluster domain name of your PostgreSQL service.

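If you are unsure of the in-cluster domain name, remember that Kubernetes services resolve as <service>.<namespace>.svc.cluster.local; listing the services is a quick way to confirm the name and namespace (the grep pattern below is just an example):

# Find the PostgreSQL service; its in-cluster DNS name is
# <service-name>.<namespace>.svc.cluster.local
kubectl get svc -A | grep -i postgres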

2. Open the mario-deployment.yaml file. Similarly, replace safeline-ce:${POSTGRES_PASSWORD} and ${REDIS_PASSWORD} with the correct database credentials, and update the service domain names accordingly.

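For orientation, the two connection settings might look roughly like the sketch below; the variable names (DATABASE_URL, REDIS_URL) and the safeline-redis host are assumptions for illustration, so match them against what is actually in your file:

# Hypothetical excerpt of mario-deployment.yaml -- variable names and the
# Redis host are illustrative; only the credential and host parts change
env:
  - name: DATABASE_URL    # assumed variable name
    value: postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-postgres/safeline-ce
  - name: REDIS_URL       # assumed variable name
    value: redis://:${REDIS_PASSWORD}@safeline-redis:6379/0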

3. If you don't have an existing database, you can use the provided test database configuration (for testing purposes only). Generate random passwords with the following script:

echo "POSTGRES_PASSWORD=$(LC_ALL=C tr -dc A-Za-z0-9 </dev/urandom | head -c 32)" >> .env  
echo "REDIS_PASSWORD=$(LC_ALL=C tr -dc A-Za-z0-9 </dev/urandom | head -c 32)" >> .env  
cat .env

Then, open the postgres-deployment.yaml and redis-deployment.yaml files, and replace the corresponding passwords.

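If you would rather not edit the files by hand, the placeholders can be substituted in place; this assumes the manifests contain the literal ${POSTGRES_PASSWORD} and ${REDIS_PASSWORD} strings shown above:

# Substitute the generated passwords into the test database manifests
# (assumes the files still contain the literal placeholder strings)
source .env
sed -i "s/\${POSTGRES_PASSWORD}/${POSTGRES_PASSWORD}/g" postgres-deployment.yaml
sed -i "s/\${REDIS_PASSWORD}/${REDIS_PASSWORD}/g" redis-deployment.yaml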

Step 3: Start Containers

Make sure your database is up and running before starting the WAF. To apply all configurations, run:

# Navigate to the configuration file directory
cd safeline-ce-k8s-configs
bash ./start.sh

To check the status of your pods, run:

kubectl get all
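If you prefer to block until the pods are actually ready instead of polling, kubectl wait works; the label selector below is a guess based on kompose's default io.kompose.service label, so check your manifests (or kubectl get pods --show-labels) first:

# Wait for the database pod to report Ready (label selector is an assumption)
kubectl wait --for=condition=Ready pod -l io.kompose.service=safeline-postgres --timeout=180s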

Step 4: Testing

First, you can start a test server by running the following command in the SafeLine configuration directory:

kubectl apply -f test-server.yaml

This server runs a simple Python HTTP server on port 8089. In the configuration, a node port (30007) is opened for external access.
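Once you have the node IP (retrieved a little further below with kubectl get node -o wide), a quick smoke test against the demo server looks like this:

# Plain request to the test server exposed on NodePort 30007
curl http://<node-ip>:30007/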

Open the management-deployment.yaml file and check the nodePort value in the user-port section; you can set it explicitly or leave it out and let K8s assign one automatically.

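For orientation, the entry being referred to is a standard Kubernetes Service port definition and might look roughly like the sketch below; take the actual names and numbers from your own copy of the file:

# Illustrative Service port entry -- compare against management-deployment.yaml
ports:
  - name: user-port
    port: 9443        # management port, also used in the port-forward example below
    targetPort: 9443
    nodePort: 30018   # external port used in the URL below; omit to let K8s pick one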

Next, run kubectl get node -o wide to get the IP address of the node. You can then access the WAF management interface at http://<node-ip>:30018.

For direct access to internal ports (e.g., management on port 9443), use port forwarding:

kubectl port-forward service/safeline-management 9440:9443

Now, in a new terminal, you can access the interface at http://localhost:9440.

Once the administration page opens, you can configure the site you want to protect.


Final Thoughts

While the proxy was successfully set up, initial attack tests showed that Tengine failed to forward traffic to the detector, meaning attack signatures weren’t intercepted. This likely stems from misconfigurations within Tengine’s internal Nginx setup.
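A sensible next step would be to inspect the Tengine and detector logs; the deployment names below are assumptions derived from the image names, so confirm them with kubectl get deploy first:

# Check recent logs for forwarding/detection errors (deployment names assumed)
kubectl logs deploy/safeline-tengine --tail=100
kubectl logs deploy/safeline-detector --tail=100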
