As March 2024 unfolds, I was thrilled to see this LinkedIn post by @forrestbrazeal.
From that post, I knew it would be a tale of exploration, learning, and growth in the vast expanse of cloud computing, a realm where innovation meets opportunity.
Join me as I unveil the beginning of this adventure, a journey fueled by passion, curiosity, and the desire to elevate my skills to new heights.
Scenario
We have to deploy an e-commerce website. This is a modern web application that poses challenges around scalability, consistency, and availability. To address these, we've opted for a solution that harnesses containerization managed by Kubernetes.
Prerequisites
In this challenge, certain prerequisites are necessary, outlined in detail here. However, among these, the most crucial for me is gaining familiarity with the Application Source Code, accessible here.
Implementation
Before commencing each step, it's crucial to fulfill all necessary requirements, as progression to subsequent steps may otherwise be impeded. Step 1 might be the exception to this rule.
Step 1: Certification
I'm fortunate to have this certification already secured, but it's been a while since I delved into Kubernetes. This challenge serves as a refresher, ensuring I'm up to speed with this essential technology.
Step 2: Containerize Your E-Commerce Website and Database
Web Application Containerization
In this step, our task is to craft our own Docker image starting from the base image `php:7.4-apache` and configure the essential components. Fortunately, the provided hints are thorough, guiding us through the process. Let's proceed by crafting a Dockerfile to translate these hints into commands.
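As a sketch of where this lands (the extension to install and the source path are assumptions based on the challenge hints and a typical repository layout, not a definitive recipe), the Dockerfile might look like this:

```dockerfile
# Base image specified by the challenge
FROM php:7.4-apache

# The app talks to MariaDB, so the mysqli extension is needed
# (assumption: the challenge hints call for mysqli)
RUN docker-php-ext-install mysqli

# Copy the application source into Apache's document root
# (assumption: the source lives in app/ in the repository)
COPY app/ /var/www/html/

EXPOSE 80
```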
This Dockerfile will then be built into a Docker image, and that image will then be pushed to our Docker Hub account. You can use these commands to achieve this:
# Build the Docker image
docker build -t "<docker_username>/<repository_name>:<tag>" .
# Push to Docker Hub
docker push "<docker_username>/<repository_name>:<tag>"
Database Containerization
For our Database component, we won't need to create a custom Docker image; instead, we'll simply pull an image from the Public DockerHub.
Notice that there is a `db-load-script.sql` script; it's essential to understand its functionality before proceeding confidently. Let's delve into its purpose to ensure we're well-prepared for the next steps.
USE ecomdb;
CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;
INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");
Step 3: Set Up Kubernetes on a Public Cloud Provider
We're now at the stage where we must set up our Kubernetes cluster. For this, I've opted for AWS (EKS). I've taken the initiative to create the necessary resources, starting from the AWS VPC components and extending up to the EKS cluster.
Reminder: In the EKS cluster setup, ensure that you have permission to assume the IAM role with which you'll create the cluster, as you will not be able to access the cluster without it. Alternatively, configure the `authentication_mode` to `API_AND_CONFIG_MAP` and `cluster_endpoint_public_access` to `true`.
Please be patient, as the creation of the EKS cluster may take between 10 and 20 minutes.
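For reference, the two settings mentioned above are inputs of the popular `terraform-aws-modules/eks` Terraform module. A hypothetical excerpt (the cluster name and version are placeholders of my own, and the required VPC and node group inputs are omitted) might look like:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "ecommerce-cluster" # hypothetical name
  cluster_version = "1.29"              # assumption

  # Allow both API- and ConfigMap-based cluster authentication
  authentication_mode = "API_AND_CONFIG_MAP"

  # Make the cluster endpoint reachable from outside the VPC
  cluster_endpoint_public_access = true

  # ... VPC, subnet, and node group inputs omitted ...
}
```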
Once the cluster is successfully created, you'll want to verify your access by ensuring that you can execute `kubectl` commands.
To connect to the cluster, add a new cluster context to your kubeconfig file (typically `~/.kube/config`). You can achieve this by using the following command:
aws eks update-kubeconfig --region <aws_region> --name <cluster_name> --profile <profile_name>
If you're utilizing an `EKSClusterCreatorRole` IAM role, you can assume the role and execute the aforementioned command. An effective tool for this purpose is aws-vault.
Step 4: Deploy Your Website to Kubernetes
In this step, I've generated a Kubernetes definition file to instantiate a Kubernetes `Deployment` resource. This deployment utilizes the Docker image we previously pushed to our Docker Hub repository.
We've also set up another `Deployment` resource to host our `mariadb` image. However, configuring this resource involves additional steps, such as specifying the root password for the database and setting up the `db-load-script.sql`.
To set the root password, you can define your desired value as an environment variable named `MYSQL_ROOT_PASSWORD`.
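In the mariadb container spec, that might look like the snippet below (the literal value is a placeholder; a Kubernetes `Secret` would be the more secure choice for a real setup):

```yaml
containers:
  - name: mariadb
    image: mariadb:latest
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: "<your_root_password>" # placeholder; prefer a Secret
```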
As for the `db-load-script.sql`, we've created a `ConfigMap` Kubernetes resource to store its data.
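One way to produce that `ConfigMap` (assuming the script sits in your current directory; the resource and file names are my own choices) is to generate the manifest directly from the file:

```shell
# Generate a ConfigMap manifest from the script file
kubectl create configmap db-load-script \
  --from-file=db-load-script.sql \
  --dry-run=client -o yaml > db-configmap.yaml
```

The official mariadb image executes any `.sql` files found under `/docker-entrypoint-initdb.d` on first initialization, so mounting this `ConfigMap` at that path (via a `volume` and `volumeMount` in the database `Deployment`) loads the data automatically.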
A useful trick to streamline this process is employing `kubectl` commands to automatically generate the Kubernetes definition file. For instance, if you wish to create a `Deployment` definition file, you can execute the command:
kubectl create deploy --image=busybox sample --dry-run=client -o yaml > sample.yaml
This will generate a `sample.yaml` file with the contents below.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sample
  name: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sample
    spec:
      containers:
      - image: busybox
        name: busybox
        resources: {}
status: {}
And if you want to get the properties of an existing resource, you can run this command instead:
kubectl get deploy <existing_deployment_name> -o yaml > sample.yaml
Note: These commands are not limited to the `Deployment` resource; you can also use them with other resources such as `Pod`, `Service`, etc.
Configure the Database
In line with best practices, it's recommended to deploy the database before the website. This allows for thorough testing of the database configuration and the creation of a new Database User
, as it's considered best practice to avoid using root
for day-to-day tasks.
Once the database `Deployment` is created, you can remotely connect to the `Pod`.
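One way to do that (the deployment name is a placeholder to substitute with your own) is:

```shell
# Open an interactive shell inside the database Pod
kubectl exec -it deploy/<db_deployment_name> -- /bin/bash

# Or jump straight into the database client
kubectl exec -it deploy/<db_deployment_name> -- mysql -uroot -p
```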
Once connected, log in to the database. Check whether the expected databases were created, specifically `ecomdb`.
+--------------------+
| Database |
+--------------------+
| ecomdb |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
Check whether data was inserted into the `ecomdb` database: confirm that the `products` table was created and that it contains rows.
Note: All of this predefined data comes from the `db-load-script.sql`.
Once everything is verified, create a database user. Take note of the username and password that we've given to the user, as the website `Pod` will use this user to connect to the database.
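The SQL for this might look like the following (the username, password, and host scope are placeholders to adapt):

```sql
CREATE USER '<db_user>'@'%' IDENTIFIED BY '<db_password>';
GRANT ALL PRIVILEGES ON ecomdb.* TO '<db_user>'@'%';
FLUSH PRIVILEGES;
```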
Configure the Website
With our database preparations complete, our website now possesses the requisite variables for authenticating with the database.
However, before proceeding, we need to make adjustments in the `app/index.php` file to ensure our PHP application can fetch the database connection strings that we will provide via environment variables. These environment variables will then be defined in our `Deployment` definition file.
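As a sketch, the connection call in `index.php` might be changed from hard-coded strings to something like this (the environment variable names are my own choice, not mandated by the challenge; they just need to match what the `Deployment` injects):

```php
<?php
// Read connection details from environment variables
// injected by the Kubernetes Deployment
$link = mysqli_connect(
    getenv('DB_HOST'),     // e.g. the database Service name
    getenv('DB_USER'),
    getenv('DB_PASSWORD'),
    getenv('DB_NAME')      // e.g. ecomdb
);
```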
This step is undeniably both laborious and pivotal in ensuring the functionality of our web application. I'm grateful for the sparse hints offered in the challenge steps, as they motivated me to explore diverse strategies to surmount this obstacle. I swear, after completing this step, everything feels even more exhilarating!
Step 5: Expose Your Website
Now, it's time to set up a Kubernetes `Service` to make our `Deployment` accessible. We'll opt for a `LoadBalancer`-type `Service`, which will generate an AWS load balancer to expose our web application beyond the confines of our Kubernetes cluster.
It's crucial to ensure the `selector` section in your definition file contains the correct values. Below is a sample `Service` definition file for your reference.
apiVersion: v1
kind: Service
metadata:
  name: <name_of_service>
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
  selector:
    <label_key_of_webapp_pod>: <label_value_of_webapp_pod>
Even though it's not mentioned in the step, I think it's also beneficial to create another `Service` for our database `Deployment` resource so it can be reached in a consistent manner using the `Service` endpoint.
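A `ClusterIP` `Service` (the default type) is enough here, since the database only needs to be reachable from inside the cluster. A sketch, following the same placeholder style as above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <name_of_db_service>
spec:
  type: ClusterIP
  ports:
    - port: 3306
      protocol: TCP
  selector:
    <label_key_of_db_pod>: <label_value_of_db_pod>
```

The website can then reach the database at the stable DNS name `<name_of_db_service>` on port 3306, regardless of which `Pod` is currently backing it.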
Once the website `Service` is established, it generates a DNS endpoint through which you can access the web application. You can find this endpoint either by retrieving the details of the `Service` or within the AWS Management Console as a load balancer.
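For example, substituting your own `Service` name:

```shell
# Print the DNS name of the AWS load balancer backing the Service
kubectl get svc <name_of_service> \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```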
For instance, you can access my website at this link as an example of such an endpoint.
You might encounter several issues here, specifically regarding the authentication of our web application to our database, such as:
ERROR 1045 (28000): Access denied for user 'username'@'localhost' (using password: NO)
In such cases, it's crucial to exercise extra caution and ensure that you're providing the correct credentials as configured in the previous step.
I'm concerned that the length of this blog post might be making it feel tedious. However, it's important to remember that despite being only halfway through our journey, each step we take brings us closer to completing this challenge.
Before we embark further into our cloud journey, I invite you to stay connected with me on social media. Follow along on Twitter and LinkedIn. Let's continue this exploration together and build a thriving community of cloud enthusiasts. Join me on this exciting adventure!