Note to the Reader
In this post, I'm experimenting with a more concise format. The highly detailed posts are taking 2-3 days to complete, so in this one, I’ll focus on describing Kubernetes Namespaces and their components before moving directly into exercises.
If you noticed the content of this post shifting around a little, it's because I was getting errors while trying to publish it and some of what I had written hadn't been saved.
Kubernetes Namespaces
Namespaces are a way to divide cluster resources between multiple users or applications. They help organize objects in Kubernetes and manage access, often in scenarios where different environments (e.g., development, staging, and production) coexist within the same cluster. With namespaces, we can isolate resources while ensuring that different teams or applications can share the same infrastructure.
Namespaces are ideal when you need logical separation but want to avoid the complexity of multiple clusters.
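Every cluster already ships with a few namespaces (default, kube-system, and so on), so listing them is a quick way to see the concept in action before creating your own:
kubectl get namespaces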
Create Two Namespaces: ns1 and ns2
Create the first namespace:
kubectl create namespace ns1
Create the second namespace:
kubectl create namespace ns2
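To confirm that both were created, you can list just the two new namespaces:
kubectl get ns ns1 ns2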
Note: At this point, it may be easier to work in two terminal windows. In the first terminal, connect to the ns1 namespace. In the second terminal, connect to the ns2 namespace using the following commands:
- For the first terminal:
kubectl config set-context --current --namespace=ns1
- For the second terminal:
kubectl config set-context --current --namespace=ns2
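If you lose track of which terminal is pointed at which namespace, you can check the namespace recorded in the current context:
kubectl config view --minify | grep namespace: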
Create a deployment with a single replica in each of these namespaces, using the nginx image and the names deploy-ns1 and deploy-ns2, respectively
Deployment for ns1 – Save this configuration as ns1-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-ns1
  namespace: ns1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy-ns1
  template:
    metadata:
      labels:
        app: deploy-ns1
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.4-alpine
        ports:
        - containerPort: 80
Deployment for ns2 – Save this configuration as ns2-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-ns2
  namespace: ns2
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy-ns2
  template:
    metadata:
      labels:
        app: deploy-ns2
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.4-alpine
        ports:
        - containerPort: 80
Apply each YAML configuration to create the deployments
kubectl apply -f ns1-deployment.yml
kubectl apply -f ns2-deployment.yml
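Before moving on, it's worth confirming that both deployments rolled out successfully; each should report 1/1 pods ready:
kubectl get deployments -n ns1
kubectl get deployments -n ns2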
Get the IP address of each of the pods
Namespace ns1
kubectl get pods -o wide -n ns1
Example Output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-ns1-68b96c55c4-db4b9 1/1 Running 0 13m 10.244.2.8 cka-cluster-worker <none> <none>
Namespace ns2
kubectl get pods -o wide -n ns2
Example Output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-ns2-7c6646cf97-76xcz 1/1 Running 0 14m 10.244.1.10 cka-cluster-worker2 <none> <none>
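As a side note, if you only want the pod IP (to avoid copying it out of the wide output by hand), jsonpath can extract it directly; substitute your own pod name:
kubectl get pod deploy-ns2-7c6646cf97-76xcz -n ns2 -o jsonpath='{.status.podIP}'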
Exec into the pod of deploy-ns1 and try to curl the IP address of the pod running in deploy-ns2
kubectl exec -it deploy-ns1-68b96c55c4-db4b9 -n ns1 -- curl 10.244.1.10
Upon successful execution, you should see an HTML response indicating that the deploy-ns1 pod can successfully connect to the deploy-ns2 pod, confirming that the NGINX server is up and running:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support, please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Scale both deployments from 1 to 3 replicas
Note: I chose to scale through the YAML file for source control purposes.
Update the replicas field in each deployment YAML file, then re-apply as shown after the two files below:
- ns1-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-ns1
  namespace: ns1
  labels:
    app: nginx
spec:
  replicas: 3 # Updated to 3 replicas
  selector:
    matchLabels:
      app: deploy-ns1
  template:
    metadata:
      labels:
        app: deploy-ns1
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.4-alpine
        ports:
        - containerPort: 80
- ns2-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-ns2
  namespace: ns2
  labels:
    app: nginx
spec:
  replicas: 3 # Updated to 3 replicas
  selector:
    matchLabels:
      app: deploy-ns2
  template:
    metadata:
      labels:
        app: deploy-ns2
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.4-alpine
        ports:
        - containerPort: 80
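After updating the files, re-apply them so the change takes effect (kubectl apply is declarative, so re-running it is safe):
kubectl apply -f ns1-deployment.yml
kubectl apply -f ns2-deployment.yml
Alternatively, if you don't mind drifting from the YAML in source control, the same scaling can be done imperatively:
kubectl scale deployment deploy-ns1 --replicas=3 -n ns1
kubectl scale deployment deploy-ns2 --replicas=3 -n ns2
Either way, kubectl get pods -n ns1 and kubectl get pods -n ns2 should now each show three pods.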
Create two services to expose both of your deployments and name them svc-ns1 and svc-ns2
svc-ns1.yml:
apiVersion: v1
kind: Service
metadata:
  name: svc-ns1
  namespace: ns1
spec:
  selector:
    app: deploy-ns1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
svc-ns2.yml:
apiVersion: v1
kind: Service
metadata:
  name: svc-ns2
  namespace: ns2
spec:
  selector:
    app: deploy-ns2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
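As an aside, the same two services could be created imperatively with kubectl expose, which reuses the deployment's selector (ClusterIP is the default type):
kubectl expose deployment deploy-ns1 --name=svc-ns1 --port=80 --target-port=80 -n ns1
kubectl expose deployment deploy-ns2 --name=svc-ns2 --port=80 --target-port=80 -n ns2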
Apply the Service Definitions
Once you’ve created both YAML files (let's say svc-ns1.yml and svc-ns2.yml), you can apply them using the following commands:
kubectl apply -f svc-ns1.yml
kubectl apply -f svc-ns2.yml
Verify the Services
After applying, you can verify that your services are up and running by executing:
kubectl get services -n ns1
kubectl get services -n ns2
This should show you the svc-ns1 and svc-ns2 services, along with their cluster IPs and ports.
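If a service exists but traffic doesn't flow, check its endpoints; an empty list usually means the selector doesn't match the pod labels:
kubectl get endpoints svc-ns1 -n ns1
kubectl get endpoints svc-ns2 -n ns2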
Exec into each pod and try to curl the IP address of the service running in the other namespace.
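To grab the ClusterIP of the service in the other namespace (for example, svc-ns2's), you can pull it out with jsonpath:
kubectl get svc svc-ns2 -n ns2 -o jsonpath='{.spec.clusterIP}'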
kubectl exec -it <pod-name> -n ns1 -- curl <ip-address>
This curl should work (it did).
Now try curling the service name instead of the IP
kubectl exec -it deploy-ns1-68b96c55c4-db4b9 -n ns1 -- curl svc-ns2
You will notice that you get an error: the host cannot be resolved, because a bare service name only resolves within the pod's own namespace.
Now use the FQDN of the service and try to curl again
This should work:
kubectl exec -it deploy-ns1-68b96c55c4-db4b9 -n ns1 -- curl svc-ns2.ns2
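The short form svc-ns2.ns2 works because the pod's DNS search domains fill in the rest; the fully qualified name also works (assuming the default cluster.local cluster domain):
kubectl exec -it deploy-ns1-68b96c55c4-db4b9 -n ns1 -- curl svc-ns2.ns2.svc.cluster.local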
In the end, delete both the namespaces
This should delete the services and deployments underneath them:
kubectl delete namespace ns1
kubectl delete namespace ns2
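Deleting a namespace removes everything inside it; a final listing confirms both are gone (they may show a Terminating status briefly):
kubectl get namespaces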
Summary of Learning on Kubernetes Namespaces and Services
In this blog post, we explored the concept of Kubernetes namespaces and their role in organizing cluster resources. We went through a series of exercises to solidify our understanding of namespaces, deployments, and services. Here’s a recap of what we covered:
Understanding Kubernetes Namespaces
- Definition: Namespaces provide a mechanism to divide cluster resources between multiple users or applications, allowing for logical separation without the complexity of managing multiple clusters.
- Use Cases: They are particularly useful in scenarios where different environments (development, staging, and production) coexist within the same cluster, enabling teams to share infrastructure while isolating resources.
Creating Namespaces
- We created two namespaces, ns1 and ns2, using the following commands:
kubectl create namespace ns1
kubectl create namespace ns2
Deployments in Namespaces
- We deployed a single replica of Nginx in each namespace:
  - Deployment in ns1: Configuration saved as ns1-deployment.yml.
  - Deployment in ns2: Configuration saved as ns2-deployment.yml.
- We used the kubectl apply command to create these deployments.
Scaling Deployments
- We scaled both deployments from 1 to 3 replicas using the YAML configuration files for source control, ensuring that our changes were tracked.
Exposing Deployments with Services
- We created two services to expose our deployments:
  - Service for ns1: Configuration saved as svc-ns1.yml.
  - Service for ns2: Configuration saved as svc-ns2.yml.
- This step gave our Nginx deployments stable, in-cluster endpoints (ClusterIP services are not reachable from outside the cluster by default).
Pod-to-Pod Communication
- We executed commands to curl the IP addresses of the services from within the pods:
kubectl exec -it <pod-name> -n ns1 -- curl <ip-address>
- We then attempted to curl the bare service name, which failed to resolve across namespaces, demonstrating how Kubernetes DNS scopes short names to the pod's own namespace.
Fully Qualified Domain Name (FQDN) Usage
- By using the FQDN of the service, we successfully accessed the services across namespaces:
kubectl exec -it deploy-ns1-68b96c55c4-db4b9 -n ns1 -- curl svc-ns2.ns2
Cleanup
- To maintain a tidy workspace, we deleted both namespaces, which also removed the associated services and deployments:
kubectl delete namespace ns1
kubectl delete namespace ns2
Key Takeaways
- Namespaces are essential for organizing resources and managing access in a Kubernetes cluster.
- Deployments and Services work hand-in-hand to run scalable applications and expose them within the cluster.
- FQDN is critical for cross-namespace communication, emphasizing the importance of DNS within Kubernetes.
- Maintaining cleanliness in our cluster by deleting unnecessary namespaces and resources is crucial for efficient management.
Tags and Mentions
@piyushsachdeva
Day 8 video