A Deployment ensures that the desired number of Pods is up and running with the desired configuration at any given point in time.
But what happens when a new Pod is added (due to scaling or version changes)? We know that each Pod has an IP we can reach, and so far we have used port forwarding to reach it from the outside world.
When a Pod is replaced, its IP changes along with it. Maintaining an application on top of Pods whose IPs can change frequently is therefore a challenge. To ensure seamless communication between your application and the outside world, the K8s Service comes as the saviour!
The K8s Service
The K8s Service is a virtual component, implemented as a set of iptables rules within the cluster.
It is used to expose Pods: instead of talking to the Pods directly, you end up talking to just the Service.
The Service takes on the responsibility of routing traffic to and communicating with the Pods.
Linking the Service with your K8s Deployment, ReplicaSet or Pod is simple and always the same: as usual, matching labels do this for you :)
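For instance, here is a minimal sketch of that label matching (the app: my-app label and the resource names are just illustrative, not from any of the demos below):

```yaml
# A Pod carrying the label app: my-app ...
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
---
# ... and a Service whose selector matches that same label.
# Any Pod labelled app: my-app becomes an endpoint of this Service.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```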
Types of Services
There are four types of K8s Services.
- ClusterIP
- NodePort
- LoadBalancer
- ExternalName
By default, K8s creates a ClusterIP type of Service. We can build the other kinds of Services by specifying the type in the spec.type property of the Service configuration file.
Let's explore them one by one!
To be able to demo the LoadBalancer type of Service, I have created a cluster in Azure and will be creating these Services there.
ClusterIP
Exposes your service within your cluster on a cluster-internal IP. Applications can interact with other applications internally using the ClusterIP.
The Service is unreachable from outside the cluster.
This is the default ServiceType.
Demo time!
clusterIp-service-demo.yml
This configuration file creates a Deployment managing three nginx Pods and exposes them behind a ClusterIP Service.
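A minimal sketch of what such a manifest could look like (the resource names, labels and the nginx image are illustrative, not necessarily the exact ones I used):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # three nginx Pods
  selector:
    matchLabels:
      app: nginx-clusterip
  template:
    metadata:
      labels:
        app: nginx-clusterip
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-service
spec:
  type: ClusterIP              # the default type, shown here for clarity
  selector:
    app: nginx-clusterip       # matches the Pod template labels above
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f clusterIp-service-demo.yml`, and `kubectl get svc` will show the cluster-internal IP that was assigned to the Service.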
These Pods can be accessed by other applications within the cluster using the exposed internal IP, 10.0.248.71.
NodePort
Exposes your Service outside your cluster. This creates a mapping from the Pods to their hosting node on a static port, so the Service is accessible at NodeIP:NodePort.
NodeIP is the IP address of your node and NodePort is the port you decide to expose the Service at, usually taken from the range 30000 - 32767.
Demo Time!
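A sketch of what the NodePort configuration could look like (names, labels and image are illustrative; the nodePort value 30003 matches the one used below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nodeport-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-nodeport
  template:
    metadata:
      labels:
        app: nginx-nodeport
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-service
spec:
  type: NodePort
  selector:
    app: nginx-nodeport
  ports:
    - port: 80                 # Service port inside the cluster
      targetPort: 80           # container port on the Pods
      nodePort: 30003          # static port opened on every node
```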
Applying this configuration results in three more Pods.
These Pods are now exposed to the outside world through the NodePort Service at NodeIP:30003 (30003 because that is what the configuration file specifies; otherwise K8s randomly picks a port from the allowed range).
LoadBalancer
This creates load balancers in various Cloud providers like AWS, GCP, Azure, etc., and exposes our application to the Internet.
The Cloud provider will provide a mechanism for routing the traffic to the services.
Demo Time!
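A sketch of a possible LoadBalancer manifest (names, labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-lb
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
spec:
  type: LoadBalancer           # the cloud provider provisions an external load balancer
  selector:
    app: nginx-lb
  ports:
    - port: 80
      targetPort: 80
```

Once the cloud load balancer is provisioned, `kubectl get svc` shows the external IP assigned to the Service.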
Applying the configuration creates Pods exposed via a LoadBalancer.
Now the application is accessible to the world at 20.198.164.24. (Don't try to access it, I will delete the Service shortly to save my pennies :P)
ExternalName
Maps the Service to the contents of the externalName field. Accessing your service within your cluster redirects to the externalName you have provided.
It is not tied to the usual selector labels. Rather, it is mapped to a CNAME record pointing at the external server.
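A minimal sketch of such a Service, using the my-service / my.service.com names from the example below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: ExternalName
  externalName: my.service.com   # the cluster DNS returns a CNAME to this host
```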
This creates a Service which, when accessed at my-service.default.svc.cluster.local from within the cluster, redirects to my.service.com.
Hope this gives you a good introduction to the K8s Service object. See you in the next blog!
Happy learning!