By Anaïs Urlichs


Learn how to expose your application through the LoadBalancer Service type in Kubernetes, understand the security risks of doing so, and see why an Ingress or service mesh is usually the better choice.


Introduction to load balancers

Welcome back to another Kubernetes Service introduction video. In this video, we're going to look at the LoadBalancer type. In the previous videos, we looked at why we need Kubernetes Services, how they work, what they do, and the different types. The last type, LoadBalancer, is the one we will discuss now. Let's assume we have a K3s Kubernetes cluster running on Civo, with several nodes inside it.

So, now we have external traffic: traffic coming from outside the cluster through which we want to reach our application. The application, however, is sitting comfortably inside the pods running on our nodes.

We can set up a ClusterIP Service, the default type, and then route the incoming traffic through additional rules, something like an Ingress or an external load balancer, to reach the application running in our pods across the nodes.
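For reference, a minimal ClusterIP Service might look like the following sketch. The name, namespace, labels, and ports here are illustrative assumptions, not taken from the demo's actual manifest:

```yaml
# Hypothetical ClusterIP Service; adjust names, labels, and ports to your app.
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example
spec:
  type: ClusterIP        # the default type, reachable only from inside the cluster
  selector:
    app: hello-world     # must match the labels on the application pods
  ports:
    - port: 3000         # port the Service exposes
      targetPort: 3000   # port the container listens on
```

Because ClusterIP is internal-only, external traffic needs an extra hop, such as an Ingress, to reach it.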

What is a load balancer?

However, let's say we want to try things out quickly. We can use the type LoadBalancer. As a Service type, a LoadBalancer dynamically routes traffic to the pods across the different nodes, so we can reach the application running inside our Kubernetes cluster from the outside world through an external IP address. Be careful, though: this exposes an external IP address to your application without any additional rules. That's why, in most cases, you would instead use an Ingress, or a service mesh that builds on an Ingress, to access your application from the outside world.

So, the load balancer faces the outside world, while the Kubernetes cluster is our walled garden. And we want to keep it a walled garden, with our applications running securely inside it.

If we open it up to everybody, we may expose ourselves to vulnerabilities. So, we generally want to use something like an Ingress, or a service mesh with an Ingress, such as Istio or Linkerd. The service mesh usually uses an Ingress as well, and that's what we prefer.

In this case, however, we want to see how a load balancer works, because a LoadBalancer makes things like quick demos easy. Going back to our terminal, you can see that the Service running in our cluster is currently of type NodePort, not LoadBalancer. So, in our resource definition, we change the type from NodePort to LoadBalancer. Then we apply the update with the command kubectl apply -f manifests/service.yaml -n example, applying the Service YAML file to the example namespace, where all our resources are running.
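The change described above amounts to a one-word edit in the Service manifest. Here is a sketch of what manifests/service.yaml might look like after the edit; the resource name, labels, and ports are assumptions based on the demo, not the actual file:

```yaml
# Sketch of the Service after switching types; field values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical name; use the one from your manifest
  namespace: example
spec:
  type: LoadBalancer      # changed from NodePort
  selector:
    app: hello-world
  ports:
    - port: 3000
      targetPort: 3000
```

Applying this asks the cluster's load-balancer controller to provision an external IP for the Service.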

We will now verify the update using the kubectl get all -n example command. Checking back on our resources, the Service should now be of type LoadBalancer. Before, the EXTERNAL-IP column was empty; now an external IP address is defined. We can use this address together with port 3000 to access our application: traffic enters the cluster through this port and is routed to the pods across our different nodes. So, we access the application through the external IP of the load balancer we've just set up.

Accessing an application through load balancer

So, if I now open localhost:3000, nothing happens, because I'm no longer forwarding the port; the page just hangs and eventually fails. Instead, I access the application through the external IP. Remember, I'm not doing any port forwarding as discussed in previous videos; I'm simply specifying the external IP and the port my application runs on. And now I'm accessing it, and I can see my Hello World again. The load balancer dynamically routes traffic through the Service to my different pods, so I could even delete pods and it would still work.
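Assuming the Service has been assigned an external IP, the access step above can be sketched with the commands below. The Service name is hypothetical, and these commands require a live cluster with kubectl configured:

```shell
# Read the external IP assigned to the Service.
# K3s's service load balancer populates .status.loadBalancer.ingress[0].ip;
# some cloud providers report a hostname instead of an ip field.
EXTERNAL_IP=$(kubectl get svc example-service -n example \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Hit the application directly -- no port forwarding involved.
curl "http://${EXTERNAL_IP}:3000"
```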

So, let's delete one of those pods. These pods were spun up in addition to the others to serve my LoadBalancer. I can delete one with kubectl delete pod [POD_NAME] -n example. While it is being deleted and spun up again, I can still access my application, because the LoadBalancer type doesn't care which pod it connects to. It simply routes to any pod that is currently healthy and in a Running state.

So, as you can see, the pods are Running, so I can connect to them. Additionally, K3s's built-in service load balancer sets up a DaemonSet, whose pods serve my LoadBalancer on each node. As you can see, I can now access my application from the outside world, from anywhere. But this is not secure; it's not something I want to leave lying around. It's something I'd use for a quick demo. If I want to expose my application to the outside world properly, I will use an Ingress or a service mesh to provide additional configuration and do it securely.


You've successfully completed our course on Kubernetes services

We hope you enjoyed learning and encourage you to check out our other courses to further expand your knowledge of Kubernetes.