In the second part of our “Kubernetes interview questions” series, we have outlined ten questions to help those who want to take their Kubernetes knowledge to the next level. Read on to learn more about the difference between Kubernetes and Docker Swarm, as well as how an organization can keep costs low using Kubernetes.

If you missed part one, check it out here.

What is the difference between Kubernetes and Docker Swarm?

Whilst both Kubernetes and Docker Swarm are container orchestrators, they differ from each other in various ways, including:

  • Containers: Kubernetes lets you run containers on clusters of physical and virtual machines, whereas Docker Swarm is limited to running Docker containers.

  • Production: Kubernetes is built to run production workloads across a multi-node cluster, whereas Docker Swarm is commonly run on a single node, serving containers from a single host in production.

  • Runtime: Kubernetes requires a separate container runtime, such as containerd or CRI-O, to run and orchestrate containers. Docker Swarm, by contrast, needs no additional runtime to be installed because it is built into the Docker Engine.

  • Scaling: Kubernetes supports autoscaling, whilst Docker Swarm supports manual scaling.

  • Monitoring: Kubernetes ships with components for monitoring, which help you manage your cluster by tracking application uptime, the utilization of cluster resources, and the interactions between cluster components. Docker Swarm, on the other hand, has no built-in monitoring, so you need third-party tools to monitor your cluster.

  • Load Balancer: Kubernetes requires you to set up load balancing manually, whereas Docker Swarm has an automated Load Balancer that spreads traffic and requests across the containers and hosts of the cluster.

  • CLI: Kubernetes needs a separate CLI (Command Line Interface) tool, such as kubectl, to operate, whereas Docker Swarm is driven from the Docker CLI itself.

Learn more about the comparison between Kubernetes and Docker from this blog post here.

What is the difference between K8s and K3s?

K3s is a lightweight, CNCF-certified Kubernetes distribution that differs from upstream K8s in several ways, including:

  • K3s supports the automatic deployment of manifests: YAML files dropped into the /var/lib/rancher/k3s/server/manifests directory are automatically picked up and applied to the cluster.

  • K3s has a custom resource for Helm chart installations: Helm charts can be installed without needing the Helm client on your CI system or local machine (see the example manifest after this list).

  • K3s is small in size and requires fewer resources: K3s can be installed as a single node, or as a control plane with worker nodes, which makes it a great fit for edge computing. Because all of its dependencies are bundled into a single binary, K3s can launch clusters within seconds and performs well even on constrained hardware.

  • K3s replaces etcd with kine for database storage: kine translates etcd API calls into SQL, allowing K3s to store its state in SQLite by default, or in proper databases like MySQL and PostgreSQL, which can scale really well.
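
As a concrete illustration of the Helm integration, here is a minimal sketch of a K3s HelmChart custom resource; the chart name, repository URL, and namespaces are illustrative assumptions:

```yaml
# A HelmChart custom resource handled by K3s' bundled Helm controller.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: example-nginx
  namespace: kube-system                     # where K3s watches for HelmChart resources
spec:
  repo: https://charts.bitnami.com/bitnami   # assumed chart repository
  chart: nginx                               # assumed chart name within the repo
  targetNamespace: default                   # where the release is installed
```

Dropping a file like this into the auto-deploy manifests directory on the server node installs the chart, with no Helm client involved.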

If you are interested in learning more about how K3s differs from K8s, check out our “K3s vs K8s” article.

What are the key disadvantages of Kubernetes?

Despite the importance of Kubernetes, it has several disadvantages, including:

  • Complexity: Kubernetes can overcomplicate local development environments and simple applications, which can reduce productivity and slow an organization’s delivery timeline.

  • Debugging and troubleshooting: Extensive training and experience are required to navigate debugging and troubleshooting on Kubernetes.

  • The Kubernetes landscape: The ecosystem evolves rapidly, with a constant stream of new tools and components, which can make the transition to Kubernetes slow, complicated, and difficult to manage.

How can an organization keep costs low using Kubernetes?

Keeping costs low is one of the most important factors an organization considers when migrating to new technologies. The following points will help you understand how to keep your budget under control while using Kubernetes:

  • Monitoring of the cluster environment: Monitoring your clusters gives you a better understanding of the resources being used and their associated costs, presenting a clearer picture of compute, storage, and network utilization and how costs are distributed among them.

  • Scaling your resources: You can scale your application up or down based on your needs or user demand, ensuring you only pay for the resources you use at any given time.

  • Scheduling of the pods and nodes: Using node selectors, affinities, taints, and tolerations helps the Kube-scheduler place containers on your clusters more efficiently, reducing wasted capacity and unnecessary cost (see the example manifest after this list).

  • Implementing cloud-native best practices: Establishing best practices across delivery pipelines can be complex and time-consuming, but it will help you create robust and flexible pipelines. Practices such as DevOps, GitOps, etc., are widely used in the industry. Learn more about these practices from our blog on cloud-native here.
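
To make the scheduling point concrete, here is a minimal sketch of a pod steered onto cheaper capacity; the label, taint, and resource figures are illustrative assumptions, not required values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  nodeSelector:
    node-type: spot            # assumed label: steer the pod onto cheaper spot nodes
  tolerations:
    - key: "spot"              # assumed taint protecting the spot node pool
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo working; sleep 3600"]
      resources:
        requests:              # right-sized requests reduce over-provisioning
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```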

Discover more about keeping your costs low while using Kubernetes with our white paper that explores the complexities of cloud costs for businesses and alternatives beyond the hyperscalers.

What is the job of the Kube-scheduler?

The Kube-scheduler decides which node each pod will run on by working through two phases to identify the best fit. In the first phase, the scheduler filters out the nodes that do not fit the pod’s profile, such as nodes with insufficient free CPU or memory. In the second phase, the scheduler ranks the remaining nodes, using priority functions to assign each node a score based on, for example, the amount of free resources it has left. The pod is then assigned to the node with the highest score.
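
The sketch below shows how a single pod spec feeds both phases; the resource figures and the disktype label are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:            # phase one: nodes without this much free
          cpu: "500m"        # CPU and memory are filtered out
          memory: "256Mi"
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80         # phase two: nodes matching this preference
          preference:        # score higher, but others stay eligible
            matchExpressions:
              - key: disktype          # assumed node label
                operator: In
                values: ["ssd"]
```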

How do you monitor Kubernetes clusters?

Monitoring is essential in Kubernetes, as it helps you examine your application’s performance. Through monitoring, you can inspect pods, containers, services, and cluster characteristics, which helps you remove bottlenecks and improve overall application performance. You can collect monitoring statistics with resource metrics pipelines and full metrics pipelines.

  • Resource metrics pipelines provide a limited set of metrics consumed by cluster components such as the Horizontal Pod Autoscaler controller and the kubectl top utility. The Horizontal Pod Autoscaler automatically updates a workload resource to scale it in line with demand (see the example manifest after this list). You can view pod and node metrics using the kubectl top command; these measurements are gathered by the in-memory metrics-server and made available via the metrics.k8s.io API.

  • Full metrics pipelines give you access to richer, more curated metrics for monitoring a Kubernetes cluster. Kubernetes can respond to these metrics, for example by autoscaling a workload based on the cluster’s current state through mechanisms like the Horizontal Pod Autoscaler. The monitoring pipeline fetches metrics from the kubelet and exposes them to Kubernetes by implementing either the custom.metrics.k8s.io or external.metrics.k8s.io API.
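
As a sketch of the resource metrics pipeline in action, here is a minimal HorizontalPodAutoscaler; the target Deployment name and the thresholds are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                        # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60     # add replicas when average CPU passes 60%
```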

Nate Matherson, Co-Founder & CEO of ContainIQ, put together a tutorial outlining Kubernetes monitoring best practices, the importance of alerting, and solutions. To get the most out of your Kubernetes clusters, visit the tutorial here.

What is the role of the Load Balancer in Kubernetes?

In Kubernetes, the Load Balancer performs the following tasks:

  • Distributes network load and service requests among multiple instances.

  • Improves scalability by enabling Kubernetes to respond automatically to changes in incoming traffic.

  • Sends workloads to healthy pods, ensuring the high availability of your application.

A Load Balancer is a service that acts as a traffic controller, directing client requests to the nodes that can serve them quickly and effectively. If one host goes down and becomes unresponsive, the Load Balancer redistributes its workloads among the other nodes, ensuring the application’s availability. If a new node joins a cluster, the Load Balancer will automatically start sending requests to the pods running on it.
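
In practice, the simplest way to get one on a cloud provider is a Service of type LoadBalancer. A minimal sketch follows; the app label and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer      # asks the cloud provider to provision a load balancer
  selector:
    app: web              # assumed label; traffic is spread across matching healthy pods
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 8080    # port the pods listen on
```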

Our team has created a range of tutorials on Load Balancers to help you discover how to use them and how to create them. Find more information here.

Why should you use namespaces?

In a Kubernetes cluster, you can organize resources in namespaces (similar to a virtual cluster inside a cluster). There are several reasons for using namespaces, including:

  • Grouping resources: Suppose you have a complex application and only the default namespace. It will fill up with the application’s different components, making it difficult to keep an overview. Grouping resources into multiple namespaces separates the components and gives you a better overview of what is running where.

  • Sharing resources inside a cluster: If you want to host both staging and development environments in the same cluster, you can group their components into different namespaces while sharing the same underlying cluster resources. The same applies if you want to host two production versions in one cluster: create a separate namespace for each version and share the same resources.

  • Limit resources that each namespace consumes: You can define resource quotas per namespace, limiting the amount of resources it can use. This prevents one team or workload from consuming so many resources that others face shortages (see the ResourceQuota example after this list).

  • Avoiding conflicts between multiple teams: If two teams use the same cluster, one team's deployments can easily override another if using the same namespace. One team can have an application deployed with a configuration, but if the other team decides to make a deployment with the same name (but a different configuration), it will overwrite the deployment created by the first team. Each team can use their own namespaces to avoid these conflicts and work seamlessly.

  • Limit access to namespaces across multiple teams: If you are working with multiple teams, you have the option to limit a team’s access to their namespace only, which will allow them to create, update, and delete resources in that namespace. This will help them work in their own isolated environment and minimize the risk of accidentally interfering with other teams’ work.
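
As a sketch of the quota idea, the manifests below create a team namespace and cap what it can consume; the names and limits are illustrative assumptions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi    # total memory the namespace may request
    pods: "20"              # maximum number of pods in the namespace
```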

Saiyam Pathak explores how to create a Kubernetes namespace, change the namespace context, and create objects with identical names in different namespaces in our Civo Academy course, Kubernetes concepts.

What are some examples of security measures for Kubernetes?

Security best practices are needed for keeping Kubernetes workloads safe in a production environment. We will have a look at some of them:

  • Enabling Role-Based Access Control or RBAC for Kubernetes: RBAC lets you control who can access the Kubernetes API and what permissions they have. It is enabled by default from Kubernetes v1.6 onwards (see the RBAC example after this list).

  • Integrating Kubernetes with a third-party authentication provider: With this integration, you get features such as multi-factor authentication, and adding or removing users does not require changes to the Kube-apiserver.

  • Isolating Kubernetes nodes from public networks: Kubernetes nodes should sit on a separate network, isolated from the general corporate network, by separating the Kubernetes control and data traffic. Configure the nodes with an ingress controller and, through the network access control list, only allow connections from the control plane node on the specified port.

  • Monitoring network traffic: Observing the active network traffic and comparing it against the traffic allowed by your Kubernetes network policies helps you understand how your application interacts and identify anomalous communications.
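
As a sketch of RBAC in practice, the manifests below grant one user read-only access to pods in a single namespace; the namespace, role name, and user are illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]                    # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                         # assumed user from your auth provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```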

Saiyam Pathak covered more about Kubernetes security in our Civo Academy course, Kubernetes configuration and security. Here you can learn more about securing your Kubernetes cluster with guides and demos on ConfigMaps, Secrets, access control (RBAC, authentication), and more.

Start learning more

You can learn even more about Kubernetes through Civo Academy. Or, how about putting your knowledge to the test? Sign up to Civo today to launch your first cluster in under 90 seconds!