A beginner's guide to container network interfaces (CNI)

By Alex Jones

Description

Learn how container network interfaces (CNI) work in Kubernetes and the different technologies and layers involved in routing packets between nodes and pods.


Transcription

What is a Container Network Interface (CNI)?

Today we're going to talk about container network interfaces. The CNI project, hosted by the CNCF, brings together various network technologies and produces a set of plugins that can be swapped in and out of Kubernetes clusters through the kubelet to offer different features and use cases. A CNI is a requirement for inter-node networking, and there are many different options and flavors. Some offer VXLAN capabilities, dual-stack support, eBPF (extended Berkeley Packet Filter), logging, and so on. But essentially, they're all designed so that containers can talk to each other at the base layer. The reason CNI is popular, and why it was necessary, is that Kubernetes doesn't provide its own networking implementation; it just defines the model and leaves us to fill the gaps and build that model out.
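
To make that a little more concrete, here is a minimal Go sketch of how a container runtime might read one of the CNI network configurations dropped into /etc/cni/net.d. The embedded JSON is modelled on the conflist that Flannel's standard manifest installs, and the struct only covers the handful of fields used here, so treat it as an illustration of the plugin configuration rather than a complete implementation.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // Minimal view of a CNI network configuration list; only the fields
    // needed for this illustration are modelled.
    type NetConfList struct {
        CNIVersion string      `json:"cniVersion"`
        Name       string      `json:"name"`
        Plugins    []NetConfig `json:"plugins"`
    }

    type NetConfig struct {
        Type string `json:"type"`
    }

    // Roughly what /etc/cni/net.d/10-flannel.conflist looks like when
    // Flannel is installed from its standard manifest.
    const sampleConfList = `{
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {"type": "flannel", "delegate": {"hairpinMode": true, "isDefaultGateway": true}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        var list NetConfList
        if err := json.Unmarshal([]byte(sampleConfList), &list); err != nil {
            log.Fatal(err)
        }
        // The runtime executes each plugin binary in order (flannel, then
        // portmap), passing this configuration on stdin.
        fmt.Printf("network %q (CNI %s):\n", list.Name, list.CNIVersion)
        for _, p := range list.Plugins {
            fmt.Println("  plugin:", p.Type)
        }
    }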

What is Flannel and how does it work?

Flannel is a fairly simple CNI, but it illustrates the power of the model, and we can follow through how it creates inter-host networking. It's an encapsulated virtual network layer. In this example we have a VPC, which could be an AWS VPC or one from any other cloud provider, with a /19 of addressable IPs at the top level. We then have something called the Flannel pod network, which is a /16. Instantly you can see that this is a much larger address range than the /19 at the VPC level: a /16 gives you 65,536 addressable IPs. On top of that, Flannel also carves the pod network up at the host level, so each host machine gets its own /24. What's important is that this gives you three layers of addressable IP ranges, and it forms the backbone of how Flannel does its subnet-to-host mapping, which we'll talk about later.
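
As a rough sketch of those three layers, the Go snippet below works out how many addresses each prefix length holds and carves the /16 pod network into per-node /24 subnets. The concrete ranges are assumptions: 172.20.0.0/19 stands in for the VPC CIDR, and 100.96.0.0/16 matches the pod addresses used in this walkthrough.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // addresses returns how many IPv4 addresses a prefix of the given length holds.
    func addresses(bits int) int {
        return 1 << (32 - bits)
    }

    func main() {
        // Example ranges: the VPC CIDR is a placeholder; the pod network matches
        // the 100.96.x.x addresses used in this walkthrough.
        vpc := netip.MustParsePrefix("172.20.0.0/19")
        podNet := netip.MustParsePrefix("100.96.0.0/16")

        fmt.Printf("VPC %s holds %d addresses\n", vpc, addresses(vpc.Bits()))
        fmt.Printf("pod network %s holds %d addresses\n", podNet, addresses(podNet.Bits()))

        // Flannel hands each node its own /24 slice of the pod network.
        for i := 0; i < 3; i++ {
            nodeSubnet := netip.MustParsePrefix(fmt.Sprintf("100.96.%d.0/24", i))
            fmt.Printf("node %d gets %s (%d pod IPs)\n", i, nodeSubnet, addresses(24))
        }
    }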

There's a simple flat overlay network that gets created inside of your VPC on your cluster. We'll follow the journey of a packet sent from one virtual machine, or one Kubernetes node, to another in the cluster, and then see how it gets routed to the right pod. In this example, let's pick this pod here with the container IP 100.96.1.2, and say we want to send to a pod on this other host in the cluster. That host might have many other containers and pods operating and running on it, so we need a way to route the IP packets to the destination accurately. Let's think about how that will look. We have our source and destination here, but what does the journey look like? The first thing that happens, as we spoke about previously, is that the packet crosses the network namespace within the pod, goes across the veth pair and down to the Linux bridge (the Docker bridge), and that bridge then forwards the packet through to flannel0. Now, flannel0 is a TUN device simulating a network layer device, and it works by passing packets back and forth between userland and the kernel. This is the cornerstone of how Flannel works, because it lets the daemon intercept packets coming out of kernel space.
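
Flannel itself is written in Go. The stripped-down sketch below is not Flannel's actual code, but it shows what "passing packets back and forth between userland and the kernel" means in practice: a daemon opens /dev/net/tun, attaches an interface named flannel0, and then reads the raw IP packets that the kernel routes into that device. Running it requires root and a Linux host.

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Open the TUN clone device; packets routed to the interface we attach
        // below will appear as reads on this file descriptor.
        fd, err := unix.Open("/dev/net/tun", unix.O_RDWR, 0)
        if err != nil {
            log.Fatal(err)
        }

        // Attach a layer-3 (IFF_TUN) interface named flannel0, without the
        // extra packet-information header (IFF_NO_PI).
        ifr, err := unix.NewIfreq("flannel0")
        if err != nil {
            log.Fatal(err)
        }
        ifr.SetUint16(unix.IFF_TUN | unix.IFF_NO_PI)
        if err := unix.IoctlIfreq(fd, unix.TUNSETIFF, ifr); err != nil {
            log.Fatal(err)
        }

        // Every Read hands us one raw IP packet from the kernel; writing to fd
        // injects a packet back into the kernel's network stack.
        buf := make([]byte, 65535)
        for {
            n, err := unix.Read(fd, buf)
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("received %d-byte IP packet from the kernel", n)
            // A daemon like flanneld would now look up the destination subnet
            // and forward the packet over UDP to the owning node (see below).
        }
    }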

What effectively happens is that when flannel0 is set up, flanneld also sets up kernel routing table rules. This means packets are dropped through flannel0 when they are identified as part of the wider /16 overlay network: they go into the kernel, and the kernel attempts to route them, but ultimately pushes them back up through the flannel0 TUN device, where they are handed to the flanneld daemon process instead. This daemon process is where all of the logic happens. It can query etcd for the subnet lease that covers the target pod IP, and that subnet can be resolved to the underlying VM's VPC-level address.
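
At its heart, that logic is just a mapping from pod subnet to node address. The Go sketch below illustrates the lookup flanneld effectively performs; in reality the mapping lives in etcd as subnet leases, so the in-memory table and the node IPs here are purely illustrative.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // subnetLease pairs a per-node pod subnet with the VPC address of the node
    // that owns it. flanneld keeps this mapping in etcd; this in-memory table
    // is only an illustration with made-up node IPs.
    type subnetLease struct {
        podSubnet netip.Prefix
        nodeIP    netip.Addr
    }

    var leases = []subnetLease{
        {netip.MustParsePrefix("100.96.1.0/24"), netip.MustParseAddr("172.20.33.102")},
        {netip.MustParsePrefix("100.96.2.0/24"), netip.MustParseAddr("172.20.54.98")},
        {netip.MustParsePrefix("100.96.3.0/24"), netip.MustParseAddr("172.20.61.11")},
    }

    // lookupNode resolves a destination pod IP to the node that hosts it.
    func lookupNode(dst netip.Addr) (netip.Addr, bool) {
        for _, l := range leases {
            if l.podSubnet.Contains(dst) {
                return l.nodeIP, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        dst := netip.MustParseAddr("100.96.2.3") // pod IP taken from the packet header
        if node, ok := lookupNode(dst); ok {
            fmt.Printf("pod %s lives in a subnet owned by node %s\n", dst, node)
        }
    }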

It then uses a UDP send to the node in question, which is intercepted by the flannel daemon running there. That daemon performs the inverse operation, writing the packet back down through its flannel0 tunnel into the kernel, where this time the routing table pushes it towards the local Docker bridge. From there it moves over the bridge into the veth pair, across the network namespace, and up into the container. That illustrates how Flannel solves quite an intricate problem in a very simple way. It gives us a ton of addressable IPs and lets us plug this in and not think about it too much: once it's configured, it just works. The thing to say about Flannel in terms of performance is that there is a penalty for each transition between user space and kernel space, which other CNIs get around with slightly different implementations. And this is just one of a variety of approaches to IPAM and to sending IP packets across the network.
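
Putting the last two sketches together, the forwarding step amounts to wrapping the raw IP packet read from flannel0 in a UDP datagram addressed to the owning node. The snippet below is again an illustration of the idea rather than Flannel's real code; the port is an assumption based on the UDP backend's documented default of 8285.

    package main

    import (
        "log"
        "net"
    )

    // forward sends one raw IP packet, read from the flannel0 TUN device, to the
    // flanneld daemon on the node that owns the destination pod's subnet.
    // Port 8285 is Flannel's documented UDP backend default; treat it as an
    // assumption for this sketch.
    func forward(packet []byte, nodeIP string) error {
        conn, err := net.Dial("udp", net.JoinHostPort(nodeIP, "8285"))
        if err != nil {
            return err
        }
        defer conn.Close()

        // The remote flanneld reads this datagram and writes the inner IP packet
        // back into its own flannel0 device, where the kernel routes it to the
        // local bridge and on to the destination pod.
        _, err = conn.Write(packet)
        return err
    }

    func main() {
        // A placeholder payload standing in for a real IP packet.
        packet := []byte{0x45, 0x00, 0x00, 0x14}
        if err := forward(packet, "172.20.54.98"); err != nil {
            log.Fatal(err)
        }
    }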

I hope this has been useful as a high-level overview of what a CNI is, how you can implement one, the technologies behind the principles, and the layers they work at to get your packets where they need to go.

Thank you very much.


You've successfully completed our course on Kubernetes networking

We hope you enjoyed learning and encourage you to check out our other courses to further expand your knowledge of Kubernetes.