Why are network policies in Kubernetes so hard to understand?
We often hear that network policies are too complex to understand. In this tutorial, we will take a look at how network policies work by exploring concrete examples.
Written by Saiyam Pathak, Head of Developer Relations @ vCluster
In Kubernetes, the concept of network policies allows you to control the traffic flow within a cluster. Essentially, by creating policies, you determine which pods can access others, streamlining the process of restricting traffic between different applications within the cluster.
Within a Kubernetes environment, you will often run many microservices across different namespaces. These applications run as pods, which in turn run containers. These containers are your applications, and by default they are capable of communicating with every other pod, either directly or through services. However, this open communication model isn't always secure. Fortunately, Kubernetes offers the concept of a network policy, implemented by various network plugins, to provide out-of-the-box functionality for controlling this traffic securely.
The community often voices that network policies are complex, but by exploring concrete examples, we can gain a clearer understanding of how they work in action.

Prerequisites
To follow along with this tutorial, you need to ensure you have the following in place:
Creating a Kubernetes cluster with Cilium
To begin with, let’s create a Civo Kubernetes cluster with Cilium as the CNI. You can create the cluster from the UI or the CLI.
For the purpose of this tutorial, we will be using Civo Kubernetes, but you can go with any Kubernetes cluster and CNI where network policies will work.
Interacting with the cluster
Once you have the cluster created, you can export the KUBECONFIG variable in your terminal and point it to the downloaded kubeconfig file for the cluster. From here, you should be able to interact with the cluster:
```
kubectl get nodes
NAME                                                   STATUS   ROLES    AGE   VERSION
k3s-networkpolicies-7aed-fb151a-node-pool-71b0-krmn8   Ready    <none>   71s   v1.28.2+k3s1
k3s-networkpolicies-7aed-fb151a-node-pool-71b0-jl4q3   Ready    <none>   69s   v1.28.2+k3s1
k3s-networkpolicies-7aed-fb151a-node-pool-71b0-6ko85   Ready    <none>   67s   v1.28.2+k3s1
```
Create two namespaces, `dev1` and `dev2`:
```
kubectl create ns dev1
namespace/dev1 created
kubectl create ns dev2
namespace/dev2 created
```
Create a pod `demo1` and a pod `demo2` in their respective namespaces using the NGINX image:
```
kubectl run demo1 --image=nginx -n dev1
pod/demo1 created
kubectl run demo2 --image=nginx -n dev2
pod/demo2 created
kubectl get pods -owide -n dev1
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                                                   NOMINATED NODE   READINESS GATES
demo1   1/1     Running   0          97s   10.0.1.147   k3s-networkpolicies-7aed-fb151a-node-pool-71b0-6ko85   <none>           <none>
kubectl get pods -owide -n dev2
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                                                   NOMINATED NODE   READINESS GATES
demo2   1/1     Running   0          94s   10.0.1.185   k3s-networkpolicies-7aed-fb151a-node-pool-71b0-6ko85   <none>           <none>
```
Testing the connectivity
Let’s now test connectivity from one pod to another:
```
kubectl exec demo1 -n dev1 -- curl 10.0.1.185
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100   615  100   615    0     0   405k      0 --:--:-- --:--:-- --:--:--  600k
```
In this code, we exec into pod `demo1` in the `dev1` namespace and curl the IP of pod `demo2` in the `dev2` namespace. The successful response shows that any pod can communicate with any other pod in any namespace.
Now, how can we fix this? You are right! Using NetworkPolicy.
Creating a network policy
Let’s create a network policy in the dev2 namespace so that no traffic can reach the pods in the dev2 namespace.
```
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: dev2
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
networkpolicy.networking.k8s.io/deny-all created
```
Above is a manifest that applies to all the pods in the dev2 namespace. The purpose of this policy is to restrict all incoming traffic to the pods within the dev2 namespace. Here's a breakdown of how it works:
- `apiVersion: networking.k8s.io/v1`: Specifies the API version for the network policy resource.
- `kind: NetworkPolicy`: Specifies the kind of Kubernetes resource you're defining, which in this case is a NetworkPolicy.
- `metadata`: Contains metadata about the network policy, including its name (`deny-all`) and the namespace (`dev2`) it is applied to.
- `spec`: Defines the specifications of the network policy.
- `podSelector`: This is set to an empty object (`{}`), which means the policy applies to all pods within the specified namespace (`dev2` in this case). You could specify label selectors here if you wanted to target specific pods.
- `policyTypes`: Specifies the types of policies. In this case, it includes `Ingress`, which means the policy will apply to incoming traffic to the pods. By not specifying `Egress` in the policy types, this policy does not restrict egress (outgoing) traffic from the pods.
- `ingress`: Since no rules are defined under the ingress section (which is implicitly understood from the lack of an `ingress` field under `spec`), no inbound connections are allowed to any pods in the `dev2` namespace. You would define rules here if you wanted to allow specific types of ingress traffic.
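As a hypothetical variant (not applied in this tutorial), the same pattern could also lock down outgoing traffic by adding `Egress` to `policyTypes`. The policy name below is an illustrative assumption:

```yaml
# Hypothetical variant: deny all ingress AND egress for every pod in dev2.
# With Egress listed but no egress rules defined, outgoing traffic is blocked too.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-both
  namespace: dev2
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

Keep in mind that blocking egress this way also blocks DNS lookups from those pods unless you add an explicit egress rule for them.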
The following outcome should appear:
```
kubectl exec demo1 -n dev1 -- curl --connect-timeout 5 10.0.1.185
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Failed to connect to 10.0.1.185 port 80 after 5001 ms: Timeout was reached
command terminated with exit code 28
```
You can see that, after applying this policy, no traffic can reach the pods in the dev2 namespace.

Allowing incoming TCP traffic
Next, let’s cover a couple more scenarios to understand the concept more clearly. We will create a network policy that allows traffic from pods in the dev1 namespace to the dev2 namespace over port 80.
```
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kube-demo
  namespace: dev2
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: dev1
    ports:
    - protocol: TCP
      port: 80
EOF
networkpolicy.networking.k8s.io/kube-demo created
```
This network policy allows pods in the dev2 namespace to receive incoming TCP traffic on port 80 from any pod in the dev1 namespace. No other ingress traffic is permitted by this policy, effectively isolating the pods in dev2 from unwanted or unsolicited incoming traffic from pods in other namespaces, except for the allowed traffic from dev1.
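If you wanted to narrow this further to a single pod rather than the whole dev1 namespace, a `namespaceSelector` can be combined with a `podSelector` in the same `from` entry (no leading dash on `podSelector`, so both conditions must match). This is a sketch, not applied in the tutorial; it relies on the `run=demo1` label that `kubectl run` sets on the pod:

```yaml
# Hypothetical refinement: allow port 80 only from the demo1 pod in dev1.
# namespaceSelector AND podSelector in one entry means both must match.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kube-demo-pod-scoped
  namespace: dev2
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: dev1
      podSelector:
        matchLabels:
          run: demo1
    ports:
    - protocol: TCP
      port: 80
```

Note that writing `podSelector` as a separate list item (with its own dash) would instead mean "dev1 namespace OR pods labeled run=demo1 in dev2", a common source of confusion.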
Network policy example
Interestingly, network policies are additive: if the earlier deny-all policy and this policy are both in place, their rules are combined, and the result allows traffic on port 80 from pods in the dev1 namespace to pods in the dev2 namespace.
You can check the output below:
```
kubectl exec demo1 -n dev1 -- curl --connect-timeout 5 10.0.1.185
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0   267k      0 --:--:-- --:--:-- --:--:--  300k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
To check that traffic is not allowed from any other namespace or on any other port, let's create a pod in the dev2 namespace listening on a port other than 80, as well as a pod in the default namespace:
```
kubectl run default --image=nginx
pod/default created
kubectl get pods -owide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE                                                   NOMINATED NODE   READINESS GATES
default   1/1     Running   0          26s   10.0.2.237   k3s-networkpolicies-7aed-fb151a-node-pool-71b0-jl4q3   <none>           <none>
kubectl exec default -- curl --connect-timeout 5 10.0.1.185
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Failed to connect to 10.0.1.185 port 80 after 5000 ms: Timeout was reached
command terminated with exit code 28
```
Now, let's create a pod and a service listening on port 8080:
```
kubectl run http-echo --image=hashicorp/http-echo -n dev2 -- -listen=:8080 -text="Hello from http-echo"
pod/http-echo created
kubectl get pods -n dev2 -owide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE                                                   NOMINATED NODE   READINESS GATES
demo2       1/1     Running   0          80m   10.0.1.185   k3s-networkpolicies-7aed-fb151a-node-pool-71b0-6ko85   <none>           <none>
http-echo   1/1     Running   0          26s   10.0.1.115   k3s-networkpolicies-7aed-fb151a-node-pool-71b0-6ko85   <none>           <none>
```
Creating a service
Create the service using the following:
```
kubectl expose pod http-echo -n dev2 --port=8080
service/http-echo exposed
kubectl get svc -n dev2
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
http-echo   ClusterIP   10.98.167.221   <none>        8080/TCP   28s
```
```
kubectl exec demo1 -n dev1 -- curl --connect-timeout 5 10.98.167.221
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Failed to connect to 10.98.167.221 port 80 after 5000 ms: Timeout was reached
command terminated with exit code 28
```
The kube-demo NetworkPolicy applies to all pods in the dev2 namespace (since `podSelector` is empty) and allows them to receive incoming TCP traffic on port 80 only from pods within the dev1 namespace (identified by the label `kubernetes.io/metadata.name: dev1`). All other incoming traffic, whether from different namespaces or on different ports, is denied by default; this is the standard behavior of Kubernetes network policies once they select a set of pods. We proved this with a pod in the default namespace, and again by running a pod on port 8080, where the connection attempt also failed.
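If you did want the http-echo pod to be reachable from dev1, one sketch (not applied in this tutorial; the policy name is an illustrative assumption) would be an additional policy allowing TCP 8080 from dev1, which combines additively with the existing ones:

```yaml
# Hypothetical extension: also allow TCP 8080 from the dev1 namespace,
# making the http-echo pod reachable from demo1.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kube-demo-8080
  namespace: dev2
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: dev1
    ports:
    - protocol: TCP
      port: 8080
```

When testing, remember to target the port explicitly (for example, `curl 10.98.167.221:8080`), since curl defaults to port 80.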
Summary
Having worked through this tutorial, you should now have a better understanding of how to apply network policies within your Kubernetes cluster to limit ingress/egress traffic for pods. Another interesting way to learn more about this topic is by using this tool, which allows you to create network policies for Kubernetes and gain a better understanding of the concept.

If you want other resources to keep learning more about this topic, I recommend checking out the following:

Saiyam Pathak is Head of Developer Relations at vCluster and a prominent advocate in the cloud-native and Kubernetes community. He is also the founder of Kubesimplify, a platform dedicated to simplifying Kubernetes and cloud-native technologies through educational content.
Saiyam has previously worked at organizations including Civo, Walmart Labs, Oracle, and HP, gaining experience across machine learning platforms, multi-cloud infrastructure, and managed Kubernetes services. He actively contributes to the community through technical content, meetups, and open-source initiatives.
Further Reading
- Exploring network policies with Cilium (3 April 2024)
- Networking 101 - IP addressing (12 January 2018)