Kubernetes improves the developer experience by abstracting away as much of the infrastructure layer as possible. It automates the creation and management of your containerized applications by staying programmatically aware of your infrastructure resources and their usage, and it uses that knowledge to schedule and run your workloads. To begin with, I will run through some key terms that will be used throughout this tutorial.

When you apply a manifest to your cluster, a controller creates Pods, the smallest executable units of your application. These Pods contain your application, its configuration, resource quotas, and so on, as declared in your manifest. The Scheduler then assigns each Pod to the Node best equipped to run the workload.
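
As a concrete illustration, here is a minimal manifest of the kind described above; the name, image, and resource request values are illustrative rather than part of this tutorial's setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25
        resources:
          requests:
            cpu: 100m
            memory: 64Mi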

Kubernetes Nodes are the virtual or physical machines that house your Kubernetes components and run your workloads. They come in two types: Control Plane Nodes and Worker Nodes.

Node Components are present on every node in your cluster and differ depending on whether it's a Control Plane Node or a Worker Node.

In this tutorial, I will show you how to use Node Exporter to monitor the nodes of your Kubernetes cluster. Along the way, we will get an overview of Prometheus Exporters and Collectors and how they help implement monitoring.

Kubernetes Node Monitoring

Prometheus is an open-source monitoring and alerting toolkit that collects and stores metrics as time-series data. It has a multidimensional data model that uses key/value pairs to identify data, a fast and efficient query language (PromQL), and built-in service discovery, and it does not rely on distributed storage. Most Kubernetes clusters expose cluster-level metrics via the metrics API, which your monitoring server can scrape regularly.
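
For instance, assuming the cluster has the metrics-server add-on installed, we can read this API directly from the command line:

kubectl top nodes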

However, system-level metrics from the machines that make up your cluster, such as CPU, disk, memory, network, and process statistics, as well as container-level metrics, are essential for keeping track of the overall health of your infrastructure and applications, and they are not exposed by the metrics server.

For systems that expose their metrics in other formats, such as the operating systems we would like to monitor, Prometheus Exporters are used to collect those metrics.

Prometheus exporters help us monitor systems we are unable to instrument directly. They fetch non-Prometheus metrics, statistics, and other data, convert them into the Prometheus metric format, and start a server that exposes these metrics at a /metrics endpoint.
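
To make this concrete: once an exporter such as Node Exporter (installed later in this tutorial) is running, scraping it is a plain HTTP request; assuming it listens on its default port 9100:

curl http://localhost:9100/metrics

The response is plain-text samples in the Prometheus exposition format, for example:

# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.21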

Exporters can include:

  • Exporters for HTTP such as Webdriver exporter, Apache exporter, HAProxy exporter, etc.

  • Exporters for messaging systems such as Kafka exporter, RabbitMQ exporter, Beanstalkd exporter, etc.

  • Exporters for Databases such as MySQL server exporter, Oracle database exporter, Redis exporter, etc.

Node Exporter for Kubernetes Node Metrics

Prometheus Node Exporter is a Prometheus exporter for hardware and OS metrics. It is equipped with collectors exposing various system-level metrics, which can then be scraped by the Prometheus server for your monitoring needs.

Node Exporter performs its operations with the help of Collectors.

Each collector gathers a set of related metrics, and together they do the job of the exporter. Some of the collectors included in Node Exporter are:

  • CPU: Exposes CPU statistics.
  • Diskstats: Exposes disk I/O statistics.
  • Filesystem: Exposes filesystem statistics, such as disk space used.
  • Loadavg: Exposes load average.
  • Netstat: Exposes network statistics from /proc/net/netstat.
  • Thermal_zone: Exposes thermal zone & cooling device statistics from /sys/class/thermal.

Node Exporter supports dozens of collectors, and you can write custom collectors to extend its functionality.
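
Individual collectors can be enabled or disabled at startup with the --collector.<name> and --no-collector.<name> flags, and the textfile collector can pick up custom metrics from a directory (the path below is illustrative):

node_exporter --no-collector.netstat --collector.textfile.directory=/var/lib/node_exporter/textfile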

Monitoring Kubernetes Nodes with Prometheus and Grafana

For our cluster, we will use Civo’s managed Kubernetes service. Civo’s cloud-native infrastructure services are powered by Kubernetes and use the lightweight Kubernetes distribution K3s for superfast launch times.

Prerequisites

To get started, we will need the following:

  • A Civo account
  • The Civo command line tool (civo) installed
  • kubectl installed
  • Helm installed

After setting up the Civo command line with our API key using the instructions in the repository, we can create our cluster using the following command:

civo kubernetes create civo-cluster

Our cluster civo-cluster is created.

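Before kubectl can talk to the new cluster, we need its kubeconfig. With the Civo CLI, we can download it and merge it into our local configuration:

civo kubernetes config civo-cluster --save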

Now that our cluster is running, we will create a namespace to hold all our monitoring resources.

We can create a namespace using the following command:

kubectl create ns monitoring 

With a namespace to hold our monitoring resources, we will now deploy the Prometheus operator to our cluster.

The Prometheus Operator facilitates the deployment and management of Prometheus and related monitoring components, which it defines as Kubernetes Custom Resources. These components include:

  • Prometheus
  • Alertmanager
  • ServiceMonitor
  • PodMonitor
  • Probe
  • PrometheusRule

We will deploy the Prometheus Operator using the following command (note that we use kubectl create rather than kubectl apply here, because the operator's bundled CRDs are too large for the annotation that client-side apply adds):

kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml -n monitoring

The Prometheus Operator will deploy and manage our instances of Prometheus. We will define our Prometheus deployment declaratively using the following code, which we will save as prometheus.yaml:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  podMonitorSelector: {}
  resources:
    requests:
      memory: 400Mi

So that our Prometheus deployment can operate freely within our cluster, we need to give it some permissions using a service account and RBAC roles.

We can define our permissions using the following code:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
- nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring

Assuming we save the manifest above as serviceaccount.yaml, we can create the service account, cluster role, and role binding using the following command:

kubectl apply -f serviceaccount.yaml
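
With the permissions in place, we can now apply the Prometheus definition we saved earlier as prometheus.yaml:

kubectl apply -f prometheus.yaml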

Now we can view our Prometheus User Interface by exposing our deployment using the kubectl port-forward command:

kubectl port-forward {POD_NAME} -n monitoring 9090:9090
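
In the command above, {POD_NAME} is the Prometheus server pod created by the operator; with the defaults used here, it will look something like prometheus-prometheus-0. We can look it up by listing the pods in the monitoring namespace:

kubectl get pods -n monitoring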

We can reach our Prometheus UI at the following URL: http://localhost:9090/


Install Node Exporter in Kubernetes

Node Exporter needs to run on each node in the Kubernetes cluster, so we can install it as a DaemonSet.

A Kubernetes DaemonSet deploys our application in a way that ensures a copy of it is running on every node in our cluster.

We can deploy the Node Exporter, along with its Service and ServiceMonitor, using the following code, saved as nodeexporter.yaml:

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
      labels:
        app: node-exporter
    spec:
      containers:
      - args:
        - --web.listen-address=0.0.0.0:9100
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        image: quay.io/prometheus/node-exporter:v0.18.1
        imagePullPolicy: IfNotPresent
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 30Mi
        volumeMounts:
        - mountPath: /host/proc
          name: proc
          readOnly: true
        - mountPath: /host/sys
          name: sys
          readOnly: true
      hostNetwork: true
      hostPID: true
      restartPolicy: Always
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /sys
          type: ""
        name: sys
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  ports:
  - name: node-exporter
    port: 9100
    protocol: TCP
    targetPort: 9100
  selector:
    app: node-exporter
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: node-exporter
    serviceMonitorSelector: prometheus
  name: node-exporter
  namespace: monitoring
spec:
  endpoints:
  - honorLabels: true
    interval: 30s
    path: /metrics
    targetPort: 9100
  jobLabel: node-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: node-exporter

Next, we will create the resources in our cluster by running the following command:

kubectl apply -f nodeexporter.yaml
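
We can verify that the DaemonSet has placed one Node Exporter pod on each node by listing the pods with the label we declared above:

kubectl get pods -n monitoring -l app=node-exporter -o wide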

We can also install Node Exporter using Helm. The Prometheus community maintains a Helm chart for it that is kept up to date. We can use the following commands to install Node Exporter from the chart.

First, add the charts to our repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Next, we will update our repository:

helm repo update

Finally, we can install the chart:

helm install [RELEASE_NAME] prometheus-community/prometheus-node-exporter

Service Discovery with Prometheus

Service Discovery is Prometheus’ way of finding our desired endpoints to scrape. Although Prometheus and Node Exporter have been installed in our cluster, they have no way of communicating.

The Node Exporter is collecting metrics from our operating system, but our Prometheus server isn't pulling metrics from the Node Exporter yet.

The ServiceMonitor object is a Prometheus Operator Custom Resource Definition that enables us to configure our scrape targets.

We can declaratively tell Prometheus which applications and namespaces we want to collect metrics from, and we can also configure the scrape frequency, endpoints, and ports.

The following code tells Prometheus to scrape our /metrics endpoint, where our Node Exporter publishes its metrics.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  namespaceSelector:
    any: true
  endpoints:
  - path: /metrics
    port: node-exporter

Now that Prometheus has been configured to scrape our Node Exporter metrics, we can view them in our User Interface.
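
As a quick sanity check, we can run a PromQL query in the UI's expression browser; for example, this computes the per-core CPU usage rate from Node Exporter's cpu collector:

rate(node_cpu_seconds_total{mode!="idle"}[5m])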


Visualizing Node Metrics With Grafana

Grafana is an open-source interactive data visualization platform that helps visualize metrics, logs, and traces collected from your applications. Grafana lets us ingest data from a huge number of data sources, Prometheus being one of the most prominent, and build interactive dashboards on top of them.

We can install Grafana with helm.

First, we will add the repository using the following command:

helm repo add grafana https://grafana.github.io/helm-charts

Next, we can install the chart with the following command:

helm install grafana grafana/grafana

Finally, to be able to access the Grafana User Interface, we expose our deployment using the following command, replacing the pod name with that of your own Grafana pod:

kubectl port-forward grafana-5874c8c6cc-h6q4s 3000:3000

Users can access the Grafana User Interface via the following URL: http://localhost:3000.


Grafana requires login credentials to access our dashboard. We will use the default username admin and get our password using the following command.

kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

We are logged in!


To begin visualizing metric data with Grafana, we need to first add our data sources.

Data sources are typically the outputs of our monitoring implementations. We can add a variety of data sources, including Prometheus.

To add a data source, at the left corner of your User Interface, click on settings, and then data sources. Here, we will select Prometheus as our data source.


Because Prometheus is in the same cluster as our Grafana service, the two can communicate using the cluster's internal DNS, so we can add Prometheus using its local DNS name.
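
With the Prometheus Operator defaults, our Prometheus instance is exposed inside the cluster through a service named prometheus-operated, so the data source URL would look something like this (assuming the monitoring namespace used throughout this tutorial):

http://prometheus-operated.monitoring.svc:9090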


Once we add our data source, it appears in our list of configured data sources.


With Grafana panels, we can build visualizations of our data sources from constructed queries. A group of related panels can be organized into a dashboard.

Additionally, we can create templates of these dashboards, which can be stored, shared, and reused. These templated dashboards can be shared as JSON, URLs, or Grafana dashboard IDs.

The Grafana community maintains a variety of dashboards which you can import easily using their IDs.

To import the community-supported Node Exporter dashboard, click Import on the dashboard page and input the dashboard ID (for example, 1860, the widely used Node Exporter Full dashboard).


Grafana loads the dashboard, and we can now select the data source we want to visualize.


Once we have imported our Dashboard, we can see our panels showing the visualizations of the different metric types exposed by Node Exporter.


Wrapping Up

By following this guide, we have gained an understanding of using Node Exporter to monitor the nodes of our Kubernetes cluster. We have gotten an overview of Prometheus Exporters and Collectors and how they help implement monitoring.

Finally, we collected the node metrics using Prometheus and visualized the data with Grafana.