Managing a Kubernetes cluster's node pools

Overview

You can group a cluster's worker nodes into node pools. The nodes in each pool are all of the same size, so if you want a cluster to have nodes of different instance sizes, you must create a new pool for each size.

note

When creating nodes for GPU workloads, select the "GPU Optimized" tab when choosing the node size.

Worker Node Allocatable Resources

Civo Kubernetes uses an intelligent resource allocation system to determine how much CPU and memory are available for your workloads on each worker node. This system reserves resources for essential system processes while maximizing the resources available for your applications.

How Resource Allocation Works

When you create worker nodes, not all of the node's CPU and memory are available for your pods. The system reserves resources for:

  • System daemons: Essential Kubernetes components like kubelet, container runtime, and system processes
  • Pod eviction: Buffer space to handle pod evictions gracefully
  • Kernel and OS: Operating system overhead

Memory Reservation Algorithm

Civo Kubernetes uses different memory reservation strategies depending on the node's total RAM.

Small nodes (RAM ≤ 2 GiB)

For nodes with 2 GiB or less of RAM (Extra Small and Small Standard sizes), the system uses fixed reservation values optimized to leave enough allocatable memory for workloads:

| Component | Extra Small (1 GiB) | Small (2 GiB) |
|---|---|---|
| kube-reserved (memory) | 256 MiB | 512 MiB |
| system-reserved (memory) | 100 MiB | 100 MiB |
| eviction-hard (memory threshold) | 75 MiB | 100 MiB |
| Total reserved | 431 MiB (47%) | 712 MiB (35%) |
| Allocatable | ~487 MiB (53%) | ~1336 MiB (65%) |
note

These fixed values were introduced to prevent memory pressure on small nodes. With the standard progressive algorithm, an Extra Small node would have 71% of its memory reserved, leaving insufficient room for even basic system pods to schedule.
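The fixed reservations for small nodes can be sketched as a simple lookup. The figures come from the table above; the data structure and function names are illustrative, not Civo's actual implementation:

```python
# Fixed memory reservations (MiB) for small nodes, keyed by total RAM in GiB.
# Values are taken from the small-node table; the structure is illustrative.
SMALL_NODE_RESERVATIONS = {
    1: {"kube_reserved": 256, "system_reserved": 100, "eviction_hard": 75},
    2: {"kube_reserved": 512, "system_reserved": 100, "eviction_hard": 100},
}

def small_node_total_reserved_mib(ram_gib: int) -> int:
    """Sum the fixed reservations for a small (<= 2 GiB) node."""
    r = SMALL_NODE_RESERVATIONS[ram_gib]
    return r["kube_reserved"] + r["system_reserved"] + r["eviction_hard"]
```

For example, a 1 GiB Extra Small node reserves 256 + 100 + 75 = 431 MiB in total, matching the table.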

Standard nodes (RAM > 2 GiB)

For nodes with more than 2 GiB of RAM, the kube-reserved memory is calculated using a progressive, tiered approach that scales with the total memory of the node:

| Memory Range | Reservation Rate |
|---|---|
| First 4 GiB | 25% of memory in this range |
| Next 4 GiB (4-8 GiB total) | 20% of memory in this range |
| Next 8 GiB (8-16 GiB total) | 10% of memory in this range |
| Next 112 GiB (16-128 GiB total) | 6% of memory in this range |
| Above 128 GiB | 2% of memory in this range |

In addition to the calculated kube-reserved value, the following fixed reservations apply to standard nodes:

| Component | Value |
|---|---|
| kube-reserved (additional buffer) | +100 MiB added to the calculated value |
| system-reserved (memory) | 200 MiB |
| eviction-hard (memory threshold) | 100 MiB |
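The tiered calculation above can be sketched in a few lines. The tier boundaries, rates, and fixed additions come from the tables; the function names are illustrative, not Civo's actual code:

```python
# Tier sizes (MiB) and reservation rates for the progressive
# kube-reserved calculation on standard (> 2 GiB) nodes.
TIERS = [
    (4 * 1024, 0.25),      # first 4 GiB at 25%
    (4 * 1024, 0.20),      # next 4 GiB at 20%
    (8 * 1024, 0.10),      # next 8 GiB at 10%
    (112 * 1024, 0.06),    # next 112 GiB at 6%
    (float("inf"), 0.02),  # above 128 GiB at 2%
]

def kube_reserved_mib(ram_mib: int) -> int:
    """Progressive kube-reserved memory, plus the fixed 100 MiB buffer."""
    reserved, remaining = 0.0, ram_mib
    for tier_size, rate in TIERS:
        step = min(remaining, tier_size)
        reserved += step * rate
        remaining -= step
        if remaining <= 0:
            break
    return int(reserved + 100)  # +100 MiB additional buffer

def allocatable_mib(ram_mib: int) -> int:
    """Allocatable memory after kube-reserved, system-reserved (200 MiB),
    and the eviction-hard threshold (100 MiB)."""
    return ram_mib - kube_reserved_mib(ram_mib) - 200 - 100
```

For a 4 GiB node this yields 25% of 4096 MiB + 100 MiB = 1124 MiB of kube-reserved memory and 2672 MiB allocatable, matching the table in the next section.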

Allocatable Resources by Node Size

The following table shows the actual reserved and allocatable memory for each Standard (g4s.kube.*) node size:

| Size | RAM | kube-reserved | system-reserved | eviction-hard | Total Reserved | Allocatable | Reserved % |
|---|---|---|---|---|---|---|---|
| Extra Small | 1 GiB | 256 MiB | 100 MiB | 75 MiB | 431 MiB | ~487 MiB | 47% |
| Small | 2 GiB | 512 MiB | 100 MiB | 100 MiB | 712 MiB | ~1336 MiB | 35% |
| Medium | 4 GiB | 1124 MiB | 200 MiB | 100 MiB | 1424 MiB | ~2672 MiB | 35% |
| Large | 8 GiB | 1943 MiB | 200 MiB | 100 MiB | 2243 MiB | ~5949 MiB | 27% |
tip

Performance (g4p.kube.*), CPU Optimized (g4c.kube.*), and RAM Optimized (g4m.kube.*) node types use the same reservation algorithm. Larger nodes have higher allocation efficiency — for example, a 128 GiB node reserves only about 8% of its memory.

CPU Reservation Algorithm

CPU reservations follow a progressive model across all node sizes:

| CPU Cores | Reservation Rate |
|---|---|
| First core | 6% of the core |
| Second core | 1% of the core |
| Next 2 cores (cores 3-4) | 0.5% per core |
| Above 4 cores | 0.25% per core |
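The per-core tiers above translate directly into millicores (1 core = 1000m). The rates come from the table; the function name is illustrative, not Civo's actual implementation:

```python
def cpu_reserved_millicores(cores: int) -> float:
    """Total CPU reserved across the progressive per-core tiers."""
    reserved = 0.0
    for core in range(1, cores + 1):
        if core == 1:
            reserved += 0.06 * 1000    # first core: 6%
        elif core == 2:
            reserved += 0.01 * 1000    # second core: 1%
        elif core <= 4:
            reserved += 0.005 * 1000   # cores 3-4: 0.5% each
        else:
            reserved += 0.0025 * 1000  # above 4 cores: 0.25% each
    return reserved
```

For example, a 4-core node reserves 60 + 10 + 5 + 5 = 80 millicores, and an 8-core node only 90 millicores, so CPU overhead shrinks proportionally as nodes grow.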

Benefits of This Approach

  • Predictable Performance: Ensures system stability by reserving adequate resources for essential processes
  • Optimized for All Sizes: Small nodes use fixed reservations tuned to avoid memory pressure, while larger nodes benefit from progressive scaling for higher allocation efficiency
  • Industry Best Practices: Uses proven resource allocation methodologies for optimal cluster performance
  • Workload Protection: Reserved buffer prevents resource starvation of critical system components

Adding a new node pool

You can add a new node pool to a running cluster by clicking on "Create new pool" on your cluster's information page.

Cluster node pool information

You will be taken to the pool creation page:

Adding a new node pool options

In this section, you can select the number of nodes to create in this new pool, and the specifications/size of the nodes to create. You can choose from the same sizes as when creating a cluster.

The cost per node of each type is displayed.

When you click "Create new pool" you will be taken back to the cluster information page and the new pool will be displayed as creating:

New node pool is being created

You can then schedule particular workloads onto a given pool's nodes (for example, using node selectors), optimizing resource use in your cluster.

Deleting a node pool

You can delete a node pool entirely by clicking on the "Delete" button next to the node pool information.

Node pools information

A popup will appear asking you to confirm that you want to delete the node pool by entering its name:

Delete node pool popup

The pool is deleted as soon as you click "Delete", and this action is irreversible. All workloads running in that pool are terminated and rescheduled onto your cluster's remaining nodes.

Recycling nodes

If you need to rebuild a node for any reason, you can use the recycle method to replace a single node.

note

Recycling a node deletes it entirely, builds a replacement node to match it, and attaches the replacement to your cluster. The recycle command does not drain the node first; it simply deletes it before creating and attaching the new one. It is intended for scenarios where a node develops an issue and must be replaced.

Recycling a node on the dashboard is done on the Kubernetes cluster management page in the Node Pools section. Each node will have its own button to recycle, highlighted in the image below:

Recycle node button

Once you click the recycle button, you will be prompted to confirm your choice:

Recycle node confirmation

The confirmation is important, as the node is immediately torn down and replaced when recycled.