Backup and recovery (abbreviated as BR in this article) is one of the most common strategies used by data-driven organizations to prevent loss of data during events such as the following:

  • Data corruption
  • Power outages
  • Fire breakout
  • Split partition

Events like these have caused many real-world organizations to lose access to business data. Fortunately, there are many tools available for managing backup and recovery of data, both open-source and commercial. Each claims to be the most suitable data recovery tool for general usage, yet not all BR tools are designed to be cloud-native.

The following are features of BR tools/solutions well suited for a cloud-native environment like Kubernetes (otherwise known as K8s).

  • Portability: Apart from the native Kubernetes engine, you can run it on other Kubernetes engines and distributions, such as K3s on Civo, Red Hat OpenShift, Rancher Kubernetes Engine, Google Kubernetes Engine, and so on.

  • Remote backup: The ability to take a snapshot or backup of objects in a Kubernetes cluster and replicate them in another remote Kubernetes cluster.

  • Offsite backup: Similar to the remote backup feature, but involves replicating backup objects to an off-site location (often making use of an object store service such as S3, MinIO, or Ceph).

  • Microservices architecture: Some BR tools support multiple object replication in a cloud-native environment. For instance, Longhorn is able to replicate volumes across several Kubernetes clusters. So, in case a particular cluster goes down, there are volumes with exact copies of data in the other clusters that remain accessible.

  • Scheduled backups: You never know when there is going to be a power outage, and relying on manual triggering of backups is prone to disruption. Hence you need to schedule when it's appropriate to make a snapshot of an object in a K8s cluster. Most BR tools, such as Velero, support scheduled backups.

  • Restore and expire backups: A snapshot in a remote storage service needs to be restored when a cluster goes down, and snapshots need to be removed to save space when no longer current. Most BR tools offer features to restore and expire backup objects.

In this tutorial, we take a look at how we can efficiently implement a backup recovery strategy to prevent unfortunate events that can lead to data loss in the future.

Deploy Civo Kubernetes Cluster

For this project, we deploy K8s objects on worker nodes in a Civo Kubernetes cluster.

It's best if you have the Civo CLI and kubectl installed: we create the Civo K8s cluster with the Civo CLI and manage the cluster with kubectl.

Create a Cluster with Civo CLI

First, you need to generate an API key by signing up for a Civo account if you don't have one. Then execute the command below to attach Civo CLI to your API key, which can be found in your Civo account profile:

civo apikey save

The preceding command displays the following prompts sequentially. Choose a name for your account/API key, then enter the specific API key from your profile:

Enter a nice name for this account/API Key: 
Enter the API key: 
Saved the API Key:

Use the following command to check that the Civo CLI is using the API key you have given it:

civo apikey show  

It should show the chosen name and the key that matches the one in your account:

| Name      | Key                                                |
| demo_test | ik6bG3h2BD9aGEDMMXLiG5q85Lc2VcmeD1pWafEtNTIkdPRA3C |

Attaching your API key to Civo CLI allows you to connect and manage your Civo K8s cluster via the terminal.

Now you can create a Civo K8s cluster via the following command:

civo kubernetes create postgres-velero --size "g3.k3s.medium" --nodes 3 --wait --save --merge --region NYC1

The preceding command creates a cluster named postgres-velero. The --size flag specifies the size of the nodes in the cluster, and the --nodes flag defines the number of nodes to create – in this case, 3.

For every K8s cluster you connect to, there is an entry in the kubeconfig file (in the ~/.kube directory) which describes how to reach that cluster.

So, when you include the --save flag and the --merge flag, your cluster settings are automatically merged into that config file. This means kubectl will be able to interact with your cluster.

Also, the --wait flag will force the CLI to spin and wait for the cluster to be active whilst the --region states the location of the postgres-velero cluster. The postgres-velero cluster is specified to be created in the NYC1 Civo region.

The command civo region ls lists the available regions and marks the current default:

| Code | Name        | Country        | Current |
| NYC1 | New York 1  | United States  |         |
| FRA1 | Frankfurt 1 | Germany        |         |
| LON1 | London 1    | United Kingdom | <=====  |
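Once the cluster is active, you can sanity-check that kubectl is pointed at it. This is a quick verification step, assuming the --save and --merge flags worked as described above:

```shell
# Confirm kubectl is using the new cluster's context
kubectl config current-context

# List the nodes; each should report STATUS "Ready"
kubectl get nodes
```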

Download and Install minIO Client

Use the following steps to download and install the MinIO client on a Civo virtual server.

The wget command downloads the MinIO client tool from the official download URL (this is the Linux amd64 build; check the MinIO documentation for other platforms):

wget https://dl.min.io/client/mc/release/linux-amd64/mc

The chmod command makes the downloaded mc binary executable via the +x argument:

chmod +x mc

The --help flag shows detailed information about MinIO client commands:

./mc --help   

Deploy minIO Storage Server

In production, it is recommended to deploy MinIO in a distributed mode.

We need a server to handle our storage. For this, we will use a Civo compute instance.

Create a virtual machine instance with the Civo CLI tool

Execute the following command to create a virtual machine. The --hostname parameter defines a specific hostname for the virtual machine (minio-demo.test here). Other parameters such as --size determine the size of the virtual machine, whilst the --diskimage parameter defines the specific operating system image ID.

civo instance create --hostname=minio-demo.test --size g3.xsmall  --diskimage=7dd2e2a2-f56a-464e-98fe-663f5d29f6ed --initialuser=root

You will be able to run civo instance list to check that the instance has booted up and is active.

You can get the login password from the Civo dashboard page for the instance, which will allow you to log in with SSH:

ssh root@{the instance IP}

Once you have logged in to the instance with SSH, execute the following commands to deploy the MinIO server on the Linux virtual server. The wget utility retrieves the MinIO server binary from the official download URL, chmod +x makes it executable, and sudo mv moves the minio binary to the path /usr/local/bin/ so it can be run as a command:

$ wget https://dl.min.io/server/minio/release/linux-amd64/minio
$ chmod +x minio
$ sudo mv minio /usr/local/bin/

Afterward, create a directory to host the MinIO data. Then launch the MinIO server, attaching its console to port 9001 via the --console-address flag.

If you encounter an error message such as port 9001 not available, use the command sudo lsof -i :9001 to check which process is using the port.

$ mkdir ~/minio
$ minio server ~/minio --console-address :9001
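With the server running in the foreground, you can verify it from another shell on the instance using MinIO's standard health endpoint:

```shell
# Expect an HTTP 200 response if the MinIO server is healthy
curl -I http://localhost:9000/minio/health/live
```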

If you prefer to run the MinIO storage server as a service, the MinIO documentation provides a systemd script and instructions on how to set it up.

Alternatively, you can deploy the MinIO storage server via its Helm chart.
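With the storage server up, you can point the mc client you downloaded earlier at it and pre-create the bucket that Velero will use later. The alias name myminio is arbitrary, and minioadmin/minioadmin are MinIO's default credentials; substitute your instance's IP and your own credentials:

```shell
# Substitute the public IP of your MinIO instance
MINIO_IP="<your-instance-ip>"

# Register the server under the alias "myminio"
./mc alias set myminio "http://${MINIO_IP}:9000" minioadmin minioadmin

# Pre-create the bucket Velero will store backups in
./mc mb myminio/postgres-velero

# Confirm the bucket exists
./mc ls myminio
```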

Download and Install Velero Client Tool

Download the tar file for the Velero client tool at the Velero official site.

Next, extract the downloaded tar file using the following command, making sure you swap the filename for the version you downloaded:

tar -xvf velero-v1.8.1-linux-amd64.tar.gz

Afterward, move the velero binary from inside the extracted directory (not the directory itself) to the path /usr/local/bin:

sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin

Deploy Velero Server

There are two options for deploying the Velero server in a Kubernetes environment: the velero install command, or a Helm chart (mentioned at the end of this section). We use the velero install command, which installs the Velero server within the velero namespace.

We need to create a credential file minio-cred to store login credentials for the minio storage server as shown below:

touch minio-cred

Edit the minio-cred file with the command sudo vim minio-cred. Then copy and paste the contents below into the minio-cred file.

The key aws_access_key_id refers to the username in charge of the MinIO server, whilst aws_secret_access_key refers to the user's password. Note the [default] section header, which Velero's AWS plugin expects in the credentials file:

[default]
aws_access_key_id = minioadmin
aws_secret_access_key = minioadmin

Execute the following command to install the Velero server via the velero CLI tool within the velero namespace.

velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket postgres-velero \
--secret-file ./minio-cred \
--use-volume-snapshots=true \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-instance-IP>:9000

The preceding velero install command does the following:

  • Specifies an object storage provider to store backup objects from the postgres-velero cluster via the velero-plugin-for-aws:v1.0.0 plugin
  • Specifies bucket to contain backup objects. The bucket postgres-velero serves as the main repository for backup objects.
  • Specifies the file containing credentials for minio server. Attach the minio-cred file to the --secret-file flag. This makes it possible for Velero to transfer backup objects to the MinIO storage server.
  • Decides whether to take snapshots of volumes. Either true or false value is accepted.
  • Specifies configuration settings for where the MinIO storage server is located via the --backup-location-config flag. The region key sets the bucket's region (minio here), whilst the s3ForcePathStyle key means the bucket postgres-velero is addressed using path style (see the AWS S3 documentation for detailed information on addressing styles). The s3Url key refers to the URL address of the MinIO storage server.

You can also install the Velero server via this Helm chart from VMware Tanzu.

NB: When the preceding command is executed to install the Velero server, the pods take a few minutes to start. Checking the Velero pod's logs too early produces output like the following:

Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init)
Error from server (BadRequest): container "velero" in pod "velero-5c8fc4f8c7-4hljc" is waiting to start: PodInitializing

After a few minutes, run the command below just to be sure that the Velero server has access to the MinIO storage server:

kubectl logs deployment/velero -n velero

If the Velero server can't locate or access the MinIO storage server, Velero logs display an error message like the one below:

time="2022-06-01T12:05:04Z" level=error msg="Current backup storage locations available/unavailable/unknown: 0/1/0)" controller=backup-storage-location 
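Whether or not you see such an error, a quick way to check the storage location's health is Velero's own status command; the PHASE column should read Available once Velero can reach MinIO:

```shell
# Check the status of the configured backup storage location
velero backup-location get
```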

Selecting a Default Volume for the Postgres-Velero Cluster

Other Kubernetes platforms may not come with a pre-defined storage class, instead requiring users to deploy their preferred one. Civo, however, allows users to specify which storage class serves as the default volume for a specific cluster.

You can disable the pre-defined volume as the default via the kubectl patch command, setting the is-default-class annotation to false as shown below:

kubectl patch storageclass civo-volume -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Also, you can use the command below to enable a specific volume as the default storage class by setting the same annotation to true:

kubectl patch storageclass <storage-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
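You can confirm which storage class is currently the default at any time; the default one carries a "(default)" suffix next to its name:

```shell
# List storage classes; the default is marked "(default)"
kubectl get storageclass
```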

Postgresql Volume Management with Civo-volume

Here we take a look at how to create local persistent volumes for the Postgresql pod.

Create a PersistentVolumeClaim

Let's create a PersistentVolumeClaim to request storage from the civo-volume storage class discussed previously.

Use the command vim postgres-hostpath-pvc.yaml to create a PVC file. The PVC below references the underlying storage class, then defines the access mode it requires as well as the amount of storage needed. Save the following in the file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: civo-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G

Use the command :wq to save and exit the vim editor.

Now apply the persistentvolumeclaim defined in the postgres-hostpath-pvc.yaml file to your cluster as shown below:

kubectl apply -f postgres-hostpath-pvc.yaml

Before we deploy and attach the postgresql pod to the persistentvolumeclaim defined in the postgres-hostpath-pvc.yaml file, let's check the PVC's status via the command below. It is expected to be in Pending status, since the volume only binds once a pod consumes the claim:

kubectl get pvc 

We should see something like this:

NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-hostpath-pvc   Pending                                      civo-volume    15s

Postgresql Pod Deployment

Now let's take a look at how to deploy the postgresql pod on one of the nodes in the postgres-velero cluster.

Create Deployment File for Postgresql

Create a deployment file via the vim utility:

sudo vim postgresql-deployment.yaml   

Then copy and paste the following content into the postgresql-deployment.yaml file.

Security context is not specified in the postgresql-deployment.yaml file. However, it is recommended to include a security context for pods running in production.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgres
          image: postgres:13.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: test_db
            - name: POSTGRES_USER
              value: mikey
            - name: POSTGRES_PASSWORD
              value: testmikey
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: local-storage
      volumes:
        - name: local-storage
          persistentVolumeClaim:
            claimName: local-hostpath-pvc

Then save and exit the vim editor via the :wq command.

The postgresql-deployment.yaml file deploys the postgresql pod with one replica. In addition, environment variables such as POSTGRES_DB and POSTGRES_USER define the database name and create a role/user once the pod is initialized. The postgresql pod mounts a volume at /var/lib/postgresql/data and selects its storage class via the persistentVolumeClaim key.

Execute the command below to deploy the postgresql pod:

kubectl apply -f postgresql-deployment.yaml

You can execute the command kubectl get pods just to be sure of the status of the postgres pod:

NAME                                   READY   STATUS    RESTARTS   AGE
postgres-deployment-7549fdbf7b-8swv6   1/1     Running   0          101s
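To confirm that the database is writing to the mounted volume, you can create some sample data inside the pod. The pod name below comes from the output above (substitute your own), and the table and values are arbitrary examples:

```shell
# Open a psql session inside the pod and write some sample data
kubectl exec -it postgres-deployment-7549fdbf7b-8swv6 -- \
  psql -U mikey -d test_db \
  -c "CREATE TABLE demo (id serial PRIMARY KEY, note text);" \
  -c "INSERT INTO demo (note) VALUES ('backup me');"
```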

Schedule and Automate Postgres Volume Backup

Finally, let's look at how to schedule and automate Postgres backup with Velero.

We assume you have already downloaded and installed the Velero server and integrated it with restic.

Schedule Remote Backup with Velero

You can automate postgres-velero cluster backups via Velero's schedule feature. This makes it possible for Velero to back up objects in the postgres-velero cluster on a regular cadence, without any manual effort.

Execute the command below to schedule a backup to the remote storage server on a daily basis. The --schedule flag defines the backup interval, and the --include-namespaces flag restricts the backup to the namespace holding the Postgres deployment (default in this walkthrough, since the deployment was applied without a namespace):

velero create schedule daily-backup --schedule="@every 24h" --include-namespaces default

The following output is displayed after the preceding command is executed:

Schedule "daily-backup" created successfully.
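Once the schedule exists, you can also trigger an immediate backup from it and, when disaster strikes, restore from a completed backup. The backup name in the last command is illustrative; velero backup get shows the real names:

```shell
# Trigger a one-off backup using the schedule's template
velero backup create --from-schedule daily-backup

# List backups and their status
velero backup get

# Restore from a specific backup (substitute a name from the list above)
velero restore create --from-backup daily-backup-20220601120000
```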


Automating backups is a good idea in that it prevents unexpected loss of data if the cluster goes down without warning. However, the location of the object storage server should also be considered. In this example, we deployed a virtual machine in the same region as the cluster. While this works for a demonstration, in production you may want to keep backups elsewhere.

Similar to PostgreSQL database servers running on bare metal, it's advisable to have a local backup as well as a remote backup. The importance of a local backup is to avoid high latency in case you need to restore data quickly.

On the other hand, remote backups cannot guarantee low latency, partly due to the long distance between the production cluster and the remote storage server. However, we can rely on it for rare emergency cases such as fire outbreaks and so on.