This guide will show you how to set up backups of your persistent volumes to an S3-compatible backup destination. In this example I will be using MinIO, but you could just as easily set up an Amazon S3 bucket if you wished. Setting up an S3 bucket on Amazon is beyond the scope of this post, but there are plenty of guides out there if you wish to go down that route.
We will be using Civo for this guide, so if you want to follow along you will need a Civo account with access to the KUBE100 beta. KUBE100 is the world's first managed k3s solution; you can sign up to join the beta programme below and get $70 of credit each month for the duration of the beta. All you need to do is follow the link and have your application approved.
Setting up MinIO is pretty straightforward: you can follow this excellent guide from Alejandro @ Civo to get up and running. You will also find it in the Civo Kubernetes marketplace as a one-click installation!
Once you have MinIO set up, there are three important things you need to keep a record of:
- The URL to reach your MinIO server
- The aws_access_key_id
- The aws_secret_access_key
Make a note of these as they will be needed shortly. If you were following the guide linked above, they are among the first things the setup gives you.
You may already have Longhorn set up on your cluster. If not, and you are using Civo K3s, this is as easy as going to the marketplace and installing the app. After a minute or so you should see all the Longhorn pods up and running.
The first thing we need to do is store your MinIO connection information in a Kubernetes Secret. To do this we need to convert each value to Base64, replacing the values below with your URL, access key and secret key:
Your MinIO URL should look something like: http://minio.somedomain.com:9000
```
echo -n MINIO_URL | base64
echo -n aws_access_key_id | base64
echo -n aws_secret_access_key | base64
```
You will see something like the following:
```
TUlOSU9fVVJM
YXdzX2FjY2Vzc19rZXlfaWQ=
YXdzX3NlY3JldF9hY2Nlc3Nfa2V5
```
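If you want to sanity-check a value, you can decode it straight back and confirm it round-trips to the original string (shown here with the placeholder value from above; use `-D` instead of `-d` on older macOS):

```
# Encode the placeholder value, then decode it again to verify
echo -n MINIO_URL | base64
# TUlOSU9fVVJM
echo -n 'TUlOSU9fVVJM' | base64 -d
# MINIO_URL
```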
Once you have these values, we can generate the secret. Make sure you replace the data values with your own Base64-encoded ones: AWS_ACCESS_KEY_ID takes the encoded access key, AWS_SECRET_ACCESS_KEY the encoded secret key, and AWS_ENDPOINTS the encoded MinIO URL.

```
cat <<EOF > aws_secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ACCESS_KEY_ID: YXdzX2FjY2Vzc19rZXlfaWQ=
  AWS_SECRET_ACCESS_KEY: YXdzX3NlY3JldF9hY2Nlc3Nfa2V5
  AWS_ENDPOINTS: TUlOSU9fVVJM
EOF
```
We can now apply the manifest to create the secret:
```
kubectl apply -f aws_secret.yml
```
You can check this has been created by running:
```
kubectl get secrets -n longhorn-system
```
```
NAME                                   TYPE                                  DATA   AGE
longhorn-service-account-token-9spgn   kubernetes.io/service-account-token   3      30d
default-token-szgv7                    kubernetes.io/service-account-token   3      30d
aws-secret                             Opaque                                3      30d
```
Setting up the backup is pretty straightforward and intuitive, so I'm not going to go overboard with the instructions! Let's create a simple Persistent Volume Claim (PVC), which will in turn create the Persistent Volume (PV) in your cluster and the volume in Longhorn. If you already have a volume you're looking to back up, you can skip this bit.
```
cat <<EOF > volume.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
  labels:
    type: longhorn
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```
If you need to change any specifics, such as the storage requirements of the volume, the above snippet is where you'll need to do that.
Now apply the volume:
```
kubectl apply -f volume.yml
```
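If you want to put the claim to use straight away (which will also attach the volume, so it shows as healthy in Longhorn rather than detached), you can mount it in a pod. This is just a sketch; the pod name and image are placeholders:

```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
    - name: volume-test
      image: nginx:stable
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pv-claim
```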
Once set up, you will need to access the Longhorn UI to configure the backups. As there is no authentication built into the UI out of the box, I would recommend you don't expose it to the outside world, and instead use kubectl port-forward (change the local port from 8081 if needed):

```
kubectl port-forward svc/longhorn-frontend -n longhorn-system 8081:80
```
You can then use your local browser with the address http://localhost:8081.
All being well, you will be presented with the Longhorn dashboard, which will show the health of your volumes:
If you already have running containers using volumes, those should show as healthy. As I have only just created this volume on a new cluster, it's showing as "detached", as you can see here. We can easily attach it by selecting the volume, clicking "Attach" and choosing a host, such as the master.
Next we need to configure the backup destination. Navigate to Settings -> General:
Scroll down to the Backup section and fill in the details (changing these as required):
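As a rough guide, the two fields map onto what we created earlier; the bucket name and region below are assumptions, so substitute your own. Longhorn expects the backup target in `s3://<bucket>@<region>/` form even for MinIO (the actual endpoint is taken from the AWS_ENDPOINTS value in the secret), and the credential secret is the one we created in the longhorn-system namespace:

```
Backup Target:                    s3://backups@us-east-1/
Backup Target Credential Secret:  aws-secret
```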
Make sure you save the configuration. Next we can test a backup: select Volumes from the menu, then click on the volume you want to back up:
You can click "Create Backup" and add any labels if you wish. If this is a new volume the backup will complete very quickly; you can check progress by hovering over the snapshot:
Once this shows 100%, it should be visible from the backup tab:
You can also double check your MinIO bucket:
Restoring a volume from backup is also pretty straightforward. From the Backup menu, select the volume you want to restore, then on the next screen select which backup to restore:
You can then complete the details as required:
You can then see the restored volume in the Volumes screen:
Once the volume is available you can attach it to a node and use it as you wish.
MinIO and Longhorn play really nicely together to manage backing up and restoring data on Kubernetes clusters. As MinIO is fully S3-compatible, the same basic principles apply regardless of the storage solution or provider you're using, and you can spin up additional buckets as needed.