One of the major cost drivers in Kubernetes environments is persistent storage. While persistent volumes make it easy to retain data across pod restarts, the underlying block storage can become expensive at scale.

Object storage offers a cheaper alternative for certain workloads. In this tutorial, we’ll explore how to mount Civo Object Storage inside Kubernetes as a persistent volume using the CSI-S3 driver, allowing applications to access object storage through a familiar filesystem interface.

Why back your PVs with S3?

The obvious question is: why would you do this? Here are a few reasons:

  • Cost: Object storage is significantly cheaper than block storage per GB. If your workload stores large amounts of data that do not require high-frequency random reads and writes, this can meaningfully reduce your infrastructure bill.
  • Durability: Civo object storage is designed for high durability. Your data is replicated across multiple failure domains without any additional configuration on your part.
  • Scalability: Unlike block volumes, object storage grows with your data without needing to resize or re-provision disks manually.
  • Portability: Because each PVC maps to a prefix inside an existing bucket, the data is accessible outside of Kubernetes as well, useful for backups, migrations, or cross-cluster access.

It’s important to understand that this approach does not convert object storage into true block storage. Instead, the CSI driver translates filesystem operations into S3 API calls using a FUSE-based mount. This makes it suitable for certain workloads but introduces higher latency and weaker POSIX guarantees than traditional disks.

When not to do this

S3-backed storage is not a universal replacement for block volumes. There are workloads where this approach will actively work against you:

  • Databases: PostgreSQL, MySQL, and similar engines rely on fast, low-latency random I/O. Object storage adds significant latency to every read and write, which will degrade performance and can cause instability.
  • High-IOPS workloads: Anything that hammers the disk, such as message queues, write-heavy caches, and real-time logging pipelines, is a poor fit. The overhead of translating filesystem operations into S3 API calls compounds under load.
  • Environments without FUSE support: The CSI-S3 driver mounts volumes using FUSE, a kernel feature that allows filesystems to run in user space. Some managed Kubernetes distributions restrict privileged containers or access to /dev/fuse, which can prevent the driver from working correctly.
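If you are unsure whether your nodes expose FUSE, a quick check from a node shell (or a privileged pod with the host's /dev mounted) is sketched below. The device path is standard Linux; how you get a node shell depends on your distribution.

```shell
# Check whether the FUSE character device exists on this host.
# /dev/fuse is a character device (major 10, minor 229) on standard Linux.
if [ -c /dev/fuse ]; then
  echo "/dev/fuse present"
else
  echo "/dev/fuse missing"
fi
```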

Prerequisites

To follow along, you will need the following tools installed locally:

  • Helm, to install the CSI-S3 driver
  • Civo CLI, to retrieve object store credentials and endpoint details from the Civo API
  • jq, to parse Civo CLI output
  • kubectl, to verify the installation and manage cluster resources
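Before starting, you can sanity-check that everything is on your PATH with a small convenience loop (not part of the tutorial proper):

```shell
# Report any missing prerequisite tools up front instead of failing mid-tutorial
missing=""
for tool in helm civo jq kubectl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All prerequisites installed"
fi
```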

Step-by-step process

Create the Object Store and credentials

Create a named credential pair that will be used to access the object store:

civo objectstore credentials create k8s

This creates an access key and secret key named k8s. The name is arbitrary; feel free to choose something that reflects your intended use.

Create the object store

Now, create the object store and assign the credentials you just created as the owner:

civo objectstore create prod-datastore --owner-access-key=k8s -o json

The --owner-access-key flag binds the k8s credentials to this bucket at creation time.

Note: Civo scopes credentials to a specific object store. These credentials will only have access to this bucket.

Export credentials and endpoint

Export the bucket name:

export BUCKET_NAME=$(civo objectstore show prod-datastore -o json | jq -r '.[0].name')

Export the access key

export ACCESS_KEY=$(civo objectstore show prod-datastore -o json | jq -r '.[0].accesskey')

Export the secret key

The Civo CLI secret key command does not support JSON output, so you must run it manually:

civo objectstore credential secret --access-key=$ACCESS_KEY

Copy the printed secret key, then export it:

export SECRET_KEY=<paste-secret-key-here>

Export the endpoint

export ENDPOINT=https://$(civo objectstore show prod-datastore -o json | jq -r '.[0].objectstore_endpoint')
⚠️ The endpoint must be the base URL only; do not append the bucket name or any path. The CSI driver constructs bucket paths itself.
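To see what those jq filters are doing, here is the same extraction run against a mocked-up sample of the show output. The field names match the commands above; the values are invented.

```shell
# Invented sample resembling `civo objectstore show ... -o json` output
sample='[{"name":"prod-datastore","accesskey":"AKEXAMPLE123","objectstore_endpoint":"objectstore.lon1.civo.com"}]'

# jq -r prints raw strings, so no quote-stripping is needed
name=$(echo "$sample" | jq -r '.[0].name')
endpoint="https://$(echo "$sample" | jq -r '.[0].objectstore_endpoint')"

echo "$name"      # prod-datastore
echo "$endpoint"  # https://objectstore.lon1.civo.com
```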

Verify all variables are set

echo "Bucket:     $BUCKET_NAME"
echo "Access Key: $ACCESS_KEY"
echo "Endpoint:   $ENDPOINT"
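To catch the base-URL mistake mechanically, a small POSIX pattern check works. This is just a convenience guard, not part of the driver setup:

```shell
# Guard against accidentally appending a path to the endpoint.
# A correct value looks like https://objectstore.<region>.civo.com
case "$ENDPOINT" in
  https://*/*) echo "ERROR: endpoint contains a path: $ENDPOINT" >&2 ;;
  https://*)   echo "Endpoint format looks OK" ;;
  *)           echo "ERROR: endpoint must start with https://" >&2 ;;
esac
```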

Add the CSI-S3 Helm repository

helm repo add yandex-s3 https://yandex-cloud.github.io/k8s-csi-s3/charts && helm repo update

Install the CSI-S3 Helm chart

helm install csi-s3 yandex-s3/csi-s3 \
  --set secret.accessKey=$ACCESS_KEY \
  --set secret.secretKey=$SECRET_KEY \
  --set secret.endpoint=$ENDPOINT \
  --set secret.region=auto \
  --set storageClass.name=civo \
  --set storageClass.singleBucket=$BUCKET_NAME \
  --namespace=kube-system
⚠️ Civo object store credentials are scoped to a specific bucket. By default, the CSI driver tries to create a new bucket per PVC, which these scoped credentials cannot do. Setting singleBucket tells the driver to use the existing bucket and create a per-PVC directory prefix within it instead.
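After the install, you can inspect the result with kubectl get sc civo -o yaml. With the values above it should look roughly like the sketch below. This is an approximation based on the chart's defaults: geesefs is the chart's default mounter, and the exact secret wiring and parameter set may differ between chart versions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: civo                      # from storageClass.name
provisioner: ru.yandex.s3.csi     # the CSI-S3 driver's provisioner name
parameters:
  mounter: geesefs                # chart default mounter
  bucket: prod-datastore          # from storageClass.singleBucket
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
reclaimPolicy: Delete
```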

Troubleshooting: volumes fail to mount (missing /dev/fuse)

At the time of writing, PR #177, which fixes this issue, has not yet been merged into the upstream chart. If your pods get stuck with a mount error after the install above, follow the steps below.

On some nodes, FUSE kernel support is present, but the /dev/fuse character device is never created on the host. The chart's default hostPath volume for /dev/fuse then silently mounts an empty directory in its place, causing all three mounters to fail at runtime.

The fix is to patch the live DaemonSet with an init container that creates the device node before the main container starts:

kubectl patch daemonset csi-s3 -n kube-system --patch "$(cat <<'EOF'
spec:
  template:
    spec:
      initContainers:
        - name: setup-fuse-device
          image: alpine
          securityContext:
            privileged: true
          command:
            - sh
            - -c
            - |
              # If /dev/fuse exists but is not a character device, remove it
              if [ -e /dev/fuse ] && [ ! -c /dev/fuse ]; then
                rm -rf /dev/fuse
              fi
              # Only create the device node if it is missing, so the patch is idempotent
              if [ ! -c /dev/fuse ]; then
                mknod /dev/fuse c 10 229
              fi
              chmod 666 /dev/fuse
          volumeMounts:
            - name: dev
              mountPath: /dev
      volumes:
        - name: dev
          hostPath:
            path: /dev
EOF
)"
Note: This patch applies to the live DaemonSet only. If you run helm upgrade in the future, it will be overwritten. Reapply the patch after any upgrade until PR #177 is merged.

Verify the installation

kubectl get pods -n kube-system | grep csi-s3

You should see the CSI node driver and controller pods in Running status within a minute or two. The node driver pod (csi-s3-xxxxx) should show an Init phase briefly as the setup-fuse-device container runs, then transition to Running.

Test with a PVC and pod

Provision a test PVC and mount it to an nginx pod to confirm end-to-end that the storage class and mounter are working:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-test
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: civo
EOF
Note: The storage value in the PVC is not strictly enforced for S3-backed volumes. Object storage scales automatically, so this value mainly satisfies Kubernetes resource requirements rather than acting as a hard quota.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: csi-s3-test
EOF

Check that the PVC is bound and the pod reaches Running:

kubectl get pvc csi-s3-test -n default
kubectl get pod csi-s3-test-nginx -n default

Once the pod is running, create a test HTML file and copy it into the mounted volume:

cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>It Works</title>
</head>
<body>
  <h1>It works.</h1>
  <p>This file is being served from a PersistentVolume
  backed by Civo object storage via CSI-S3.</p>
</body>
</html>
EOF

Copy to mounted volume:

kubectl cp index.html csi-s3-test-nginx:/usr/share/nginx/html/index.html

Port-forward and confirm the page is served:

kubectl port-forward pod/csi-s3-test-nginx 8080:80 -n default

Then, in a browser, head to http://localhost:8080. You should see the test page you just created.


Then, go back to your Civo dashboard; under the object store, you should see the file you just uploaded.


Summary

Mounting object storage as a Kubernetes persistent volume can be a powerful way to reduce storage costs for suitable workloads. By using the CSI-S3 driver with Civo Object Storage, you can expose S3-compatible storage to pods through a familiar filesystem interface.

While this approach is not suitable for latency-sensitive workloads like databases, it works well for static assets, shared content, backups, and large infrequently accessed datasets.

Looking for more stuff you can do with object stores? Here are some ideas: