After my blog post about Kubernetes for geeks: Creating your own Kubernetes Operator, it is time for a lighter and more basic topic: storage.
My beginner series wouldn’t be complete without it, so here goes!
Kubernetes is built for many different environments: cloud, self-hosted, distributed and not-so-distributed. Across all of them, you need a way to provision storage. Sometimes you'll want something local to a single node; sometimes it should be remote and mountable from several nodes.
In Kubernetes, you classify your types of storage using storage classes. A storage class is typically either something built-in, or provided by an operator that extends the available storage options.
On my Kubernetes node, I am running OpenZFS, which makes it extremely easy to create new volumes and virtual filesystems, so I was looking for something that integrates nicely with that.
The choice was pretty easy: OpenEBS with its LocalPV ZFS driver. Since I run on a single node, distributed storage wasn't a requirement, so something local was nice and easy.
Installation
I started off by creating a parent volume – actually two, since I have two ZFS pools: one on spinning disks and one on an SSD.
sudo zfs create nasdisk/k3s
sudo zfs create znvm/k3s
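ZFS properties set on these parent datasets are inherited by their children, which makes this a convenient place to set defaults for every volume OpenEBS will later create. A small sketch – the choice of `lz4` compression here is my assumption, adjust to taste:

```shell
# Enable compression on the parent datasets; per-PVC child
# datasets created under them will inherit the property.
sudo zfs set compression=lz4 nasdisk/k3s
sudo zfs set compression=lz4 znvm/k3s

# Verify the property and where it is inherited from
zfs get -r compression nasdisk/k3s
```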
All my Kubernetes storage will be placed as subvolumes of one of these two volumes.
Then I installed OpenEBS:
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs --create-namespace
…and OpenEBS ZFS LocalPV:
helm repo add openebs-zfs https://openebs.github.io/zfs-localpv
helm repo update
helm install openebs-zfs openebs-zfs/zfs-localpv --namespace openebs --create-namespace
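Before moving on, it's worth a quick sanity check that everything came up – the commands below assume the `openebs` namespace used in the install above:

```shell
# The OpenEBS components, including the zfs-localpv
# controller and per-node agent, run in this namespace
kubectl get pods -n openebs

# The ZFS CSI driver should be registered with the cluster
kubectl get csidrivers
```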
Configuration
Since I have two pools, I want to add two storage classes. This is pretty easy; I just need to apply these two StorageClass resources:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-storage-nasdisk
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "nasdisk/k3s" # Change this to match your ZFS pool
  fstype: "zfs"
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-storage-znvm
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "znvm/k3s" # Change this to match your ZFS pool
  fstype: "zfs"
volumeBindingMode: WaitForFirstConsumer
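The driver also accepts additional ZFS properties as StorageClass parameters, so different classes can carry different tuning. A sketch – the name and values below are illustrative assumptions, not what I actually run:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-storage-tuned   # hypothetical example class
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "nasdisk/k3s"
  fstype: "zfs"
  recordsize: "128k"   # ZFS record size for each volume
  compression: "on"    # enable compression per volume
  dedup: "off"         # deduplication is rarely worth the RAM
volumeBindingMode: WaitForFirstConsumer
```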
Usage
We'll again use this blog as an example. For the document root of the WordPress installation, we'll need a persistent volume.
In Kubernetes, defining storage is a multi-step process. First, you define a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-html
  namespace: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfs-storage-nasdisk
  resources:
    requests:
      storage: 100Gi
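After applying the claim you can watch its status. With WaitForFirstConsumer, the claim will sit in Pending until a pod that uses it is scheduled – that's expected, not an error. (The filename here is an assumption.)

```shell
kubectl apply -f wordpress-html-pvc.yaml

# Stays in Pending until the first consuming pod is scheduled
kubectl get pvc -n wordpress wordpress-html
```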
Then, in a Deployment, you reference the claim like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  labels:
    io.kompose.service: wordpress-vegard
  name: wordpress-app
  namespace: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: wordpress-vegard
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.34.0 (cbf2835db)
      labels:
        io.kompose.service: wordpress-vegard
        app: wordpress
    spec:
      containers:
        - env:
            <environment variables>
          image: wordpress:latest
          name: wordpress-app
          volumeMounts:
            - mountPath: /var/www/html
              name: wordpress-html
      restartPolicy: Always
      volumes:
        - name: wordpress-html
          persistentVolumeClaim:
            claimName: wordpress-html
This instantiates the volume. Because we set volumeBindingMode: WaitForFirstConsumer in the storage class, the volume isn't actually created until a pod that uses the claim is scheduled. The volumeMounts entry then mounts it into the container.
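To confirm the mount is actually in place inside the running container, you can exec into the deployment and inspect the mount point:

```shell
# df should show the ZFS dataset mounted at the document root
kubectl exec -n wordpress deploy/wordpress-app -- df -h /var/www/html

# mount shows the filesystem type (zfs) for the path
kubectl exec -n wordpress deploy/wordpress-app -- mount | grep /var/www/html
```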
So, how does this look under the hood? We'll see a PVC:
$ kubectl get -n wordpress pvc wordpress-html -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"io.kompose.service":"html-vegard"},"name":"wordpress-html","namespace":"wordpress"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"100Gi"}},"storageClassName":"zfs-storage-nas"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: zfs.csi.openebs.io
    volume.kubernetes.io/selected-node: hassio
    volume.kubernetes.io/storage-provisioner: zfs.csi.openebs.io
  creationTimestamp: "2025-02-19T01:54:09Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    io.kompose.service: html-vegard
  name: wordpress-html
  namespace: wordpress
  resourceVersion: "33518"
  uid: 232f16b3-3d12-40f0-ac9a-09241fffd551
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: zfs-storage-nasdisk
  volumeMode: Filesystem
  volumeName: pvc-232f16b3-3d12-40f0-ac9a-09241fffd551
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi
  phase: Bound
$ kubectl get pv pvc-232f16b3-3d12-40f0-ac9a-09241fffd551 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: zfs.csi.openebs.io
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  creationTimestamp: "2025-02-19T05:50:18Z"
  finalizers:
    - kubernetes.io/pv-protection
  name: pvc-232f16b3-3d12-40f0-ac9a-09241fffd551
  resourceVersion: "33516"
  uid: b3977315-ac02-4c24-a3f5-f58dfc5bece0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: wordpress-html
    namespace: wordpress
    resourceVersion: "33506"
    uid: 232f16b3-3d12-40f0-ac9a-09241fffd551
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/cas-type: localpv-zfs
      openebs.io/poolname: nasdisk/k3s
      storage.kubernetes.io/csiProvisionerIdentity: 1739928334529-4455-zfs.csi.openebs.io
    volumeHandle: pvc-232f16b3-3d12-40f0-ac9a-09241fffd551
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodeid
              operator: In
              values:
                - hassio
  persistentVolumeReclaimPolicy: Retain
  storageClassName: zfs-storage-nasdisk
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2025-02-19T05:50:18Z"
  phase: Bound
$ zfs list | grep pvc-232f16b3-3d12-40f0-ac9a-09241fffd551
nasdisk/k3s/pvc-232f16b3-3d12-40f0-ac9a-09241fffd551 492M 99.5G 303M legacy
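The requested size is enforced through a ZFS quota on the per-PVC dataset. I believe the driver sets a quota matching the claim; you can inspect the dataset's properties directly to confirm:

```shell
# The dataset name matches the PV name under the parent dataset
zfs get quota,used,compression nasdisk/k3s/pvc-232f16b3-3d12-40f0-ac9a-09241fffd551
```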
As you can see, a ZFS volume has been created, with the specified 100Gi quota.
And that's really all there is to it. Since I have specified "Retain", nothing is actually deleted if I delete the Kubernetes resources. That's a good safety measure, as it's just so easy to delete resources thinking you'll simply recreate them…