Laying The Homelab Foundations: Kubernetes and Storage Link to heading
One of the most valuable aspects of running a homelab is how quickly it teaches you what you don’t know. My initial plan was simple: spin up a k0s cluster and start deploying monitoring tools. Instead, I got a crash course in Kubernetes storage infrastructure when my first deployment failed due to missing storage capabilities. Here’s what I learned about laying proper foundations for a homelab Kubernetes cluster.
Enable NVMe/TCP Link to heading
Until I tried to set up OpenEBS I had never heard of NVMe/TCP, but my first installation attempt appeared to require it, so I enabled the kernel module and made sure it gets loaded after restarts. In hindsight, I'm fairly sure it's only needed for replicated storage with Mayastor, which I'm not using in this setup.
$ sudo modprobe nvme_tcp
$ echo "nvme_tcp" | sudo tee /etc/modules-load.d/nvme_tcp.conf
Create k0s Cluster Link to heading
Following the k0sctl install guide, getting a cluster up and running was pretty easy. For this setup, I configured the first node as both controller and worker so all available resources can be used for workloads. In production environments you might want dedicated controller nodes, but in a homelab setting where resources are limited, this dual-role configuration makes the most efficient use of the hardware. Here is my k0sctl.yaml:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
  user: admin
spec:
  hosts:
    - ssh:
        address: 192.168.100.1
        user: jjshanks
        port: 22
        keyPath: ~/.ssh/id_ed25519
      role: controller+worker # Primary node that runs both control plane and workloads
    - ssh:
        address: 192.168.100.2
        user: jjshanks
        port: 22
        keyPath: ~/.ssh/id_ed25519
      role: worker
    - ssh:
        address: 192.168.100.3
        user: jjshanks
        port: 22
        keyPath: ~/.ssh/id_ed25519
      role: worker
    - ssh:
        address: 192.168.100.4
        user: jjshanks
        port: 22
        keyPath: ~/.ssh/id_ed25519
      role: worker
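With the config written, bringing the cluster up and grabbing a kubeconfig is just a couple of k0sctl commands. Roughly, assuming the file above is saved as k0sctl.yaml in the working directory:
$ k0sctl apply --config k0sctl.yaml
$ k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/config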
Sidetrack: Resetting SSH Link to heading
Before diving into OpenEBS configuration, I encountered an interesting challenge while provisioning my cluster nodes. As an experiment with setting things up in Proxmox, I created the extra nodes as clones of the first one. This worked much better than restoring from a backup, which caused a lot of IP caching issues I couldn't resolve. The one issue with cloning was that the SSH host keys (and therefore the server fingerprint) were cloned too, making ssh unhappy. On the new hosts I was able to reset them with:
$ sudo rm /etc/ssh/ssh_host_*
$ sudo dpkg-reconfigure openssh-server
$ sudo systemctl restart ssh
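If the client machine still refuses to connect because it cached the old fingerprint, clearing the stale known_hosts entry for the affected address should sort it out (shown here for one of my worker nodes):
$ ssh-keygen -R 192.168.100.2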
Install OpenEBS Link to heading
My first attempt at installing OpenEBS followed the default instructions, which quickly proved to be overkill. The installation created more than 20 pods, consuming a substantial portion of my cluster’s memory resources.
After reviewing the documentation more carefully, I identified several components that weren’t necessary for my homelab setup. The replicated storage engine (Mayastor) and ZFS support were two major features I could disable to reduce resource usage.
Configuration required some additional tweaks specific to k0s. First, I needed to set the correct kubelet directory path as k0s uses a non-standard location. Then came the challenge of setting up the default storage class - a process that required diving into OpenEBS’s helm chart structure to find the right configuration parameter.
$ helm repo add openebs https://openebs.github.io/openebs
$ helm repo update
$ helm install openebs --namespace openebs openebs/openebs \
--set engines.replicated.mayastor.enabled=false \
--set engines.local.zfs.enabled=false \
--set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet \
--set localpv-provisioner.hostpathClass.isDefaultClass=true \
--create-namespace
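Before validating the storage itself, a quick look at the Helm release confirms the chart deployed cleanly (an extra sanity check, not part of the install instructions):
$ helm status openebs -n openebs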
Validate OpenEBS Link to heading
With OpenEBS installed, we need to ensure everything is working as expected. Our validation process has three steps: first, confirming all required pods are running; second, verifying our storage class is properly registered and set as default; and finally, testing actual storage functionality with a simple write operation.
$ kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
openebs-localpv-provisioner-7b8f5b78d7-dslvh 1/1 Running 0 27m
openebs-lvm-localpv-controller-86b4d6dcff-67plt 5/5 Running 0 27m
openebs-lvm-localpv-node-42hxr 2/2 Running 0 27m
openebs-lvm-localpv-node-472ck 2/2 Running 0 27m
openebs-lvm-localpv-node-bgscg 2/2 Running 0 27m
$ kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath (default) openebs.io/local Delete WaitForFirstConsumer false 29m
And finally, just to make sure all the effort wasn't wasted, I combined the PVC and test pod manifests from the docs into a single verification file:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: local-hostpath-pvc
  containers:
    - name: hello-container
      image: busybox
      command:
        - sh
        - -c
        - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
      volumeMounts:
        - mountPath: /mnt/store
          name: local-storage
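To run the test I applied the combined manifest (I saved it as local-hostpath-test.yaml, but the filename is arbitrary) and waited for the pod to come up:
$ kubectl apply -f local-hostpath-test.yaml
$ kubectl wait --for=condition=Ready pod/hello-local-hostpath-pod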
$ kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
Sun Feb 9 19:43:49 UTC 2025 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
Sun Feb 9 19:48:50 UTC 2025 [hello-local-hostpath-pod] Hello from OpenEBS Local PV.
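Once the greeting file shows up, the test resources can be deleted so they don't hang around in the cluster:
$ kubectl delete pod hello-local-hostpath-pod
$ kubectl delete pvc local-hostpath-pvc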
Next Steps Link to heading
With storage and basic cluster infrastructure in place, the foundation is set for building out more sophisticated services. The next phase of this homelab journey will focus on two critical aspects: setting up proper ingress controllers to manage external access to services, and implementing DNS management to make services easily accessible within my network. These components, combined with our storage foundation, will create a robust platform for hosting applications and experimenting with new technologies. I’m particularly excited to explore how these foundations will support future projects like GitOps workflows and automated CI/CD pipelines.