liara,

A complete Kubernetes cluster for a homelab would probably be overkill (unless you really want a Kubernetes playground, which some folks do). That said, yes, my recommendation these days would be k0s directly. I used k3s up until recently but gave k0s a shot and found it's a bit lighter on resources, more configurable (for instance, you can choose to run CRI-O instead of containerd, which isn't an option with k3s), and has some extra features, like letting you put Helm charts with their values directly in the k0s config.
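As a rough sketch of that last feature, the k0s ClusterConfig has an extensions section where you can declare Helm repos and charts; the specific repo, chart, and version below are just placeholders to show the shape, not a recommendation:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    helm:
      repositories:
        # any Helm repo works; ingress-nginx is just an example
        - name: ingress-nginx
          url: https://kubernetes.github.io/ingress-nginx
      charts:
        # k0s installs and reconciles this chart for you at startup
        - name: ingress-nginx
          chartname: ingress-nginx/ingress-nginx
          version: "4.10.0"          # pin whatever version you want
          namespace: ingress-nginx
          values: |
            controller:
              service:
                type: NodePort
```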

k0s vs. k3s largely comes down to personal preference, but for me the deciding factors were:

  • I disabled a lot of k3s's out-of-the-box features anyway (swapped Flannel for Calico, used ingress-nginx instead of Traefik). k0s feels a little less opinionated: it doesn't include quite as many batteries at initialization, but that doesn't bother me because I have my own preferences for how to handle certain parts of my stack
  • both can use SQLite as the data backend (and both do by default in single-node mode), which uses far fewer resources than running etcd as the datastore
  • I find k0s uses a couple hundred MB less RAM for the control-plane components (roughly 700 MB vs. 1 GB for k3s)
  • lower steady-state CPU usage from the API server
  • both have good documentation for their distro-specific features, and of course Kubernetes itself is extremely well documented, and that's the "language" used to define the services and pods

As for the distro to run it on, I use MicroOS myself (an immutable OS; I have it set to automatically update and reboot once a week), but Debian is my second choice here and my usual preference for server distros. The beauty of this setup is that the container host really just needs the bare minimum to run the containers. There's less that can break, because the containers are all managed upstream, so the main breakage concern basically becomes: did the server boot, and did k0s start?

NFS is fine and actually natively supported as a Kubernetes volume type: kubernetes.io/docs/concepts/storage/volumes/

One option is to mount it on the host first and then use a hostPath volume to expose it to the container; another is to just mount the NFS path directly in the pod. As for permissions, you may need to do some mapping, but Kubernetes also has security contexts that let you set the UID the pod runs as. If you need the user to be privileged and root, you can do that; if you need UID 5124, you can do that too.
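Here's a minimal sketch of the direct-NFS approach combined with a security context. The server address, export path, and image are made-up placeholders, and 5124 is just the example UID from above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  securityContext:
    runAsUser: 5124    # whatever UID owns the files on the NFS share
    runAsGroup: 5124
    fsGroup: 5124
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: media
          mountPath: /media
  volumes:
    - name: media
      nfs:
        server: 192.168.1.50   # assumed NFS server address
        path: /export/media    # assumed export path
        readOnly: false
```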

If your goal right now is a Plex server and not much else to start with, then this makes things very easy:

  • spin up k0s
  • add a Plex pod/manifest
  • add a Service of type NodePort and expose Plex on a static node port of 32400 (we're lucky that Plex's port happens to fall inside the default NodePort range); see the sketch after this list
  • the GPU passthrough I admit will take some work, but it should be doable
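To make the middle two steps concrete, here's roughly what the manifests could look like. The image tag and the NFS details are assumptions to fill in for your environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plex
  template:
    metadata:
      labels:
        app: plex
    spec:
      containers:
        - name: plex
          image: plexinc/pms-docker   # pin a specific tag in practice
          ports:
            - containerPort: 32400
          volumeMounts:
            - name: media
              mountPath: /data
      volumes:
        - name: media
          nfs:
            server: 192.168.1.50   # assumed, same share as the earlier example
            path: /export/media
---
apiVersion: v1
kind: Service
metadata:
  name: plex
spec:
  type: NodePort
  selector:
    app: plex
  ports:
    - name: pms
      port: 32400
      targetPort: 32400
      nodePort: 32400   # works because 32400 sits inside the default 30000-32767 range
```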

You can add ingress-nginx, cert-manager, MetalLB, etc. later down the line if you get curious and want to expand a bit (Sonarr, Radarr, AdGuard Home, etc.)

You could also just go full stupid with KubeVirt, but it's not a project I've personally explored. IIRC it basically lets you provision more persistent VMs with k8s rather than containers.
