# P5x csi-driver
P5x is a set of tools for running a Kubernetes cluster on LXC containers in Proxmox. The `csi-driver` component provides a Kubernetes `StorageClass` and provisioner that will automatically create, migrate, and mount Proxmox volumes on a designated network storage for use with Kubernetes `PersistentVolumes`.
**Why?** This allows the storage backend to be configured in Proxmox, instead of requiring a separate Kubernetes storage solution or local provisioner. It also makes Kubernetes volumes appear as native Proxmox disks, separate from the storage of the LXC container they're running on, which is much closer to a traditional cloud environment like EKS/GKE.
Before setting up the `csi-driver`, you must first deploy the P5x `api-server` component. The `api-server` is required for all P5x components and manages communication between Kubernetes and the underlying Proxmox infrastructure.
## Deployment
### Deploy CSI Driver to Kubernetes
Once you have deployed the `api-server`, deploy the Kubernetes resources for the `csi-driver`:
```shell
kubectl apply -f deploy
```
In your Kubernetes cluster, in the `p5x-system` namespace, you should now see a `p5x-csi-controller-0` pod along with all the `p5x-csi-node-*` pods from the CSI DaemonSet.
### Verify API Connectivity & Get SSH Pubkey
Run the following command to verify that the CSI controller can communicate with the API server. This prints out P5x's SSH public key. Note this down, as we'll use it in the next step.
```shell
kubectl exec -n p5x-system p5x-csi-controller-0 -c csi-plugin -- curl -s http://api.p5x-system.svc.cluster.local:3450/system/pubkey
```
### Configure SSH Access on K8s LXC Containers
**Why?** See the Q&A section below.
**Prerequisite:** The Kubernetes nodes (LXC containers) must have an SSH server running with key-based `root` authentication enabled. Example for EL-based Linux distros:

```shell
dnf install -y openssh-server
systemctl enable --now sshd
```
On each of the Kubernetes nodes (LXC containers) that P5x will interact with (i.e. create disks on, mount disks to), add the public key from the previous step to the `/root/.ssh/authorized_keys` file.
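As a sketch, this step can be scripted from the Proxmox host. `append_key` and `push_key` are hypothetical helper names (not part of P5x), and the key and container IDs below are placeholders for your environment:

```shell
# Placeholder: paste the public key printed in the previous step here.
PUBKEY="ssh-ed25519 AAAA... p5x"

# append_key FILE KEY: add KEY to an authorized_keys file exactly once,
# creating the file and directory with the permissions sshd expects.
append_key() {
  mkdir -p "$(dirname "$1")" && chmod 700 "$(dirname "$1")"
  touch "$1" && chmod 600 "$1"
  grep -qxF "$2" "$1" || echo "$2" >> "$1"
}

# push_key CTID: run append_key inside an LXC container from the Proxmox
# host via pct, so no SSH is needed yet. Assumes bash exists in the container.
push_key() {
  pct exec "$1" -- bash -c "$(declare -f append_key); append_key /root/.ssh/authorized_keys '$PUBKEY'"
}

# Example usage on the PVE host (container IDs are placeholders):
#   push_key 110
#   push_key 111
```

Running `append_key` twice with the same key is a no-op, so the script is safe to re-run after adding new nodes.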
### Registering Nodes
I am still working on a system for the P5x API server to auto-discover K8s nodes (LXC containers) or receive them from a config file. For now, you can run the following command to manually register a node with P5x:
```shell
kubectl exec -n p5x-system p5x-csi-controller-0 -c csi-plugin -- curl \
  -X POST --location "http://api.p5x-system.svc.cluster.local:3450/node" \
  -H "Content-Type: application/json" \
  -d '{
    "hostname": "k8s-node-hostname",
    "pve_id": 110,
    "pve_host": "lxc-container-hostname",
    "assigned_ip": "lxc-container-ip-address",
    "assigned_subnet": 24,
    "is_permanent": true
  }'
```
Parameters:

- `hostname` - The hostname of the K8s node as it appears in the Kubernetes cluster (i.e. `kubectl get nodes`)
- `pve_id` - The Proxmox ID of the LXC container
- `pve_host` - The hostname of the PVE node where the LXC container resides, as it appears in the Proxmox VE web interface
- `assigned_ip` - Resolvable LAN IP address of the LXC container, used for SSH access
- `assigned_subnet` - Integer subnet prefix length (e.g. `24` for a /24) for the LAN IP of the LXC container
- `is_permanent` - For future use; this should be `true` for manually-provisioned nodes
You can run this command once for each node in your K8s cluster. After doing so, P5x will be able to manage volumes for pods on those nodes.
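For several nodes, the call above can be wrapped in a small helper. This is only a sketch: `build_payload` and `register_node` are hypothetical names, and the hostnames, IDs, and IPs in the usage example are placeholders; the endpoint and field names come from the command above.

```shell
# build_payload HOSTNAME PVE_ID PVE_HOST IP SUBNET:
# construct the JSON body for the POST /node endpoint.
build_payload() {
  printf '{"hostname":"%s","pve_id":%s,"pve_host":"%s","assigned_ip":"%s","assigned_subnet":%s,"is_permanent":true}' \
    "$1" "$2" "$3" "$4" "$5"
}

# register_node: POST the payload to the API server through the controller pod.
register_node() {
  kubectl exec -n p5x-system p5x-csi-controller-0 -c csi-plugin -- curl -s \
    -X POST --location "http://api.p5x-system.svc.cluster.local:3450/node" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$@")"
}

# Example usage (all values are placeholders):
#   register_node k8s-node-1 110 pve1 192.168.1.110 24
#   register_node k8s-node-2 111 pve1 192.168.1.111 24
```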
## Usage
Once the CSI driver is set up and the K8s nodes are registered with the API server, you can use the P5x `StorageClass` for storage in Kubernetes. Here's an example `PersistentVolumeClaim`:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-1
  namespace: my-namespace
spec:
  storageClassName: p5x
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Once the PVC is created and mounted to a pod, the CSI driver will create a new Proxmox volume, migrate it to the correct LXC container, and mount it into the K8s pod for use.
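For instance, a minimal pod that consumes the claim above might look like the following sketch (the pod name, image, and mount path are placeholders, not part of P5x):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod-1
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: rockylinux:9
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: example-vol-1
          mountPath: /mnt/example-pvc-1
  volumes:
    - name: example-vol-1
      persistentVolumeClaim:
        claimName: example-pvc-1
```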
For a complete example, see the `examplepod.yaml` file. Inside the pod, you should see something like:
```
[root@example-pod-1 /]# df -h /mnt/*
Filesystem                                        Size  Used Avail Use% Mounted on
/dev/mapper/synology--scsi--lun-vm--102--disk--1  4.9G   28K  4.6G   1% /mnt/example-pvc-1
/dev/mapper/synology--scsi--lun-vm--102--disk--2  4.9G   28K  4.6G   1% /mnt/example-pvc-2
```
Deleting the pod will unmount the volume(s), but will not delete them. Deleting the PVCs will permanently delete the disks.
## Q&A
- **Why does P5x need `root` access to Proxmox and my K8s LXC containers?**
  - The fundamental answer to both of these questions is "so it can dynamically unmount/migrate volumes without rebooting the LXC containers."
  - You cannot dynamically unmount a Proxmox volume from an LXC container using the Proxmox API, so in order to do so without requiring a reboot, P5x uses its SSH access to the LXC container to `umount` the disk from the LXC container's filesystem.
  - Okay, but why Proxmox?
    - Reason 1: Some of these disk operations cannot be done via the Proxmox API and require access to read/modify the `/etc/pve/lxc/*.conf` config for the LXC container directly.
    - Reason 2: For security reasons, when Proxmox mounts a disk in an LXC container, it also shadow mounts the disk in an isolated namespace on the Proxmox host itself. This prevents certain vulnerabilities where a hostile LXC container (ab)uses `umount` internally to gain access to the host filesystem. However, it also prevents dynamically unmounting a volume from the LXC container without rebooting the container. P5x uses its SSH access to the Proxmox node to `umount` the disk from the shadow mount in the isolated namespace.
- **What are the `carrier-...` LXC containers in my Proxmox VE cluster?**
  - Currently, Proxmox does not support dynamically migrating an "Unused Disk" from an LXC container on one physical PVE host to a different PVE host.
  - To accomplish this, P5x provisions a tiny dummy LXC container on the same PVE host, moves the disk over to that dummy container, migrates the entire dummy container to the new PVE host (which is supported), then moves the disk off of the dummy container.
## License
P5x: Proxmox on Kubernetes - CSI Driver
Copyright (C) 2025 Garrett Mills shout@garrettmills.dev
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.