LINSTOR CSI Plugin for Kubernetes
A few weeks ago, LINBIT publicly released the LINSTOR CSI (Container Storage Interface) plugin. This means LINSTOR now has a standardized way of working with any container orchestration platform that supports CSI. Kubernetes is one of those platforms, so our developers put in the work to make LINSTOR integration with Kubernetes easy, and I’ll show you how!
You’ll need a couple of things to get started:
- Kubernetes Cluster (1.12.x or newer)
- LINSTOR Cluster
LINSTOR’s CSI plugin requires certain Kubernetes feature gates be enabled on the kube-apiserver and each kubelet.
Enable the CSINodeInfo and CSIDriverRegistry feature gates on the kube-apiserver by adding --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true to the list of arguments passed to the kube-apiserver static pod in the /etc/kubernetes/manifests/kube-apiserver.yaml manifest. It should look something like this:
# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    ... snip ...
    - --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true
    ... snip ...
To enable these feature gates on the kubelet, add the following argument to the KUBELET_EXTRA_ARGS variable in /etc/sysconfig/kubelet: --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true. Your config should look something like this:
# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"
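Before restarting the kubelet, it's worth confirming the gates actually made it into the variable. Here's a small sketch that checks a KUBELET_EXTRA_ARGS line for both gates; the line is hard-coded to mirror the config above (on Debian/Ubuntu systems the file is typically /etc/default/kubelet instead of /etc/sysconfig/kubelet):

```shell
# Sanity-check a KUBELET_EXTRA_ARGS line for the required feature gates.
line='KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"'
for gate in CSINodeInfo CSIDriverRegistry; do
  case "$line" in
    *"${gate}=true"*) echo "${gate}: enabled" ;;
    *)                echo "${gate}: MISSING" ;;
  esac
done
# After editing the real file, restart the service:
#   systemctl restart kubelet
```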
Once you’ve modified those two configurations, you can prepare the configuration for the CSI plugin’s sidecar containers. Download the latest version of the plugin definition with curl:
# curl -O https://raw.githubusercontent.com/LINBIT/linstor-csi/master/examples/k8s/deploy/linstor-csi.yaml
Set the value of each instance of LINSTOR-IP in linstor-csi.yaml to the IP address of your LINSTOR Controller. The placeholder IP in the example yaml is 192.168.100.100, so you can either change it with an editor or use the following command, setting CON_IP to your controller’s IP address:
# CON_IP="x.x.x.x"; sed -i.example s/192\.168\.100\.100/$CON_IP/g linstor-csi.yaml
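The -i.example flag edits the file in place while keeping a backup copy with an .example suffix. If you want to see what the substitution will do before touching the real file, here's a dry-run sketch on a single line (10.0.0.5 is just an example address):

```shell
# Preview the substitution on one sample line before editing linstor-csi.yaml.
CON_IP="10.0.0.5"
echo 'value: "192.168.100.100"' | sed "s/192\.168\.100\.100/$CON_IP/g"
# prints: value: "10.0.0.5"
```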
Finally, apply the yaml to the Kubernetes cluster:
# kubectl apply -f linstor-csi.yaml
You should now see the linstor-csi sidecar pods running in the kube-system namespace:
# watch -n1 -d kubectl get pods --namespace=kube-system --output=wide
Once the pods are running, you can define storage classes in Kubernetes that point at your LINSTOR storage pools, and then provision persistent volumes, optionally replicated by DRBD, for your containers.
Here is an example yaml definition that describes a LINSTOR storage pool in my cluster named thin-lvm:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-autoplace-1-thin-lvm
provisioner: io.drbd.linstor-csi
parameters:
  autoPlace: "1"
  storagePool: "thin-lvm"
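For DRBD-replicated volumes, the same definition can be varied. The sketch below assumes a second storage class where autoPlace: "2" asks LINSTOR to place the volume on two nodes, with DRBD replicating between them (the class name here is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-autoplace-2-thin-lvm   # illustrative name
provisioner: io.drbd.linstor-csi
parameters:
  autoPlace: "2"          # place on two nodes; DRBD replicates between them
  storagePool: "thin-lvm"
```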
And here is an example yaml definition for a persistent volume claim carved out of the above storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: linstor-autoplace-1-thin-lvm
  name: linstor-csi-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
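To use the claim, reference it from a pod spec. Here's a minimal sketch that mounts the claim above; the pod name, image, and mount path are illustrative, and only claimName comes from the definition above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linstor-csi-demo           # illustrative name
spec:
  containers:
  - name: app
    image: busybox                 # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data             # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: linstor-csi-pvc-0 # the PVC defined above
```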
Put it all together and you’ve got yourself an open source, high performance, block device provisioner for your persistent workloads in Kubernetes!
There are many ways to craft your storage class definitions for node selection, storage tiering, diskless attachments, or even off-site replicas. We’ll be working on our documentation surrounding new features, so stay tuned, and don’t hesitate to reach out for the most UpToDate information about LINBIT’s software!
Read more: CSI Plugin for LINSTOR Complete.




This guide is old, and the link for the yaml no longer works.
Hello Ilya,
That’s true; Kubernetes and LINSTOR are moving very quickly, and this blog was pulling its yaml from the master branch of our CSI plugin’s GitHub repository. You should use the example yamls from https://github.com/LINBIT/linstor-csi/tree/master/examples/k8s for deployments you’re rolling out now. One thing that stands out immediately to me as a potential issue is the provisioner name in the storage class definition; it has been changed from “io.drbd.linstor-csi” to “linstor.csi.linbit.com”. Again, you can find working examples at the GitHub link above.
Thanks for the feedback.