How to integrate MinIO with LINSTOR to distribute data on Kubernetes

While LINBIT’s LINSTOR is well on its way to becoming an industry standard for software-defined block storage, MinIO stands out as the leading product in the object storage world.

LINBIT has been working on an integration with MinIO for a long time and now provides an architecture that can be used reliably in business solutions.

The most prominent combination of LINSTOR and MinIO appears in Intel’s Rack Scale Design (RSD) architecture.

Intel, aiming to deliver high-performance object storage with 20 servers and 4 storage nodes in a single rack, entrusted disk management to LINSTOR while using MinIO for object storage.

In this article, we wanted to give an example of how LINSTOR and MinIO can be combined.

If you have any questions about the architecture or installation, please feel free to contact us on our Slack channel.


This tutorial shows how to decouple the MinIO application service from its data on Kubernetes by using LINSTOR as a distributed persistent volume instead of a local persistent volume.

1. Ubuntu virtual machine setup

Create a new, updated Ubuntu x86_64 virtual machine with two disks: one for Ubuntu and applications, the other for MinIO data storage.

In this case:

root@minikube:~# lsb_release -r
Release:	18.04
root@minikube:~# lsblk
NAME                                                                  MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
sda                                                                     8:0     0   30G  0 disk
vda                                                                   252:0     0   60G  0 disk
├─vda1                                                                252:1     0 59.9G  0 part /
├─vda14                                                               252:14    0    4M  0 part
└─vda15                                                               252:15    0  106M  0 part /boot/efi

2. DRBD 9.0 setup

To install the latest DRBD 9.0, add the LINBIT PPA first:

root@minikube:~# add-apt-repository ppa:linbit/linbit-drbd9-stack
root@minikube:~# apt update
root@minikube:~# apt install -y linux-headers-`uname -r`
root@minikube:~# apt install -y drbd-utils drbd-dkms lvm2

Take a look at the details of DRBD 9.0:

root@minikube:~# modinfo drbd
filename:       /lib/modules/4.15.0-106-generic/updates/dkms/drbd.ko
alias:          block-major-147-*
license:        GPL
version:        9.0.23-1
description:    drbd - Distributed Replicated Block Device v9.0.23-1
author:         Philipp Reisner <philipp.reisner@linbit.com>, Lars Ellenberg <lars.ellenberg@linbit.com>
srcversion:     96D47E12A7A913F21D2507E
depends:        libcrc32c
retpoline:      Y
name:           drbd
vermagic:       4.15.0-106-generic SMP mod_unload
signat:         PKCS#7
sig_hashalgo:   md4
parm:           enable_faults:int
parm:           fault_rate:int
parm:           fault_count:int
parm:           fault_devs:int
parm:           disable_sendpage:bool
parm:           allow_oos:DONT USE! (bool)
parm:           minor_count:Approximate number of drbd devices (1-255) (uint)
parm:           usermode_helper:string
parm:           protocol_version_min:drbd_protocol_version

3. minikube setup

Following the official Kubernetes documentation, install the latest minikube and kubectl on Ubuntu.

root@minikube:~# curl -L https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 -o /usr/local/bin/minikube && chmod a+x /usr/local/bin/minikube
root@minikube:~# curl -L https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && chmod a+x /usr/local/bin/kubectl

Since minikube will run directly in the virtual machine, set the minikube driver to none (bare-metal).

Two prerequisites must be met first.

3.1 Install Docker

root@minikube:~# apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
root@minikube:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
root@minikube:~# apt-key fingerprint 0EBFCD88
root@minikube:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
root@minikube:~# apt update
root@minikube:~# apt install -y docker-ce docker-ce-cli
root@minikube:~# docker run hello-world

3.2 Install conntrack

root@minikube:~# apt install -y conntrack

Now, it’s time to set minikube driver and start minikube.

root@minikube:~# minikube config set driver none
root@minikube:~# minikube start
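Once minikube is up, it is worth confirming that the single-node cluster is healthy before moving on; for example:

```shell
# Confirm the cluster components are running and the node is Ready.
minikube status
kubectl get nodes
```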

4. LINSTOR deployment with Helm v3 Chart

Deployment via the LINSTOR CSI operator is recommended. We maintain Helm charts for this, and as such suggest using Helm v3.

Download Helm v3 from GitHub, and copy the helm binary to /usr/local/bin/.

Then label the nodes that will be used for LINSTOR, where <NODE_NAME> is the hostname; in this case <NODE_NAME> = minikube.

root@minikube:~# kubectl label nodes minikube linstor.linbit.com/piraeus-node=true

In this tutorial, lvm-thin will be used for back-end storage.

root@minikube:~# git clone https://github.com/piraeusdatastore/piraeus-operator.git
root@minikube:~# helm dependency update ./piraeus-operator/charts/piraeus
root@minikube:~# helm install piraeus-op ./piraeus-operator/charts/piraeus --set operator.nodeSet.automaticStorageType=LVMTHIN
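After the helm install, the operator, controller, and satellite pods take a short while to come up. You can check their progress (pod names depend on the release name, piraeus-op here):

```shell
# List pods; re-run (or add -w to watch) until all report Running/Ready.
kubectl get pods
```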

Now the Docker containers are being created, DRBD 9.0 is injected automatically, and all unused disks are added to the LINSTOR storage pool automatically.

root@minikube:~# lsmod | grep -i drbd
drbd_transport_tcp     24576  0
drbd                  536576  3 drbd_transport_tcp
libcrc32c              16384  6 nf_conntrack,nf_nat,dm_persistent_data,drbd,raid456,ip_vs

root@minikube:~# lsblk
NAME                                                                  MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
sda                                                                     8:0     0   30G  0 disk
├─linstor_sda-sda_tmeta                                               253:0     0   32M  0 lvm
│ └─linstor_sda-sda-tpool                                             253:2     0   30G  0 lvm
└─linstor_sda-sda_tdata                                               253:1     0   30G  0 lvm
  └─linstor_sda-sda-tpool                                             253:2     0   30G  0 lvm

Verify the deployment with the LINSTOR client to see what actually happened.

root@minikube:~# apt install -y linstor-client

root@minikube:~# linstor node list
| Node                       | NodeType   | Addresses                   | State  |
| minikube                   | SATELLITE  | (PLAIN) | Online |
| piraeus-op-cs-controller-0 | CONTROLLER | (PLAIN)     | Online |

root@minikube:~# linstor storage-pool list
| StoragePool          | Node     | Driver   | PoolName        | FreeCapacity | TotalCapacity | CanSnapshots | State |
| DfltDisklessStorPool | minikube | DISKLESS |                 |              |               | False        | Ok    |
| sda                  | minikube | LVM_THIN | linstor_sda/sda |    27.79 GiB |     29.93 GiB | True         | Ok    |

A new storage pool has been created and automatically named after the device.

5. MinIO setup

Create a storage class for MinIO first:

root@minikube:~# cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: "linstor-csi-lvm-thin-r1"
provisioner: linstor.csi.linbit.com
parameters:
  autoplace: "1"
  storagePool: "sda"
reclaimPolicy: Delete
root@minikube:~# kubectl apply -f sc.yaml
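You can verify that the StorageClass was registered with the LINSTOR CSI provisioner:

```shell
# The new class should appear, backed by the linstor.csi.linbit.com provisioner.
kubectl get storageclass
```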

Deploy MinIO:

root@minikube:~# cat minio.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pv-claim
      containers:
      - name: minio
        image: minio/minio:latest
        args:
        - server
        - /data
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: linstor-csi-lvm-thin-r1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  ports:
    - port: 9000
      nodePort: 32701
      protocol: TCP
  selector:
    app: minio
  sessionAffinity: None

root@minikube:~# kubectl apply -f minio.yaml
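Before opening the MinIO console, you can check that the claim bound and the deployment rolled out successfully:

```shell
# The PVC should be Bound and the pod Running before MinIO is reachable.
kubectl get pvc minio-pv-claim
kubectl rollout status deployment/minio-deployment
kubectl get svc minio-service
```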

Verify this PVC in LINSTOR.

root@minikube:~# linstor volume-definition list
| ResourceName                             | VolumeNr | VolumeMinor | Size     | Gross | State |
| pvc-8838db2c-d65d-4b6c-8bbf-660ea7718699 | 0        | 1000        | 9.31 GiB |       | ok    |

And take a look at the block device level in Ubuntu:

root@minikube:~# lsblk
NAME                                                                  MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
sda                                                                     8:0     0   30G  0 disk
├─linstor_sda-sda_tmeta                                               253:0     0   32M  0 lvm
│ └─linstor_sda-sda-tpool                                             253:2     0   30G  0 lvm
│   ├─linstor_sda-sda                                                 253:3     0   30G  0 lvm
│   └─linstor_sda-pvc--8838db2c--d65d--4b6c--8bbf--660ea7718699_00000 253:4     0  9.3G  0 lvm
│     └─drbd1000                                                      147:1000  0  9.3G  0 disk /var/lib/kubelet/pods/2141f8df-d320-4bd6-8af5-418a1c5dfa0b/volumes/
└─linstor_sda-sda_tdata                                               253:1     0   30G  0 lvm
  └─linstor_sda-sda-tpool                                             253:2     0   30G  0 lvm
    ├─linstor_sda-sda                                                 253:3     0   30G  0 lvm
    └─linstor_sda-pvc--8838db2c--d65d--4b6c--8bbf--660ea7718699_00000 253:4     0  9.3G  0 lvm
      └─drbd1000                                                      147:1000  0  9.3G  0 disk /var/lib/kubelet/pods/2141f8df-d320-4bd6-8af5-418a1c5dfa0b/volumes/

In a browser, navigate to the IP address of this Ubuntu virtual machine at the exposed port 9000 or 32701, and log in using the default credentials:

Access Key : minio 
Secret key : minio123

Upload some files; in this case, Fedora-Cinnamon-Live-x86_64-32-1.6.iso (1.9 GiB) will be used. Then check LINSTOR again.

root@minikube:~# linstor volume list
| Node     | Resource                                 | StoragePool | VolNr | MinorNr | DeviceName    | Allocated | InUse |    State |
| minikube | pvc-8838db2c-d65d-4b6c-8bbf-660ea7718699 | sda         |     0 |    1000 | /dev/drbd1000 |  2.14 GiB | InUse | UpToDate |

Use the official MinIO client (mc) to list the files, using the same credentials as above.
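The mc binary first needs to be downloaded and pointed at the server. A minimal sketch, assuming mc runs on the same VM and the server listens on port 9000 (newer mc versions use `mc alias set` instead of `mc config host add`):

```shell
# Download the mc client (Linux amd64) and register the server
# under the alias "local" with the credentials from above.
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
./mc config host add local http://127.0.0.1:9000 minio minio123
```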

root@minikube:~# ./mc ls local
[2020-06-15 21:33:35 CST]      0B my-bucket/
root@minikube:~# ./mc ls local/my-bucket/
[2020-06-15 21:34:41 CST]  1.9GiB Fedora-Cinnamon-Live-x86_64-32-1.6.iso


Using LINBIT’s LINSTOR as a block storage orchestrator not only replicates data across many server nodes, but also supports diskless mode, allowing one node to access block storage on another. Moreover, when integrated with the Stork plugin, it can schedule a pod on the same server node that houses its data, allowing for native storage performance.
