How to Set Up LINSTOR on Proxmox VE

In this technical blog post, we show you how to integrate DRBD volumes into Proxmox VE via a storage plugin developed by LINBIT. The advantages of using DRBD include a configurable number of data replicas (e.g., 3 copies in a 5-node cluster) and access to the data on every node, which makes VM live-migrations very fast (usually only a few seconds, depending on memory pressure).


The rest of this post assumes that you have already set up Proxmox VE (the LINBIT example uses 4 nodes) and have created a PVE cluster consisting of all nodes. While this post is not meant to replace the DRBD User’s Guide, it walks through a complete setup.

The setup consists of two important components:

  1. LINSTOR, which manages DRBD resource allocation
  2. the linstor-proxmox plugin, which implements the Proxmox VE storage plugin API and executes LINSTOR commands

In order for the plugin to work, you must first create a LINSTOR cluster.


We assume here that you have already set up the LINBIT Proxmox repository as described in the User’s Guide. If you have not, execute the following commands on all cluster nodes. First, we need the low-level infrastructure (i.e., the DRBD 9 kernel module and drbd-utils):

apt install pve-headers
apt install drbd-dkms drbd-utils
rmmod drbd; modprobe drbd
grep -q drbd /etc/modules || echo "drbd" >> /etc/modules
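If the machine previously had the in-kernel DRBD 8.4 module loaded, it is worth double-checking that the version 9 module is really the one in use after the rmmod/modprobe step (the comments below contain a report of exactly this pitfall). A small sketch; the drbd_is_v9 helper is purely illustrative:

```shell
# Hypothetical helper: succeed only for a DRBD 9.x version string.
drbd_is_v9() {
    case "$1" in
        9.*) return 0 ;;
        *)   return 1 ;;
    esac
}

# On a real node, the loaded module version can be read from sysfs:
#   drbd_is_v9 "$(cat /sys/module/drbd/version)" || echo "old module still loaded"
drbd_is_v9 "9.0.14" && echo "looks like DRBD 9"
```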

The next step is to install LINSTOR:

apt install linstor-controller linstor-satellite linstor-client
systemctl start linstor-satellite
systemctl enable linstor-satellite

Now, decide which of your hosts should be the current controller node and enable the linstor-controller service on that particular node only:

systemctl start linstor-controller
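The flip side of running the controller on one node only: on every other node, make sure the controller service stays off, so that two controllers never run against the same cluster. A minimal sketch, assuming systemd:

```shell
# on all nodes except the chosen controller node:
systemctl stop linstor-controller
systemctl disable linstor-controller
```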

Volume creation

Obviously, DRBD needs storage to create volumes. In this post we assume a setup where all nodes contain an LVM thin pool called drbdpool. In our sample setup we created it on the pve volume group, but your storage topology may differ. On the node that runs the controller service, execute the following commands to add your nodes:

linstor node create alpha --node-type Combined
linstor node create bravo --node-type Combined
linstor node create charlie --node-type Combined
linstor node create delta --node-type Combined

“Combined” means that the node is allowed to run a LINSTOR controller and/or a satellite; it does not have to run both. It is therefore safe to specify “Combined”; it influences neither performance nor the number of services started.
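Before registering storage pools with LINSTOR, the LVM thin pool itself has to exist on every node. A minimal sketch, assuming free space in the existing pve volume group; the 100G size is a placeholder, adjust it to your disks:

```shell
# run on every node; creates the thin pool pve/drbdpool
lvcreate -L 100G -T pve/drbdpool
# verify it exists:
lvs pve/drbdpool
```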

The next step is to configure a storage pool definition. As described in the User’s guide, most LINSTOR objects consist of a “definition” and then concrete instances of such a definition:

linstor storage-pool-definition create drbdpool

This is a good time to mention that the LINSTOR client provides handy shortcuts for its sub-commands. The previous command could have been written as linstor spd c drbdpool. The next step is to register every node’s storage pool:

for n in alpha bravo charlie delta; do \
  linstor storage-pool create $n drbdpool lvmthin pve/drbdpool; \
done

DRBD resource creation

After that we are ready to create our first real DRBD resource:

linstor resource-definition create first
linstor volume-definition create first 10M --storage-pool drbdpool
linstor resource create alpha first
linstor resource create bravo first

Now, check with drbdadm status that “alpha” and “bravo” contain a replicated DRBD resource called “first”. Afterwards, this dummy resource can be deleted on all nodes by deleting its resource definition:

linstor resource-definition delete -q first

LINSTOR Proxmox VE Plugin Setup

As DRBD and LINSTOR are already set up, the only things missing are the plugin itself and its configuration.

apt install linstor-proxmox

The plugin is configured via the file /etc/pve/storage.cfg:

drbd: drbdstorage
    content images,rootdir
    redundancy 2

It is not necessary to copy that file to the other nodes, as /etc/pve is already a replicated file system. After the configuration is done, you should restart the following service:

systemctl restart pvedaemon

After this setup is done, you are able to create virtual machines backed by DRBD from the GUI. To do so, select “drbdstorage” as storage in the “Hard Disk” section of the VM. LINSTOR selects the nodes that have the most free storage to create the replicated backing devices.


The interested reader can check which ones were selected via linstor resource list. More importantly, the storage can be accessed by all nodes in the cluster via a DRBD feature called “diskless clients”. So let’s assume “alpha” and “bravo” had the most free space and were selected, and the VM was created on node “bravo”. Via the low-level tool drbdadm status we now see that the resource is created on two nodes (i.e., “alpha” and “bravo”) and that the DRBD resource is in the “Primary” role on “bravo”.

Now we want to migrate the VM from “bravo” to node “charlie”. This is again done via a few clicks in the GUI, but the interesting steps happen behind the scenes: the storage plugin realizes that it has access to the data on “alpha” and “bravo” (our two replicas), but also needs access on “charlie” to execute the VM. The plugin therefore creates a diskless assignment on “charlie”. When you execute drbdadm status on “charlie”, you see that three nodes are now involved in the overall picture:

• Alpha with storage in Secondary role
• Bravo with storage in Secondary role
• Charlie as a diskless client in Primary role

Diskless clients are created (and deleted) on demand without further user interaction, besides moving around VMs in the GUI. This means that if you now move the VM back to “bravo”, the diskless assignment on “charlie” gets deleted as it is no longer needed.

If you had moved the VM from “charlie” to “delta” instead, the diskless assignment on “charlie” would have been deleted and a new one created for “delta”.

Probably even more interesting for you: all of this, including the VM migration, happens within seconds, without moving the actual replicated storage contents.

Next Steps

So far, we have created a replicated and highly available setup for our VMs, but the LINSTOR controller, and especially its database, is not highly available. In a future blog post, we will describe how to make the controller itself highly available using only software already included in Proxmox VE (i.e., without introducing complex technologies like Pacemaker). This will be achieved with a dedicated controller VM provided by LINBIT as an appliance.

Roland Kammerer
Software Engineer at Linbit
Roland Kammerer studied technical computer science at the Vienna University of Technology and graduated with distinction. Currently, he is a PhD candidate with a research focus on time-triggered real-time systems and works for LINBIT in the DRBD development team.
20 replies
  1. Yannis
    Yannis says:

    Thanks for this guide.

    Just wanted to comment that the line “systemctl start linstor-server” is probably wrong, that should be “systemctl start linstor-controller”. In addition I had to use “systemctl unmask linstor-controller” first, then start the controller service.

    Regarding diskless nodes, how are reads/writes distributed to the other ‘diskfull’ nodes? Since these nodes operate as clients of the other nodes, I assume that the performance of VMs live-migrated to them is affected (negatively), at least for reads, correct?


    • Roland Kammerer
      Roland Kammerer says:

      Thanks Yannis, we fixed the “linstor-server” vs. “linstor-controller” part.

      For the diskless clients nothing changed, it works as diskless clients work in DRBD, and as they worked in combination with DRBDManage: Reads are distributed among nodes with storage (i.e., parts from node A, parts from node B,…). Writes are done to all nodes that have storage, obviously all storage nodes need the data. There is (currently) no “write relaying/forwarding”.

  2. siD
    siD says:

    Hi, nice how-to, but I still have no success… Trying it on a 2-node Proxmox 5.2 cluster (+ one small node for quorum)

    1. Is the linstor node create command correct? I had to switch the IP and node name order: “linstor node create pve1 --node-type Combined” (only on the controller node)
    # linstor node list
    ┊ Node ┊ NodeType ┊ IPs ┊ State ┊
    ┊ pve1 ┊ COMBINED ┊ ┊ Online ┊
    ┊ pve2 ┊ COMBINED ┊ ┊ Online ┊

    2. I use another lvm partition on /dev/sdb. I prepared “vgcreate drbdpool /dev/sdb1” (on both nodes). So I used this command: “linstor storage-pool create $n drbdpool lvm drbdpool” (on the controller node). Right?

    # linstor storage-pool list
    ┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ Free ┊ SupportsSnapshots ┊
    ┊ drbdpool ┊ pve1 ┊ LvmDriver ┊ drbdpool ┊ 830.00 GiB ┊ false ┊
    ┊ drbdpool ┊ pve2 ┊ LvmDriver ┊ drbdpool ┊ 830.00 GiB ┊ false ┊

    3. Manually creating and deleting resources works fine.

    5. But the linstor-proxmox plugin does not work. “drbdstorage” shows up on both nodes, Enabled: YES, but Active: NO.
    Creating a VM on pve1 shows the error: TASK ERROR: unable to create VM 104 – error with cfs lock ‘storage-drbdstorage’: drbd error: Startup not successful (no quorum? not *both* nodes up in a 2 node cluster?)
    Creating a VM on pve2 shows the error: TASK ERROR: unable to create VM 104 – error with cfs lock ‘storage-drbdstorage’: org.freedesktop.DBus.Error.ServiceUnknown: The name org.drbd.drbdmanaged was not provided by any .service files

    Any help, pls ? 🙂
    (and sorry for my english)

    • siD
      siD says:

        Oh… my fault… After “systemctl restart pvedaemon” the linstor-proxmox plugin works.

        But after creating a disk in Proxmox, syncing between the nodes is very slow (about 5 MB/s). Is this normal? Maybe lvm vs. lvmthin storage-pool?

      • Roland Kammerer
        Roland Kammerer says:

        Hi siD,

        Name and IP were in the wrong order, thank you, I updated the blog post. I also added the hint to restart the “pvedaemon”.

        The slow sync is a bit off-topic for this blog post. Here you would have to start the usual debugging and test whether it is the LVM, the network, etc. For that, please read the User’s Guide or post to the drbd-user ML.

        • siD
          siD says:

          Hi, OK, I will subscribe to the drbd-user ML.
          Thanks for the great blog, and I’m looking forward to the follow-up on the HA LINSTOR controller.

  3. k
    k says:

    I’m having an error deleting a resource-definition and/or the volume-definitions defined inside it. When I use ‘linstor resource-definition delete name’ or ‘linstor volume-definition delete name num’, the command hangs indefinitely. No logical volumes were created; is there some way I can force the reconfiguration? Even reinstalling leaves the nodes, resources, and volumes intact. I’m looking for something similar to drbdmanage’s ‘uninit’, which was useful for these situations.

    • Roland Kammerer
      Roland Kammerer says:

      Obviously that should not happen. I remember a bug in that area, but that should be fixed. Please make sure to use the latest versions (i.e., there was a new release yesterday).

      A hard “uninit” would be to delete/recreate the database of the linstor-controller node.

      In general we prefer this kind of communication to happen on the “drbd-user” mailing-list. If you think the bug still exists, feel free to file a bug on

      Thanks, rck

  4. Yvan
    Yvan says:

    Hi, just after having installed controller and one satellite, I have a problem
    # linstor node list
    ┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
    It’s not my first DRBD cluster on Proxmox, but in this case I don’t know what I could do to solve it.
    Any idea ?

    • Roland Kammerer
      Roland Kammerer says:

      This highly depends on the versions you use, and also the error log would be interesting. A best guess would be the host names do not match. Sorry, but we do not consider this blog a “support forum”. Please post this issue on the drbd-user mailing-list, or open an issue on github, then we try to help you. Thanks.

  5. Alessan
    Alessan says:


    First, thanks for this great guide.

    I’m having an issue on 3 nodes: when I shut down one node, already-started VMs work fine, but if I stop a VM I can’t start it again:
    TASK ERROR: command ‘drbdsetup wait-connect-resource vm-100-disk-1’ failed: got timeout

    # linstor node l
    ┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
    ┊ linstor-controller ┊ CONTROLLER ┊ (PLAIN) ┊ Unknown ┊
    ┊ pve1 ┊ SATELLITE ┊ (PLAIN) ┊ Online ┊
    ┊ pve2 ┊ SATELLITE ┊ (PLAIN) ┊ Online ┊
    ┊ pve3 ┊ SATELLITE ┊ (PLAIN) ┊ OFFLINE ┊
    # linstor sp l
    ┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊
    ┊ drbdpool ┊ pve1 ┊ LvmDriver ┊ vg_hdd ┊ 7.99 GiB ┊ 10.00 GiB ┊ false ┊
    ┊ drbdpool ┊ pve2 ┊ LvmDriver ┊ vg_hdd ┊ 7.99 GiB ┊ 10.00 GiB ┊ false ┊
    ┊ drbdpool ┊ pve3 ┊ LvmDriver ┊ vg_hdd ┊ ┊ ┊ false ┊
    # linstor r l
    ┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ State ┊
    ┊ vm-100-disk-1 ┊ pve1 ┊ 7000 ┊ Unused ┊ UpToDate ┊
    ┊ vm-100-disk-1 ┊ pve2 ┊ 7000 ┊ Unused ┊ UpToDate ┊
    ┊ vm-100-disk-1 ┊ pve3 ┊ 7000 ┊ ┊ Unknown ┊

      • Alessan
        Alessan says:

        Hi again,

        While waiting for the moderator to approve my subscription, I found the same issue with 2 nodes on the mailing list, with no solution:

        They talk about a pointless call to “drbdsetup wait-connect-resource”, so I commented out line 479 in /usr/share/perl5/PVE/Storage/Custom/ and everything works except clone.

        479| #wait_connect_resource($volname);

        > am pretty sure that this behavior is caused by some other component in
        > the system. DRBD does not decide on its own that now is a good time to
        > run a drbdsetup wait-connect-resource command for no apparent reason.

        This behavior is caused by the linstor-proxmox plugin and not some other external component…

        Anyway, thanks for support.

  6. Kyle
    Kyle says:


    I am trying to get this up and running on my cluster. I am running PVE5 on each node.
    I cannot get DRBD9 to work.
    I have read multiple tutorials including this one (and Yannis’ Blog entry).

    # I add the proxmox no-subscription repo to /etc/apt/sources.list
    deb stretch pve-no-subscription

    I add the linbit repo:
    wget -O- | apt-key add -
    PVERS=5 && echo "deb proxmox-$PVERS drbd-9.0" > /etc/apt/sources.list.d/linbit.list

    # Then do the following
    apt-get update && apt-get install linstor-proxmox
    apt install pve-headers
    apt install drbd-dkms drbd-utils
    rmmod drbd; modprobe drbd
    grep -q drbd /etc/modules || echo "drbd" >> /etc/module

    when I do a “cat /proc/drbd”, I still get version 8.4.7..
    If I continue setting up the environment with other nodes, the satellites cannot connect. When I review the linstor logs I see that It’s complaining DRBD9 not being installed.

    Can anyone give me some guidance on what I’m doing wrong?

    Thank you in advance.
    – Kyle

    • Roland Kammerer
      Roland Kammerer says:

      Usually this means that the old module is still loaded. “rmmod drbd” and “modprobe drbd”, then you should see the new module.

      PS: This is not a support forum. Feel free to subscribe and post to the drbd-user mailing list.

  7. Peter Frank
    Peter Frank says:

    please update the typo here
    grep -q drbd /etc/modules || echo "drbd" >> /etc/module

    Should be
    grep -q drbd /etc/modules || echo "drbd" >> /etc/modules

    please notice the S in /etc/modules

  8. Marcelo
    Marcelo says:

    I see you use lvm as backing storage to create the pool. I’m trying ZFS as backing storage to test this, and I was wondering: Does Linstor require the ZFS pool to be used exclusively ? More to the point: Can I use the same ZFS pool for VM backing storage and DRBD backing storage, at the same time ? Thank you! Great how-to post

    • Roland Kammerer
      Roland Kammerer says:

      LINSTOR or the plugin do not impose exclusivity. The only thing to be careful about is to not create resources manually that have the same name as one created by the plugin. That would produce a name conflict, but as long as you don’t do such nasty things intentionally, you should be fine.

      Please also note that this is a blog, not a support forum. If you have further questions, please subscribe to the drbd-user mailing list and ask questions there. Thanks.

