How to Set Up LINSTOR with OpenNebula
This post will guide you through the setup of the LINSTOR – OpenNebula Addon. After completing it, you will be able to easily live-migrate virtual machines between OpenNebula nodes and, additionally, have data redundancy.
Set up LINSTOR with OpenNebula
This post assumes that you already have OpenNebula installed and running on all of your nodes. First, I will give you a quick guide to installing LINSTOR; for more detailed documentation, please read the DRBD User’s Guide. The second part will show you how to add a LINSTOR image and system datastore to OpenNebula.
We will assume the following node setup:
| Node name | IP | Role |
|---|---|---|
| alpha | 10.0.0.1 | Controller / OpenNebula front-end |
| bravo | 10.0.0.2 | Virtualization host |
| charlie | 10.0.0.3 | Virtualization host |
| delta | 10.0.0.4 | Virtualization host |
Make sure you have configured two lvm-thin storage pools named linstorpool/thin_image and linstorpool/thin_system on all of your nodes.
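If you still need to create them, a minimal sketch could look like the commands below; the backing device /dev/sdb and the pool sizes are assumptions, so adjust them to your hardware:
vgcreate linstorpool /dev/sdb
lvcreate -L 100G -T linstorpool/thin_system
lvcreate -L 500G -T linstorpool/thin_image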
LINSTOR setup
Install LINSTOR packages
The easiest setup is to install the linstor-controller on the same node as the OpenNebula cloud front-end. The linstor-opennebula package contains our OpenNebula driver and is therefore essential on the OpenNebula cloud front-end node. On this node install the following packages:
apt install drbd-dkms drbd-utils python-linstor linstor-satellite linstor-client linstor-controller linstor-opennebula
After the installation completes, start the linstor-controller and enable the service:
systemctl start linstor-controller
systemctl enable linstor-controller
On all other virtualization nodes you do not need the linstor-controller, linstor-client, or linstor-opennebula packages:
apt install drbd-dkms drbd-utils python-linstor linstor-satellite
On all nodes (including the controller node), start and enable the linstor-satellite service:
systemctl start linstor-satellite
systemctl enable linstor-satellite
Now all LINSTOR-related services should be running.
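If you want to double-check, you can query the systemd units installed by the packages above:
systemctl status linstor-controller    # controller node only
systemctl status linstor-satellite     # every node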
Adding and configuring LINSTOR nodes
All nodes that should work as virtualization nodes need to be added to LINSTOR, so that storage can be distributed and activated on all nodes:
linstor node create alpha 10.0.0.1 --node-type Combined
linstor node create bravo 10.0.0.2
linstor node create charlie 10.0.0.3
linstor node create delta 10.0.0.4
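Once the satellites have connected to the controller, the nodes should be reported as online:
linstor node list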
Now we will configure the system and image lvm-thin pools with LINSTOR:
linstor storage-pool create lvmthin alpha open_system linstorpool/thin_system
linstor storage-pool create lvmthin bravo open_system linstorpool/thin_system
linstor storage-pool create lvmthin charlie open_system linstorpool/thin_system
linstor storage-pool create lvmthin delta open_system linstorpool/thin_system
linstor storage-pool create lvmthin alpha open_image linstorpool/thin_image
linstor storage-pool create lvmthin bravo open_image linstorpool/thin_image
linstor storage-pool create lvmthin charlie open_image linstorpool/thin_image
linstor storage-pool create lvmthin delta open_image linstorpool/thin_image
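To confirm that all storage pools were registered correctly, you can list them:
linstor storage-pool list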
For testing we can now try to create a dummy test resource:
linstor resource-definition create dummy
linstor volume-definition create dummy 10M
linstor resource create dummy --auto-place 3 -s open_image
If everything went fine with the above commands, you should be able to see a resource created on 3 nodes in the open_image lvm-thin storage pool:
linstor resource list-volumes
Now we can delete the created dummy resource:
linstor resource-definition delete dummy
LINSTOR is now set up and ready to be used by OpenNebula.
OpenNebula LINSTOR datastores
OpenNebula uses different types of datastores: system, image and files.
LINSTOR supports the system and image datastore types.
- The system datastore is used to store a small context image that holds all information needed to run a virtual machine (VM) on a node.
- The image datastore, as its name suggests, stores VM images.
OpenNebula doesn’t need to be configured with a LINSTOR system datastore; it will also work with its default system datastore, but using a LINSTOR system datastore gives you some data redundancy advantages.
Set up LINSTOR datastore drivers
As LINSTOR is an addon driver for OpenNebula, the LINSTOR OpenNebula driver needs to be added to it. To do so, modify /etc/one/oned.conf and add linstor to the TM_MAD and DATASTORE_MAD sections.
TM_MAD = [
EXECUTABLE = "one_tm",
ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,vmfs,ceph,linstor"
]
Note that for the DATASTORE_MAD section the linstor driver has to be specified twice (once for the image datastore and once for the system datastore).
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,linstor -s shared,ssh,ceph,fs_lvm,qcow2,vcenter,linstor"
]
And finally, at the end of the configuration file, add new TM_MAD_CONF and DS_MAD_CONF sections for the linstor driver:
TM_MAD_CONF = [
NAME = "linstor", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "yes", ALLOW_ORPHANS = "yes"
]
DS_MAD_CONF = [
NAME = "linstor", REQUIRED_ATTRS = "BRIDGE_LIST", PERSISTENT_ONLY = "NO", MARKETPLACE_ACTIONS = "export"
]
Now restart the OpenNebula service.
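On a systemd-based front-end this typically means the following (the service name may differ depending on how OpenNebula was installed):
systemctl restart opennebula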
Adding LINSTOR datastore drivers
After we have registered the LINSTOR driver with OpenNebula, we can add the image and system datastores.
For the system datastore we will create a configuration file and add it with the onedatastore tool. If you want to use more than 2 replicas, just edit the LINSTOR_AUTO_PLACE value.
cat >system_ds.conf <<EOI
NAME = linstor_system_auto_place
TM_MAD = linstor
TYPE = SYSTEM_DS
LINSTOR_AUTO_PLACE = 2
LINSTOR_STORAGE_POOL = "open_system"
BRIDGE_LIST = "alpha bravo charlie delta"
EOI
onedatastore create system_ds.conf
And we do nearly the same for the image datastore:
cat >image_ds.conf <<EOI
NAME = linstor_image_auto_place
DS_MAD = linstor
TM_MAD = linstor
TYPE = IMAGE_DS
DISK_TYPE = BLOCK
LINSTOR_AUTO_PLACE = 2
LINSTOR_STORAGE_POOL = "open_image"
BRIDGE_LIST = "alpha bravo charlie delta"
EOI
onedatastore create image_ds.conf
Now you should see 2 new datastores in the OpenNebula web front-end that are ready to use.
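They should also be listed on the command line:
onedatastore list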
Usage and Notes
The new datastores can be used in the usual OpenNebula datastore selections and should support all OpenNebula features.
The LINSTOR datastores also have some configuration options that are described on the driver’s GitHub repository page.
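For example, the driver documentation describes an attribute to pin replicas to specific nodes instead of auto-placing them. A sketch of such a system datastore, assuming the attribute is named LINSTOR_DEPLOYMENT_NODES as in the driver documentation, could look like this (added with onedatastore create just like the datastores above):
NAME = linstor_system_fixed_nodes
TM_MAD = linstor
TYPE = SYSTEM_DS
LINSTOR_DEPLOYMENT_NODES = "alpha bravo"
LINSTOR_STORAGE_POOL = "open_system"
BRIDGE_LIST = "alpha bravo charlie delta"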
Data distribution
When a VM image is deployed in the LINSTOR datastores, LINSTOR auto-places the configured number of replicas on the nodes that currently have the most free space. The interested reader can check which nodes were selected via linstor resource list:
linstor resource list
While interesting, it is important to know that the storage can be accessed by all nodes in the cluster via a DRBD feature called “diskless clients”. So let’s assume “alpha” and “bravo” had the most free space and were selected, and the VM was created on node “bravo”. Via the low-level tool drbdadm status we can now see that the resource is created on two nodes (i.e., “alpha” and “bravo”) and that the DRBD resource is in the “Primary” role on “bravo”.
Now we want to migrate the VM from “bravo” to node “charlie”. This is again done via a few clicks in the GUI, but the interesting steps happen behind the scenes: The storage plugin realizes that it has access to the data on “alpha” and “bravo” (our two replicas) but also needs access on “charlie” to execute the VM. The plugin therefore creates a diskless assignment on “charlie”. When you execute drbdadm status on “charlie”, you see that now three nodes are involved in the overall picture:
- Alpha with storage in Secondary role
- Bravo with storage in Secondary role
- Charlie as a diskless client in Primary role
Diskless clients are created (and deleted) on demand without further user interaction, besides moving around VMs in the GUI. This means that if you now move the VM back to “bravo”, the diskless assignment on “charlie” gets deleted as it is no longer needed.
If you had moved the VM from “charlie” to “delta” instead, the diskless assignment for “charlie” would have been deleted and a new one for “delta” would have been created.
For you it is probably even more interesting that all of this, including VM migration, happens within seconds without moving the actual replicated storage contents.