Persistent Memory (PMEM) is an exciting storage tier with much lower latency than SSDs. LINBIT has optimized DRBD for when its metadata is stored on PMEM/NVDIMM.
This article relates to both:
Traditional NVDIMM-N: Some DRAM is accompanied by NAND-flash. On power failure, a backup power source (supercap, battery) is used to save the contents of the DRAM to the flash storage. When the main power is restored, the contents of the DRAM are restored. These components have exactly the same timing characteristics as DRAM and are available in sizes of 8GB, 16GB or 32GB per DIMM.
Intel’s new Optane DC Persistent Memory: These DIMMs are built using a new technology called 3D XPoint. It is inherently non-volatile and has only slightly higher access times than pure DRAM. It comes in much higher capacities than traditional NVDIMMs: 128GB, 256GB and 512GB.
DRBD requires metadata to keep track of which blocks are in sync with its peers. This metadata consists of two main parts. One part is the bitmap, which tracks exactly which 4KiB blocks may be out of sync. It is used while a peer is disconnected to minimize the amount of data that must be synced when the nodes reconnect. The other part is the activity log, which tracks which data regions have ongoing I/O or had I/O activity recently. It is used after a crash to ensure that the nodes are fully in sync. It consists of 4MiB extents which, by default, cover about 5GiB of the volume.
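To get a feeling for the sizes involved, here is a back-of-envelope sketch. It is an approximation only: the real on-disk metadata layout also contains the activity log and a superblock, so actual numbers differ slightly. The idea is one bitmap bit per 4KiB block, per peer:

```python
# Rough DRBD bitmap size estimate: one bit per 4 KiB block, per peer.
# (Approximation only; the real metadata also holds the activity log.)
def bitmap_mib(volume_bytes, peers, block_size=4096):
    bits = (volume_bytes // block_size) * peers  # one bit per block per peer
    return bits // 8 // (1024 * 1024)            # bits -> bytes -> MiB

one_tib = 1024 ** 4
print(bitmap_mib(one_tib, peers=2))  # -> 64 (MiB for a 1 TiB volume, 2 peers)
```

With three-node replication (two peers per node), 100 such volumes need roughly 6.4GiB of bitmap space, which is consistent with the 8GiB NVDIMM figure below.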
Since the DRBD metadata is small and frequently accessed, it is a perfect candidate to be placed on PMEM. A single 8GiB NVDIMM can store enough metadata for 100 volumes of 1TiB each, allowing for replication between 3 nodes.
PMEM outperforms NVMe
DRBD 9 has been optimized to access metadata on PMEM directly using memory operations. This approach is extremely efficient and leads to significant performance gains. The improvement is most dramatic when the metadata is updated most often, which occurs when writes are performed serially, that is, with an I/O depth of 1. In that case, scattered I/O forces the activity log to be updated on every write. Here we compare the performance between metadata on a separate NVMe device, and metadata on PMEM with and without the optimizations.
As can be seen, placing the DRBD metadata on a PMEM device results in a massive performance boost for this kind of workload.
Impact with concurrent I/O
When I/O is submitted concurrently, DRBD does not have to access the metadata as often. Hence we do not expect the performance impact to be quite as dramatic. Nevertheless, there is still a significant performance boost, as can be seen.
The above tests were carried out on a pair of 16-core servers equipped with NVMe storage and a direct Ethernet connection. Each server had an 8GiB DDR4 NVDIMM from Viking installed. DRBD 9.0.17 was used to perform the tests without the PMEM optimizations and DRBD 9.0.20 for the remainder. I/O was generated using the fio tool with the following parameters:
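The fio parameter listing did not survive in this copy of the article. A representative invocation for the serial-write workload described above (the device path, job name, and runtime are placeholders, not the original values) might look like:

```shell
# Hypothetical fio job matching the described workload:
# random 4k writes, iodepth 1, direct I/O against the DRBD device.
fio --name=serial-writes --filename=/dev/drbd0 \
    --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=1 \
    --runtime=60 --time_based
```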
Joel is a software developer at LINBIT with a background in mathematics. A polyglot programmer, Joel enjoys working with many different languages and technologies. At LINBIT, he has been involved in the development of LINSTOR and DRBD. Originally from England, Joel is now based in Vienna, Austria.
Joel Colledge, 2019-11-25: Optimizing DRBD for Persistent Memory
Just in the last few months, LINBIT, with the help of Red Hat, closed a deal with a large, well-known retail and commercial bank with branches across England and Wales. Since 2019, LINBIT has seen growing interest from banks and the financial sector in our open source products. A bank operating in the UK changed its high availability solution from Veritas Volume Replicator to LINBIT HA. Here is all the relevant information about the how, what, and why!
Background
A long time ago, financial institutions ran a high number of internal data-processing services and the IT department created blueprints, or standard architectures, showing how services should be deployed.
At that time, servers were running Sun Solaris or IBM’s AIX operating system. Veritas Cluster Server (VCS) – or sometimes Volume Replicator (VVR) – was the software used to keep these servers up and running.
Fast forward to 2019. Most new services get deployed on Linux, namely Red Hat Enterprise Linux (RHEL). RHEL was swapped in as the operating system, but the cluster stack used to form HA clusters remained unchanged (VCS and VVR).
With Red Hat already providing the OS in the stack, the company uses this opportunity to promote its own answer to HA clustering. Under the name “High Availability Add-On”, Red Hat brings the open source Pacemaker technology to customers as its replacement for VCS.
DRBD replaces VVR
In some cases, VCS is deployed with VVR. This is where LINBIT HA comes in. It acts as a replacement for VVR and is perfectly integrated with Pacemaker, both technically and in terms of support. Red Hat and LINBIT combined their support forces via TSANet, which gives customers a seamless support experience in a multi-vendor relationship.
LINBIT also knows all there is to know about Pacemaker, since it is the de facto standard HA cluster manager in the open source community. When a customer in this context sends us a question related to Pacemaker, we simply answer it instead of referring them to Red Hat.
Additionally, the total cost of ownership (TCO) of the DRBD solution is far lower than with VCS and VVR. And the bank can rely on 24×365 remote support. There is no vendor lock-in, because it is open source.
This big bank in Great Britain chose LINBIT HA with the help of Red Hat and is very happy about it.
The LINBIT solution is a great piece in our internal infrastructure pushing the system to the next level – the performance, stability, and support is outstanding. Those guys deliver.
– Mark –
Chief Technology Officer
This is not the end…
A bank’s business is money, but that does not mean that the organization wants to spend more than necessary on its IT infrastructure. LINBIT’s DRBD is an effective solution to keep highly available services as reliable as they should be, and to gain more room to maneuver for investments in emerging technologies.
If you have any questions, don’t hesitate to email us at [email protected]
Philipp Reisner, 2019-09-23: Use Case: Bank replaces Veritas Volume Replicator with LINBIT HA
One major part of LINBIT Disaster Recovery is DRBD Proxy, which helps DRBD with long-distance real-time replication. DRBD Proxy mitigates bandwidth, latency, and distance issues by buffering writes into memory, ensuring that your WAN latency doesn’t become your disk throughput.
The upcoming DRBD Proxy release will come with a few new tools to improve data replication with compression. Its LZ4 plugin has been updated to the latest version, 1.9.0, and the Zstandard algorithm has been added as a brand-new plugin.
Both offer a great balance of compression ratio and speed while delivering higher replication performance on the DRBD end. In our test cases, both performed considerably better in overall read and write operations than replication without compression.
Here’s a short synopsis of some of the tests we ran. For this setup, we built a two-node DRBD cluster that was geographically separated. Both ran the latest yet-to-be-released version of the DRBD Proxy for various I/O tests. The compression level for Zstandard was 3, the default on its scale of 1 to 22. LZ4 was set to level 9, the maximum level.
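For orientation, compression plugins are enabled in the proxy section of the DRBD resource configuration. The sketch below is illustrative only: the zstd plugin is new in this upcoming release, and its exact option names (as well as the memlimit value shown) should be checked against the released DRBD Proxy documentation.

```
resource r0 {
    net {
        protocol A;   # DRBD Proxy implies asynchronous replication
    }
    proxy {
        memlimit 512M;        # proxy write buffer
        plugin {
            zstd level 3;     # or: lz4;
        }
    }
    # ... node and volume sections omitted ...
}
```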
In this scenario, we used sysbench to perform random reads and writes to a MySQL database replicated on both nodes with DRBD Proxy and DRBD. Sysbench created a random database mounted on a 200MB DRBD volume with Thin LVM backing. Then it performed random transactions for 100 seconds.
The improved number of writes and overall transactions with compression is pretty clear compared to the ‘Proxy Only’ numbers. Interestingly, LZ4 and Zstandard both performed quite similarly.
The average latency from the same MySQL tests showed another interesting fact. When using DRBD Proxy, DRBD uses protocol A, which is an asynchronous mode. This setup in the test performed quite nicely compared to replicating with protocol C, the default synchronous mode. All three proxy configurations, regardless of the compression, performed very well against synchronous mode. The different modes of DRBD transport are explained here.
Other random IO tests performed with sysbench on the file system as well as fio tests at the block level mirrored the results shown above, where compression with proxy helped greatly with reducing the network payload while increasing overall read/write performance.
This was a quick preview of the upcoming DRBD Proxy release highlighting its compression plugins. Please stay tuned for the release announcement, and for any questions or comments, feel free to reach me in the comments below.
P.S. The test nodes were configured relatively lightly. The local node was a 4-core VM with 1GB of RAM running Ubuntu 18.04 and DRBD v9.0.18. The remote node was a 4-core VM with 4GB of RAM running the same OS and DRBD version. The WAN link was restricted to 2MB/s. The relevant sysbench commands used were:
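The original sysbench command lines were not preserved in this copy of the post. A representative pair of invocations for the workload described (database name, credentials, and table size are placeholders) might be:

```shell
# Hypothetical sysbench OLTP run against the MySQL database on the DRBD volume:
# prepare the test tables, then run random transactions for 100 seconds.
sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
    --tables=1 --table-size=100000 prepare
sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
    --tables=1 --table-size=100000 --time=100 run
```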
Woojay is working on data replication and software-defined storage with LINSTOR, built on DRBD @LINBIT. He has worked on web development, embedded firmware, professional culinary education, and power carving in ice and wood. He is a proud father and likes to play with LEGO.
Woojay Poynter, 2019-06-27: Coming Soon, a New DRBD Proxy Release
For quite some time, LINSTOR has been able to use NVMe-oF storage targets via the Swordfish API. This was expressed in LINSTOR as a resource definition that contains a single resource with one backing disk (that is the NVMe-oF target) and one diskless resource (that is the NVMe-oF initiator).
Layers in the storage stack
In the last few months the team has been busy making LINSTOR more generic, adding support for resource templates. A resource template describes a storage stack in terms of layers for specific resources/volumes. Here are some examples of such storage stacks:
Swordfish initiator & target on top of logical volumes (LVM)
DRBD on top of LUKS on top of logical volumes (LVM)
LVM only
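In LINSTOR's CLI, such a stack is expressed as a layer list when creating a resource definition. A sketch (resource names are illustrative, and the exact subcommand spelling may differ between LINSTOR versions):

```shell
# DRBD on top of LUKS on top of LVM, expressed as a layer list:
linstor resource-definition create secure_res --layer-list drbd,luks,storage
# LVM only:
linstor resource-definition create plain_res --layer-list storage
```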
The team came up with an elegant approach that introduces these additional resource templates in ways that allow existing LINSTOR configurations to keep their semantics as the default resource templates.
With this decoupling, we no longer need to have DRBD installed on LINSTOR clusters that do not require the replication functions of DRBD.
What does that mean for DRBD?
The interests of LINBIT’s customers vary widely. Some want to use LINSTOR without DRBD – which is now supported. A very prominent example of this is Intel, who uses LINSTOR in its Rack Scale Design effort to connect storage nodes and compute nodes with NVMe-oF. In this example, the storage is disaggregated from the other nodes.
Other customers see converged architectures as a better fit. For converged scenarios, DRBD has many advantages over a pure data access protocol such as NVMe-oF. LINSTOR is built from the ground up to manage DRBD, therefore, the need for DRBD support will remain.
Linux-native NVMe-oF and NVMe/TCP
SNIA’s Swordfish has clear benefits as a standard for managing storage targets: it allows for optimized storage target implementations, a hardware-accelerated data path, and a non-Linux control path.
Because Swordfish is an extension of Redfish, which needs to be implemented in the Baseboard Management Controller (BMC), we have decided to extend LINSTOR’s driver set to configure NVMe-oF target and initiator software directly. We do this by utilizing existing tools found within the Linux operating system, eliminating the need for a Swordfish software stack.
Summary
LINSTOR now supports configurations without DRBD. It is now a unified storage orchestrator for replicated and non-replicated storage.
Philipp Reisner is founder and CEO of LINBIT in Vienna/Austria. His professional career has been dominated by developing DRBD, a storage replication technology for Linux. Today he leads a company of about 30 employees with locations in Vienna/Austria and Portland/Oregon.
In this blog post, we present one of our recent extensions to the LINSTOR ecosystem: A high-level, user-friendly Python API that allows simple DRBD resource management via LINSTOR.
Background: So far LINSTOR components communicated by the following means: Via Protocol Buffers, or via the Python API that is used in the linstor command line client. Protocol Buffers are a great way to transport serialized structured data between LINSTOR components, but by themselves they don’t provide the necessary abstraction for developers.
That is not the job of Protocol Buffers. Since the early days we split the command line client into the client logic (parsing configuration files, parsing command line arguments…), and a Python library (python-linstor). This Python library provides all the bits and pieces to interact with LINSTOR. For example it provides a MultiLinstor class that handles TCP/IP communication to the LINSTOR controller. Additionally, it allows all the operations that are possible with LINSTOR (e.g. creating nodes, creating storage pools…). For perfectly valid reasons this API is very low level and pretty close to the actual Protocol Buffer messages sent to the LINSTOR controller.
By developing more and more plugins to integrate LINSTOR into other projects like OpenStack, OpenNebula, Docker Volumes, and many more, we saw that there is a need for a higher-level abstraction.
Finding the Right Abstraction
The first dimension of abstraction is to abstract from LINSTOR internals. For example, it makes perfect sense that recreating an existing resource is an error at a low level (think of it as EEXIST). At a higher level, depending on the actual object, trying to recreate an object might be perfectly fine, and one wants to get the existing object back (i.e., idempotency).
The second dimension of abstraction is from DRBD and LINSTOR as a whole. Developers dealing with storage already have good knowledge of concepts like nodes, storage pools, resources, volumes, and placement policies. This is the part where we can make LINSTOR and DRBD accessible to new developers.
The third goal was to provide only the set of objects that are important in the context of the user/developer. This, for example, means that we can assume that the LINSTOR cluster is already set up, so we do not need to provide a high-level API to add nodes. For the higher-level API we can focus on [LINSTOR] resources. This allows us to satisfy the KISS (keep-it-simple-stupid) principle. A fourth goal was to introduce new, higher-level concepts like placement policies. Placement policies/templates are concepts currently being developed in core LINSTOR, but we can already provide the basics at a higher level.
Demo Time
We start by creating a 10 GB replicated LINSTOR/DRBD volume in a 3-node cluster. We want the volume to be two times redundant. Then we increase the size of the volume to 20 GB.
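The original code listing did not survive in this copy of the article. Using the python-linstor high-level API, the demo might look roughly like this (the resource name is illustrative, and a running LINSTOR controller is assumed, so the snippet is not runnable standalone):

```python
# Sketch of the high-level API demo; assumes a reachable LINSTOR controller.
import linstor

foo = linstor.Resource('foo')             # resource object named 'foo'
foo.volumes[0] = linstor.Volume("10 GB")  # first volume, 10 GB
foo.placement.redundancy = 2              # two replicas in the 3-node cluster
foo.autoplace()                           # let LINSTOR pick the best nodes

foo.volumes[0].size = 20 * (2 ** 30)      # grow the volume, cluster wide
```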
One line is enough to resize a replicated volume cluster-wide.
We needed 5 lines of code to create a replicated DRBD volume in a cluster! Let that sink in for a moment and compare it to the steps that were necessary without LINSTOR: creating backing devices on all nodes, writing and synchronizing DRBD res(ource) files, creating metadata on all nodes, running drbdadm up for the resource, and forcing one node to the Primary role to start the initial sync.
For the next step we assume that the volume is replicated and that we are a storage plugin developer. Our goal is to make sure the volume is accessible on every node because the block device should be used in a VM. So, A) make sure we can access the block device, and B) find out what the name of the block device of the first volume actually is:
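The listing for these two steps is also missing here. Assuming the python-linstor high-level API and the resource `foo` from the demo (again requiring a running cluster, so not runnable standalone), it might look roughly like:

```python
# Sketch only: ensure local access to the volume, then read its device path.
import socket
import linstor

foo = linstor.Resource('foo')
foo.activate(socket.gethostname())  # A) make the block device available here
print(foo.volumes[0].device_path)   # B) the device name, e.g. '/dev/drbd1000'
```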
The activate method is a good example of the kind of abstraction we intend. Note that we autoplaced the resource 2 times in a 3-node cluster, so LINSTOR chose the nodes that fit best. But now we want the resource to be accessible on every node without increasing the redundancy to 3 (because that would require additional storage, and 2 times replicated data is good enough).
Diskless clients
Fortunately DRBD has us covered as it has the concept of diskless clients. These nodes provide a local block device as usual, but they read and write data from/to their peers only over the network (i.e. no local storage). Creating this diskless assignment is not necessary if the node was already part of the replication in the first place (then it already has access to the data locally).
This is exactly what activate does: If the node can already access the data – fine, if not, create a diskless assignment. Now assume we are done and we do not need access to the device anymore. We want to do some cleanup because we do not need a diskless assignment:
>>> foo.deactivate(socket.gethostname())
The semantics of this method are to remove the assignment if it is diskless (as it does not contribute to actual redundancy); if the node stores actual data, deactivate does nothing and keeps the data as redundant as it was. This is only a very small subset of the functionality the high-level API provides. There is a lot more to know, like creating snapshots, converting diskless assignments to diskful ones and vice versa, or managing DRBD Proxy. For more information, check the online documentation.
If you want to go deeper into the LINSTOR universe, please visit our YouTube channel.
Roland Kammerer studied technical computer science at the Vienna University of Technology and graduated with distinction. Currently, he is a PhD candidate with a research focus on time-triggered real-time systems and works for LINBIT in the DRBD development team.
Roland Kammerer, 2019-03-04: High Level Resource API - The simplicity of creating replicated volumes
This time it’s no April Fools’ joke: LINBIT is porting its flagship technology, the DRBD software, to the Microsoft Windows platforms.
Linux inside …
To achieve that while still being able to support the latest DRBD 9 features, a Linux kernel emulation layer has been designed and implemented. This layer allows us to use the original DRBD 9 sources nearly unchanged. The currently (January 2019) supported DRBD 9 version is 9.0.16 (the same patch level as the Linux version). Future versions of WinDRBD will track Linux DRBD releases with an offset of only a few days. Download WinDRBD. Please note that WinDRBD is currently in beta, undergoing technical review, and is not yet covered by support.
… the Windows Kernel
On the Windows side, the Windows Driver Model (WDM) API is used. This allows us to support older Windows versions, like Windows 7 Service Pack 1 (in theory, even ReactOS should work one day). Of course this is upward compatible, so yes, this also works with Windows 10, Windows Server 2016, and other releases of Windows desktop or server operating systems, unlike Microsoft's Storage Spaces Direct, which is tied to the Server edition only.
Obtaining WinDRBD
We are proud to announce that WinDRBD is available as part of a public beta program and can be downloaded from our website. A graphical inno-setup-based installer helps you get started with WinDRBD. Also available for download, the WinDRBD tech guide describes how to create a 2-node replicated block device setup with WinDRBD. Exactly as with the Linux version, you will have to edit some configuration files (the drbd.conf) to define your DRBD resources and their parameters. With the exception of the drive letter syntax (which obviously doesn’t exist on Linux), the configuration file format is exactly the same as in the Linux version of DRBD, so if you know DRBD for Linux, you will promptly feel at home.
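To illustrate the drive letter syntax, here is a rough configuration sketch. Hostnames, addresses, and the exact option set are hypothetical; consult the WinDRBD tech guide for the authoritative format:

```
resource "w0" {
    protocol A;

    on "linux-host" {
        address 192.168.0.2:7600;
        volume 1 {
            device    minor 1;
            disk      /dev/sdb1;     # Linux block device
            meta-disk internal;
        }
    }
    on "windows-host" {
        address 192.168.0.3:7600;
        volume 1 {
            device    minor 1;
            disk      "F:";          # drive letter instead of a device path
            meta-disk internal;
        }
    }
}
```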
Current Status as of January 2019
As of January 2019, most core features of WinDRBD work just the same way as they do with the Linux version. This includes:
Online data replication
Resync
Connect to Windows or Linux DRBD
Separate block device for WinDRBD and backing storage
Diskless operation
Proper handling of I/O and network failures
NTFS file system on top of WinDRBD device (others will follow)
Driver can be loaded and unloaded without reboot
(Almost) complete support of all drbdadm, drbdsetup and drbdmeta commands
Internal metadata
User mode helpers (via cmd.exe, bash or PowerShell)
Windows UI compliant installer (inno-setup based)
We are currently working to complete the features that are still missing:
3 (and more) node setups (coming soon)
Using System volume (C:) as WinDRBD device (for a diskless client booted via iPXE)
File systems other than NTFS
Cosmetic changes to installer (currently C:\windrbd is hardcoded for the configuration files, also installer requires a reboot on upgrade or uninstall)
drbdadm wait-for family of commands (though drbdsetup events2 works)
and some others like debugfs or network and disk statistics
In addition, we are working on integration with various cluster managers (like the Microsoft cluster manager that comes with the Server operating systems). We expect a 1.0 release later this year. You can help develop WinDRBD by using and testing it, or simply let us know which features are most important to you.
All fine. But what can I do with WinDRBD?
DRBD, and hence also WinDRBD, is a block-device-level replication layer that sits below the file system. It mirrors data:
Transparently: Applications (and also the file system) do not have to be modified to support the replication.
On line: Data is replicated in real time while data is written by the application.
Should one node be offline, it will be synchronized by DRBD automatically once it comes up again.
So it is a perfect fit for anything that requires highly available (HA) services, including:
A Windows based file server for small or medium sized businesses: Keep your mission critical files redundant and highly available. Should one server fail due to a hardware or software fault, data is still available from the mirror server. This happens transparently to the application.
A Windows based database server: Some database technologies still run better on Windows than on Linux (such as Microsoft SQL server): your applications can access data of the database server that is protected against a hardware or software failure of one node.
Add high availability to your legacy applications: Since WinDRBD is transparent to the application you can, if you still need those, add high availability to your (for example) Access database based application with WinDRBD.
In addition, WinDRBD might be as well suited for virtualization (Hyper-V) on Windows as DRBD is for virtualization on Linux with KVM and Xen. It is too early for us to tell for sure. In case it works, it would bring transparent HA to the virtual storage devices for virtual machines, including support for live migration.
Last but not least, in the near future, you can use WinDRBD on your clients for a diskless client setup. You can boot your Windows clients via iPXE, including the WinDRBD driver and WinDRBD provides the system volume (C: in most cases) from remote DRBD servers (this may also be a Linux based DRBD, since WinDRBD is wire compatible with its Linux cousin).
Conclusion
With WinDRBD, LINBIT brings the advantages of a seasoned software defined storage solution to the Microsoft Windows platform. Both DRBD and WinDRBD are Open Source licensed under the GNU General Public License (GPL). With wide support for Microsoft Windows platforms, a multitude of applications can be built around WinDRBD. Future developments, like LINSTOR and Microsoft cluster manager integration as well as diskless client support will widen the possible areas of application and thanks to LINSTOR, administration of WinDRBD resources will be just as easy as with the Linux version of DRBD.
Johannes Thoma is a freelance software developer with 20+ years of experience in the industry. He specializes in systems programming, including but not limited to Linux kernel programming. Past projects include a real-time capable image chain for a medical X-ray device as well as an early version of DRBD Proxy. Johannes also plays jazz piano on a professional level and can be heard at jam sessions all over the world, including Smalls in New York and the B-flat in Berlin.
Johannes Thoma, 2019-01-24: WinDRBD: DRBD for Windows is Coming
This post will guide you through the setup of the LINSTOR – OpenNebula Addon. After completing it, you will be able to easily live-migrate virtual machines between OpenNebula nodes, and additionally, have data redundancy.
Setup Linstor with OpenNebula
This post assumes that you already have OpenNebula installed and running on all of your nodes. First, I will give you a quick guide to installing LINSTOR; for more detailed documentation, please read the DRBD User's Guide. The second part will show you how to add a LINSTOR image and system datastore to OpenNebula.
We will assume the following node setup:
Node name | IP       | Role
----------|----------|--------------------------
alpha     | 10.0.0.1 | Controller / ON front-end
bravo     | 10.0.0.2 | Virtualization host
charlie   | 10.0.0.3 | Virtualization host
delta     | 10.0.0.4 | Virtualization host
Make sure you have configured two LVM thin storage pools named linstorpool/thin_image and linstorpool/thin_system on all of your nodes.
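A sketch of how these pools could be created; the backing device and the sizes are placeholders, so adjust them to your disks:

```shell
# On every node: create the volume group and the two thin pools.
# /dev/sdb and the 100G sizes are illustrative only.
vgcreate linstorpool /dev/sdb
lvcreate -L 100G -T linstorpool/thin_system
lvcreate -L 100G -T linstorpool/thin_image
```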
LINSTOR setup
Install LINSTOR packages
The easiest setup is to install the linstor-controller on the same node as the OpenNebula cloud front-end. The linstor-opennebula package contains our OpenNebula driver, and therefore, is essential on the OpenNebula cloud front-end node. On this node install the following packages:
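The package list and the setup commands did not survive in this copy of the post. On a Debian/Ubuntu-based system, with package names as in LINBIT's repositories, the steps would be along these lines (subcommand syntax may differ slightly between LINSTOR versions):

```shell
# Front-end (controller) node:
apt install linstor-controller linstor-satellite linstor-client linstor-opennebula
# Virtualization hosts only need the satellite and the DRBD kernel module:
apt install drbd-dkms linstor-satellite

# Register the nodes, add a thin storage pool, and place a test resource:
linstor node create alpha 10.0.0.1
linstor node create bravo 10.0.0.2
linstor node create charlie 10.0.0.3
linstor node create delta 10.0.0.4
linstor storage-pool create lvmthin alpha DfltStorPool linstorpool/thin_image
linstor resource-definition create dummy
linstor volume-definition create dummy 10M
linstor resource create dummy --auto-place 3
```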
If everything went fine with the above commands you should be able to see a resource created on 3 nodes using our default lvm-thin storage pool:
linstor resource list-volumes
Now we can delete the created dummy resource:
linstor resource-definition delete dummy
LINSTOR is now setup and ready to be used by OpenNebula.
OpenNebula LINSTOR datastores
OpenNebula uses different types of datastores: system, image and files.
LINSTOR supports the system and image datastore types.
The system datastore stores a small context image holding all the information needed to run a virtual machine (VM) on a node.
The image datastore, as its name suggests, stores VM images.
OpenNebula doesn’t have to use a LINSTOR system datastore; it will also work with its default system datastore. Using a LINSTOR system datastore, however, gives it some data redundancy advantages.
Setup LINSTOR datastore drivers
As LINSTOR is an addon driver for OpenNebula, the LINSTOR OpenNebula driver needs to be registered with OpenNebula. To do so, modify /etc/one/oned.conf and add linstor to the TM_MAD and DATASTORE_MAD sections.
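The exact snippet is not preserved in this copy; the additions to /etc/one/oned.conf look roughly like this, where the other driver names are simply whatever your installation already lists:

```
TM_MAD = [
    EXECUTABLE = "one_tm",
    ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt,linstor"
]

DATASTORE_MAD = [
    EXECUTABLE = "one_datastore",
    ARGUMENTS  = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,linstor -s shared,ssh,ceph,fs_lvm,qcow2,linstor"
]
```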
After we registered the LINSTOR driver with OpenNebula we can add the image and system datastore.
For the system datastore we will create a configuration file and add it with the onedatastore tool. If you want to use more than 2 replicas, just edit the LINSTOR_AUTO_PLACE value.
cat >system_ds.conf <<EOI
NAME = linstor_system_auto_place
TM_MAD = linstor
TYPE = SYSTEM_DS
LINSTOR_AUTO_PLACE = 2
LINSTOR_STORAGE_POOL = "open_system"
BRIDGE_LIST = "alpha bravo charlie delta"
EOI
onedatastore create system_ds.conf
And we do nearly the same for the image datastore:
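The image datastore listing is missing from this copy. Mirroring the system datastore example above, it would be something like the following; the DS_MAD and DISK_TYPE attributes are the notable differences:

```shell
cat >image_ds.conf <<EOI
NAME = linstor_image_auto_place
DS_MAD = linstor
TM_MAD = linstor
TYPE = IMAGE_DS
DISK_TYPE = BLOCK
LINSTOR_AUTO_PLACE = 2
LINSTOR_STORAGE_POOL = "open_image"
BRIDGE_LIST = "alpha bravo charlie delta"
EOI
onedatastore create image_ds.conf
```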
Now you should see 2 new datastores in the OpenNebula web front-end that are ready to use.
Usage and Notes
The new datastores can be used in the usual OpenNebula datastore selections and should support all OpenNebula features.
The LINSTOR datastores also have some configuration options that are described on the driver's GitHub repository page.
Data distribution
When an image is placed with LINSTOR_AUTO_PLACE = 2, LINSTOR selects two of the four nodes. The interested reader can check which nodes were selected via linstor resource list.
linstor resource list
While interesting, it is important to know that the storage can be accessed by all nodes in the cluster via a DRBD feature called “diskless clients”. So let’s assume “alpha” and “bravo” had the most free space and were selected, and the VM was created on node “bravo”. Via the low level tool drbdadm status we now see that the resource is created on two nodes (i.e., “alpha” and “bravo”) and the DRBD resource is in “Primary” role on “bravo”.
Now we want to migrate the VM from “bravo” to node “charlie”. This is again done via a few clicks in the GUI, but the interesting steps happen behind the scenes: The storage plugin realizes that it has access to the data on “alpha” and “bravo” (our two replicas) but also needs access on “charlie” to execute the VM. The plugin therefore creates a diskless assignment on “charlie”. When you execute drbdadm status on “charlie”, you see that now three nodes are involved in the overall picture:
Alpha with storage in Secondary role
Bravo with storage in Secondary role
Charlie as a diskless client in Primary role
Diskless clients are created (and deleted) on demand without further user interaction, besides moving around VMs in the GUI. This means that if you now move the VM back to “bravo”, the diskless assignment on “charlie” gets deleted as it is no longer needed.
If you had moved the VM from “charlie” to “delta”, the diskless assignment for “charlie” would have been deleted, and a new one for “delta” would have been created.
Probably even more interesting: all of this, including VM migration, happens within seconds, without moving the actual replicated storage contents.
Check out this video on LINSTOR and OpenNebula:
Rene was one of the first developers to see a DRBD resource deployed by LINSTOR and has been a software developer at LINBIT since 2017.
While not squashing bugs in LINSTOR, Rene is either climbing or paragliding down a mountain.
Rene Peinthor, 2019-01-03: How to Setup LINSTOR with OpenNebula
LINBIT SDS (Linux SDS) will showcase cloud-native enterprise storage management at KubeCon + CloudNativeCon in Seattle
Beaverton, OR, Dec. 3, 2018 – LINBIT enhances open source software-defined storage (SDS) by providing disaster recovery (DR) replication for critical data. LINBIT SDS is an enterprise-class storage management solution designed for cloud and container storage workloads.
To simplify administration, enhance user experience, and accelerate integration with other software, LINBIT SDS relies on the pre-existing storage management capabilities native to Linux, such as LVM and DRBD. These capabilities are complemented by LINSTOR, a feature-rich volume management software. One supported storage tool is DRBD, the in-kernel block-level data replication for Linux. By announcing support for DRBD Proxy, LINBIT extends replication to disaster recovery scenarios, since DRBD Proxy enables fast and reliable data replication over any distance by resolving network communication issues and handling data access latencies.
“LINBIT SDS is rapidly becoming the reliable, high-performance, and economical choice for enterprise and cloud workloads,” said Brian Hellman, COO of LINBIT. “With simplified support for DR, LINBIT SDS is surpassing the costly and complex proprietary cloud storage solutions.”
LINBIT SDS provides a host of capabilities to manage persistent block storage for Kubernetes environments. It supports logical volume management (LVM) snapshots, which enhance application availability while minimizing data loss; thin provisioning, which improves efficient resource utilization in virtualized environments; and volume management, which simplifies tasks such as adding, removing, or replicating storage volumes.
LINBIT SDS supports Kubernetes
The Linux based SDS solution works with the leading cloud projects Kubernetes, OpenStack, and OpenNebula, as well as a range of virtualization platforms, and as a stand-alone product. Learn more about how the software works by watching a short video-demo here:
LINBIT is a member of the Linux Foundation and is proud to support KubeCon, a Linux Foundation conference. Visit us at KubeCon + CloudNativeCon at booth #S7, December 11th-13th, 2018 in Seattle.
LINBIT is the force behind DRBD, the de facto open standard for High Availability (HA) software for enterprise and cloud computing. The LINBIT DRBD software is deployed in millions of mission-critical environments worldwide to provide High Availability (HA), Geo-Clustering for Disaster Recovery (DR), and Software-Defined Storage (SDS) for OpenStack- and OpenNebula-based clouds. Don’t be shy. Visit us at LINBIT.com.
###
Brian Hellman, 2018-12-03: LINBIT SDS Adds Disaster Recovery and Support for Kubernetes
In this video, Matt Kereczman from LINBIT combines components of LINBIT SDS and LINBIT DR to demonstrate extending an existing LINSTOR-managed DRBD volume to a disaster recovery node located in a geographically separated datacenter, via LINSTOR and DRBD Proxy.
Watch the video:
He’s already created a LINSTOR cluster on four nodes: linstor-a, linstor-b, linstor-c and linstor-dr.
You can see that linstor-dr is in a different network than our other three nodes. This network exists in the DR DC, which is connected to our local DC via a 40Mb/s WAN link.
He has a single DRBD resource defined, which is currently replicated synchronously between the three peers in the local datacenter. He lists his LINSTOR-managed resources and volumes; the volume is currently mounted on linstor-a:
Before he adds a replica of this volume to the DR node in his DR datacenter, he quickly tests the write throughput of his DRBD device, so he has a baseline for how well it should perform.
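A rough baseline like the one he measures can be taken with a plain `dd` run. This is a minimal sketch under stated assumptions, not the exact command from the video; the `measure_write_throughput` helper name is ours, and the DRBD device path in the usage note is hypothetical:

```shell
# measure_write_throughput: rough sequential-write baseline.
# Pass the block device (or file) to write to as the first argument.
# WARNING: this overwrites the start of the target!
measure_write_throughput() {
    target=$1
    # conv=fdatasync forces a flush at the end, so the reported rate
    # includes the time for the data to actually reach stable storage.
    # dd prints its statistics to stderr, hence the 2>&1 redirect.
    dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
}
```

Usage would look like `measure_write_throughput /dev/drbd1000`, where the device name is an assumption; substitute your actual DRBD device.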
Brian Hellman, 2018-11-30: Demo of Extending LINSTOR-Managed DRBD Volume to a DR Node
LINBIT US is celebrating a decade of service and growth. Ten years ago, we started our journey with you from a newly established office in the Pacific Northwest. In that time, we have moved into new offices, grown our team to four times its size, built some really great software, and, most importantly, met, collaborated with, and served some of the most sophisticated customers along the way. Here’s a snapshot of some of the major milestones, told in the present tense.
2010: Our bread and butter has always been High Availability. LINBIT’s HA software, DRBD, enters the Linux mainline kernel in 2010, as of release 2.6.33. This promises to be a standout event that makes enterprise-grade HA a standard capability within Linux and puts the open source community on par with the best proprietary systems out there.
2015: Fast forward to 2015. LINBIT is a company that is actually being talked about as the best solution for huge enterprises! Hundreds of thousands of servers depend on the replication that DRBD provides. All our customers are doing really cool work, and some of them are very well known, such as Cisco and Google. We are forming strong partnerships across North and South America: think Red Hat and SUSE.
New Horizon: Disaster Recovery
2016: Not only is the LINBIT HA product a success, but our new product focused on disaster recovery, DRBD Proxy, is proving to be incredibly useful to companies who need to replicate data across distances. LINBIT is having wonderful success in providing clients peace of mind in case a disaster strikes, or perhaps a clumsy admin pulls on some cables they weren’t supposed to be pulling on! Oh, and we can’t forget our fun videos that go along with these products: LINBIT DR, LINBIT HA, and LINBIT SDS.
More in 2016: The official release of DRBD 9 to the public. A huge move for enterprises looking to keep multiple replicas of their data (up to 32!). Now companies can implement software-defined storage (SDS) for creating, managing, and running a cloud storage environment.
New Kid on the Block: LINSTOR
2018: Now that SDS is a feature, many clients are looking for it. LINBIT is making it even easier, and more practical, with the release of LINSTOR. With it, everything is automated. Deploying a DRBD volume has never been easier.
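As a hedged illustration of how little is involved, deploying a replicated volume with the LINSTOR client looks roughly like this; the resource name “demo-vol”, the 10 GiB size, and the storage pool name “pool0” are hypothetical examples, not values from the article:

```shell
# Define a resource and a 10 GiB volume inside it
# (names and size are hypothetical examples).
linstor resource-definition create demo-vol
linstor volume-definition create demo-vol 10G

# Let LINSTOR pick nodes and place three replicas automatically;
# "pool0" is an assumed storage pool name.
linstor resource create demo-vol --auto-place 3 --storage-pool pool0
```

These commands need a running LINSTOR controller with registered satellite nodes, so they are shown as a transcript rather than a runnable script.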
2018: At this point we would be remiss if we didn’t mention that LINSTOR has FlexVolume and External Provisioner drivers for Kubernetes. We now provide persistent storage to high-performance containerized applications! Here is a LINSTOR demo, showing you just how quick and easy it is to deploy a DRBD cluster with 20 resources.
Now: A new guide describes DRBD for the Microsoft Azure cloud service. We have partners and resellers who have end clients running Windows servers that need HA. One of our engineers even created a video of an NFS failover in Azure!
What else? There is almost too much to say about the past 10 years and the amount of growth and change is astonishing. However, at our core, we are the same. We believe in open source. In building software that turns the difficult into fast, robust, and easy. In our clients. In our company.
“We are grateful”
During a conversation at Red Hat Summit this year, LINBIT COO Brian Hellman was asked how long he had been at LINBIT. “I replied, ‘10 years in September.’ The gentleman was surprised: ‘That’s a long time, especially in the tech industry.’ To which I replied, ‘I love what I do and the people I work with: not only the members of the LINBIT team, but also our customers, partners, and our extended team. Without them we wouldn’t be here; they make it all possible, and for that we are grateful.’”
To whomever is reading this, wherever you are, you were part of it. You ARE part of it! So a big thank you for reading, caring, and hopefully using LINBIT HA, LINBIT DR, or LINBIT SDS. Cheers to another 10 years!
Kelsey turns her personal passion for connecting with people into supporting LINBIT clients. As the Accounts Manager for LINBIT USA, Kelsey engages with customers to provide them with the best experience possible. From enterprise companies to mom-and-pop shops, Kelsey ensures that the implementation of LINBIT products goes smoothly. Doing what is best for the client is her #1 priority.
Kelsey Swan, 2018-09-03: LINBIT USA Celebrates 10 Year Anniversary