DRBD 9 over RDMA with Micron SSDs

We have been testing some 240GB Micron M500DC SSDs with DRBD® 9 and DRBD’s RDMA transport layer. Micron, based in Boise, Idaho, is a leader in NAND flash production and storage. We found that their M500DC SSDs are write-optimized for data center use cases and, in some cases, exceeded our performance expectations.

For those who are just joining us, leveraging RDMA as a transport protocol is relatively new to DRBD and is only possible with DRBD 9. You can find some background on RDMA and how DRBD benefits from it in one of our past blog posts, “What is RDMA, and why should we care?”. Also, check out our technical guide on benchmarking DRBD 9 on Ultrastar SN150 NVMe SSDs if you are interested in seeing some of the numbers we were able to achieve with DRBD 9.0.1-1 and RDMA on very fast storage.

Back to the matter at hand.

In our test environment, each of our two nodes used two 240GB Micron M500DC SSDs in RAID 0 as the backing storage. We connected the two peers using 10Gb ConnectX-4 InfiniBand. We then ran a series of tests comparing the performance of DRBD disconnected (not replicating), DRBD connected using TCP over InfiniBand, and DRBD connected using RDMA over InfiniBand, all against the performance of the backing disks without DRBD.
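For reference, a two-node DRBD 9 resource for this kind of setup might look roughly like the sketch below. The hostnames, IP addresses, and the /dev/md0 RAID 0 device are placeholders rather than our exact configuration; the relevant piece is the transport option in the net section, which selects either tcp or rdma for the replication link.

    resource r0 {
        net {
            protocol  C;        # synchronous replication
            transport rdma;     # switch to "tcp" for the TCP-over-InfiniBand runs
        }
        on node-a {
            node-id 0;
            device  /dev/drbd0;
            disk    /dev/md0;   # RAID 0 of the two M500DC SSDs (placeholder device name)
            address 192.168.10.1:7789;
        }
        on node-b {
            node-id 1;
            device  /dev/drbd0;
            disk    /dev/md0;
            address 192.168.10.2:7789;
        }
        connection-mesh {
            hosts node-a node-b;
        }
    }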

For testing random read/write IOPS we used fio with a 4K block size and 16 parallel jobs. For testing sequential writes we used dd with 4M blocks. Both tests used the appropriate flag for direct I/O in order to remove any caching that might skew the results.
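The exact invocations were not included in the post; the commands below are a reasonable approximation built from the parameters named above (4K blocks with 16 parallel jobs for fio, 4M blocks for dd, direct I/O for both). The /dev/drbd0 device name is an assumption.

    # Random read/write IOPS: 4K blocks, 16 parallel jobs, direct I/O
    fio --name=randrw-test --filename=/dev/drbd0 --rw=randrw \
        --bs=4k --numjobs=16 --direct=1 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting

    # Sequential writes: 4M blocks, direct I/O
    dd if=/dev/zero of=/dev/drbd0 bs=4M count=1024 oflag=direct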

We also leveraged DRBD’s “when-congested-remote” read-balancing option to pull reads from the peer when the I/O subsystem on the Primary node is congested. We will see that this dramatically increases the performance of our random reads, especially when combined with RDMA’s extremely low latency.
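read-balancing is a disk-section option in DRBD, so enabling it looks roughly like the following (the resource name r0 is assumed); drbdadm adjust then applies the change to a running resource.

    resource r0 {
        disk {
            # serve reads from the peer when local I/O is congested
            read-balancing when-congested-remote;
        }
        ...
    }

    # apply the changed configuration to the running resource
    drbdadm adjust r0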

Here are the results from our random read/write IOPS testing:

As you can see from the numbers and graphs, we achieve huge gains in read performance when using DRBD with read-balancing: roughly a 26% increase when using TCP and 62% with RDMA.

We also see that using the RDMA transport results in less than 1% overhead when synchronously replicating writes to our DRBD device; that’s pretty sweet. 🙂

Sequential reads cannot benefit from DRBD’s read-balancing at all, and large sequential writes are heavily segmented by the TCP stack, so our sequential write numbers better represent the impact a transport protocol has on synchronous replication.

Here are the results from our Sequential Write testing:

Looking at the graph, it’s easy to see that RDMA is the transport of choice if your I/O patterns are sequential: TCP adds roughly 19.1% overhead, while RDMA adds roughly 1.1%.

Matt Kereczman

Matt Kereczman is a Solutions Architect at LINBIT with a long history of Linux System Administration and Linux System Engineering. Matt is a cornerstone of LINBIT's technical team, and plays an important role in making LINBIT and LINBIT's customers' solutions great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with Honors from Pennsylvania College of Technology with a BS in Information Security. Open Source Software and Hardware are at the core of most of Matt's hobbies.
