By shipping the Pacemaker 1.1.15 release, the LINBIT cluster stack has been cutting-edge. Now the distributions are catching up.
This author has yet to write their bio. Meanwhile, let's just say that we are proud flip contributed a whopping 31 entries.
Entries by flip
As an update to the previous post, we now have the Tech Guide for RDMA performance with non-volatile storage available online. Just head over to the LINBIT Tech Guide area and read the HGST Ultrastar SN150 NVMe performance report! (Free registration required.)
As promised in the previous RDMA post, we gathered some performance data for the RDMA transport. Read and enjoy!
DRBD 9 has a new transport abstraction layer, and it is designed for speed; apart from SSOCKS and TCP, the next-generation link will be RDMA. So, what is RDMA, and how does it differ from TCP?
We often see people on #drbd or on drbd-user trying to measure the performance of their setup. Here are a few best practices for doing so.
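A minimal sketch of one such best practice, assuming you want a quick sequential-write number: make sure the page cache cannot inflate the result. The scratch file name below is an illustration only; on a real cluster you would point the test at your DRBD device instead.

```shell
# Hypothetical benchmark sketch. TARGET is a scratch file here;
# substitute your DRBD device (e.g. /dev/drbdX) for a real measurement.
TARGET=./drbd-bench.img

# Sequential write of 64 MiB. conv=fsync forces an fsync at the end,
# so data still sitting in the page cache cannot inflate the
# throughput figure dd reports.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

# Clean up the scratch file.
rm -f "$TARGET"
```

For more realistic numbers (random I/O, queue depths, mixed read/write), a dedicated tool such as fio is the better choice; the dd run above only gives a first sanity check.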
When DRBD 8.4.4 integrated TRIM/Discard support, a lot of things got much better… for example, 700MB/sec over a 1GBit/sec connection.
In the “Root-on-DRBD” Tech-Guide we showed how to cleanly get DRBD below the root filesystem, how to use it, and a few advantages and disadvantages. Now, if there’s a complete, live backup of a machine available, a few more use cases open up; here we want to discuss testing upgrades of production servers.
In the last blog post about DRBDmanage we mentioned that “initial setup is a bit involved (see the README)” … with the new release, this is no longer true!
As already announced in another blog post, we’re preparing a new tool to simplify DRBD administration. Now we’re publishing its first release!
One of the projects that LINBIT will publish soon (with an Open Source license; Git will be the preferred way to help us) is drbdmanage, which allows easy cluster-wide storage administration with DRBD 9.