Coming Soon, a New DRBD Proxy Release

One major part of LINBIT Disaster Recovery is DRBD Proxy, which helps DRBD with long-distance real-time replication. DRBD Proxy mitigates bandwidth, latency, and distance issues by buffering writes into memory, ensuring that your WAN latency doesn’t become your disk throughput.
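
To put that in context, DRBD Proxy sits between the local DRBD device and the WAN and is wired into the DRBD resource configuration itself. Here is a minimal sketch of such a resource; the host names, addresses, and the 512M buffer size are illustrative, not the values from the tests in this post:

resource r0 {
    proxy {
        memlimit 512M;              # RAM buffer for writes in flight (illustrative size)
    }
    on alpha {
        device    /dev/drbd0;
        disk      /dev/vg0/r0;
        meta-disk internal;
        address   127.0.0.1:7900;   # DRBD connects to its local proxy
        proxy on alpha {
            inside  127.0.0.1:7901; # proxy endpoint facing DRBD
            outside 10.0.0.1:7902;  # proxy endpoint facing the WAN
        }
    }
    on bravo {
        device    /dev/drbd0;
        disk      /dev/vg0/r0;
        meta-disk internal;
        address   127.0.0.1:7900;
        proxy on bravo {
            inside  127.0.0.1:7901;
            outside 10.0.0.2:7902;
        }
    }
}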

The upcoming DRBD Proxy release will come with a few new tools to improve data replication with compression. Its LZ4 plugin has been updated to the latest version, 1.9.0, and the Zstandard algorithm has been added as a brand-new plugin.
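
Compression is configured in the proxy section of the resource. A minimal sketch, assuming the new plugins follow the same plugin { ... } syntax as the existing zlib plugin; the zstd and lz4 keywords and their options here are assumptions and may differ in the final release:

proxy {
    memlimit 512M;
    plugin {
        zstd level 3;   # assumed keyword: Zstandard at its default level
        # lz4 level 9;  # assumed keyword: or LZ4 instead, at its maximum level
    }
}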

Both offer a great balance of compression ratio and speed while delivering higher replication performance on the DRBD end. In our test cases, both performed considerably better in overall read and write operations than runs without compression.

Here’s a short synopsis of some of the tests we ran. For this setup, we built a geographically separated two-node DRBD cluster. Both nodes ran the latest, yet-to-be-released version of DRBD Proxy for various IO tests. The compression level for Zstandard was 3, the default on a scale that runs up to 22. LZ4 was set to level 9, the maximum level.

MySQL Read/Write Operations with Sysbench

In this scenario, we used sysbench to perform random reads and writes to a MySQL database replicated across both nodes with DRBD and DRBD Proxy. Sysbench populated a test database on a 200MB DRBD volume backed by thin-provisioned LVM, then ran random transactions for 100 seconds.
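
If you want to try something similar, the backing volume could be prepared roughly as follows; the volume group vg0, pool name pool0, and mount point are assumptions for illustration:

# Create a thin pool and a 200MB thin volume to back the DRBD device
lvcreate --size 1G --type thin-pool --name pool0 vg0
lvcreate --virtualsize 200M --thin --name r0_data vg0/pool0

# With the DRBD resource up on top of it, put MySQL's data on /dev/drbd0
mkfs.ext4 /dev/drbd0
mount /dev/drbd0 /var/lib/mysql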

The improvement in the number of writes and overall transactions with compression is clear compared to the ‘Proxy Only’ numbers. Interestingly, LZ4 and Zstandard performed quite similarly.

Average Latency on MySQL RW Tests

The average latency from the same MySQL tests showed another interesting fact. When using DRBD Proxy, DRBD uses protocol A, an asynchronous replication mode. In these tests, that setup performed quite nicely compared to replicating with protocol C, the default synchronous mode. All three proxy configurations, regardless of compression, performed very well against synchronous mode. The different DRBD replication protocols are explained in the DRBD user's guide.
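
For reference, the replication protocol is set in the resource's net section; a minimal sketch of the two configurations compared above:

resource r0 {
    net {
        protocol A;     # asynchronous: a write completes once it reaches local
                        # disk and the local TCP send buffer (used with DRBD Proxy)
        # protocol C;   # synchronous (default): a write completes only after
                        # the peer confirms it
    }
}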

Other random IO tests performed with sysbench on the file system, as well as fio tests at the block level, mirrored the results above: compression with the proxy greatly reduced the network payload while increasing overall read/write performance.
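
As an illustration, a block-level random write test of the kind fio runs there might look like this; the device path and parameters are assumptions, not the exact ones we used:

fio --name=randwrite --filename=/dev/drbd0 --rw=randwrite \
    --bs=4k --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=100 --time_based --group_reporting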

This was a quick preview of the upcoming DRBD Proxy release highlighting its compression plugins. Please stay tuned for the release announcement, and for any questions or comments, feel free to reach me in the comments below.

P.S. The test nodes were configured relatively lightly. The local node was a 4-core VM with 1GB of RAM running Ubuntu 18.04 and DRBD v9.0.18. The remote node was a 4-core VM with 4GB of RAM running the same OS and DRBD version. The WAN link was restricted to 2MB/s. The relevant sysbench commands were:

sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --db-driver=mysql --tables=10 --table-size=1000 prepare
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --db-driver=mysql --tables=10 --table-size=1000 --report-interval=10 --threads=4 --time=100 run
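
One common way to restrict a WAN link to 2MB/s for a test like this is a token bucket filter with tc; the interface name eth1 is an assumption here:

# Limit egress to 16mbit, i.e. roughly 2MB/s
tc qdisc add dev eth1 root tbf rate 16mbit burst 32kbit latency 400ms
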
Woojay Poynter
IO Plumber
Woojay works on data replication and software-defined storage with LINSTOR, built on DRBD @LINBIT. He has worked on web development, embedded firmware, professional culinary education, and power carving with ice and wood. He is a proud father and likes to play with Legos.