DRBD and the sync-rate controller, part 3

This is an update to our previous two blog posts here and here. The goal of this post is to simplify even further the steps needed to tune the sync-rate controller. If you review the older posts, you’ll notice that this post omits a few options and simply picks starting values that have worked well in most deployments we’ve encountered.

I would also like to point out again that this is all about initial device synchronization and recovery resynchronization. It has no effect on replication speed during normal operation, when everything is in a healthy state.


Purpose of the sync-rate controller

The dynamic sync-rate controller for DRBD was introduced way back in version 8.3.9 as a way to slow down DRBD resynchronization. The idea is that if you have a write-intensive application running atop the DRBD device, it may already be close to saturating your I/O bandwidth. The dynamic rate limiter makes sure that a recovery resync does not compete for bandwidth with ongoing write replication. To ensure that the resync does not compete with application I/O, the defaults lean toward the conservative side.

If the defaults seem too slow for your use case, you can speed things up with a little bit of tuning in the DRBD configuration.

Tuning the sync-rate controller

It is nearly impossible for DRBD to know how much activity your storage and network backend can handle, but it is fairly easy for DRBD to know how much activity it generates itself. That is why we tune how much network activity we allow DRBD to generate; a configuration sketch follows the list below.

  • Set c-max-rate to 100% of (or slightly more than) what your hardware can handle.
    • For example: if you know your network is capable of 10Gb/s, but your disk throughput is only 800MiB/s, then set this value to 800M.
  • Increase max-buffers to 40k.
    • 40k is usually a good starting point, but we’ve seen good results with anywhere from 20k to 80k.
  • Set c-fill-target to 1M.
    • Just trust us on this, and simply set it to ‘1M’.
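To make this concrete, here is a minimal sketch of what these three settings could look like in a DRBD resource configuration file (DRBD 8.4/9 syntax, where max-buffers lives in the net section and the controller settings live in the disk section). The resource name r0 is a placeholder, and 800M assumes the 800MiB/s disk throughput from the example above:

    resource r0 {
        net {
            max-buffers    40k;     # allow more resync data in flight
        }
        disk {
            c-max-rate     800M;    # ceiling: roughly what the hardware can sustain
            c-fill-target  1M;      # just trust us on this one
        }
        # ... existing device, disk, and connection settings ...
    }

If the resource is already up, the new settings can be applied without downtime by running drbdadm adjust r0 on each node.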

This should be enough to get the resync rate going well beyond the defaults. Many people tune the “c-*” sync-rate controller settings but never increase the max-buffers value. This may be partly our fault, as we never mentioned it in the previous blog post, which is one reason I am revisiting this topic today.

Tuning the sync-rate controller even further

Naturally, there is further tuning we can do. Some of these settings, if tuned improperly, may have a negative impact on the performance of applications writing to the DRBD device, so use caution. I suggest starting with smaller values and working your way up if you are tuning production systems. A combined configuration sketch follows the list.

  • Set the resync-rate to ⅓ of the c-max-rate.
    • With the dynamic sync-rate controller, this value is only used as a starting point. Changing it has only a slight effect, but it helps the resync ramp up faster.
  • Increase the c-min-rate to ⅓ of the c-max-rate.
    • It is usually advisable to leave this value alone, as the idea behind the dynamic sync-rate controller is to “step aside” and let application I/O take priority. If you really want to ensure that the resync always moves along at some minimum speed, feel free to tune this a bit. As mentioned earlier, start with a lower value and work up if you are doing this on a production system.
  • Set sndbuf-size and rcvbuf-size to 10M.
    • These are generally auto-tuned by the kernel, but cranking them up may help move recovery resync speeds along. There is also a possibility that this will lead to bufferbloat, so tune these with caution. Again, on a production system, start with a value just a little over 4M and increase it slowly while observing the systems.
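Putting both rounds of tuning together, the relevant sections might end up looking something like the sketch below. Again, r0 is a placeholder, and the 250M figures assume the 800MiB/s example from earlier (one third of 800M, rounded down slightly):

    resource r0 {
        net {
            max-buffers    40k;
            sndbuf-size    10M;     # explicit size bypasses kernel auto-tuning
            rcvbuf-size    10M;
        }
        disk {
            c-max-rate     800M;
            c-fill-target  1M;
            resync-rate    250M;    # starting point for the controller, ~1/3 of c-max-rate
            c-min-rate     250M;    # floor the controller will not drop below, ~1/3 of c-max-rate
        }
    }

As before, drbdadm adjust r0 applies the changes to a running resource. You can then watch the resync progress with drbdadm status (or in /proc/drbd on DRBD 8.4) while you experiment with the values.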

It is our hope that the information above will prove useful to our users and help clear up some of the confusion regarding the resync tunables we have discussed in the past. As always, feel free to drop us a comment below if you have any questions or anything you’d like to share.

Devin Vance
First introduced to Linux back in 1996, and using Linux almost exclusively by 2005, Devin has years of Linux administration and systems engineering under his belt. He has been deploying and improving clusters with LINBIT since 2011. When not at the keyboard, you can usually find Devin wrenching on an American motorcycle or down at one of the local bowling alleys.