Mirrored SAN vs. DRBD

Every now and then we get asked, “Why not simply use a mirrored SAN instead of DRBD?” This post shows some important differences.

The first setup has two servers, one of which actively drives a DM-mirror (RAID1) across, e.g., two iSCSI volumes exported by two SANs; the alternative is a standard DRBD setup. Please note that both setups need some kind of cluster manager (such as Pacemaker).
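For reference, a minimal two-node DRBD resource configuration might look like the sketch below. The hostnames (`alpha`, `bravo`), device paths, and IP addresses are placeholders, not taken from the post; consult the DRBD User's Guide for the syntax of your DRBD version.

```
resource r0 {
    protocol C;               # synchronous replication
    device    /dev/drbd0;     # the replicated block device
    disk      /dev/sdb1;      # local backing device on each node
    meta-disk internal;

    on alpha {
        address 10.0.0.1:7789;
    }
    on bravo {
        address 10.0.0.2:7789;
    }
}
```

Day-to-day administration then happens through the single userspace tool mentioned below, e.g. `drbdadm up r0`.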

Here are the two setups visualized:
The main differences are:

1. Mirrored SAN: high cost, single supplier. DRBD: lower cost, commercial off-the-shelf hardware.
2. Mirrored SAN: at least four boxes (two application servers, two SANs). DRBD: two servers are sufficient.
3. Mirrored SAN: DM-mirror only recently gained a write-intent bitmap (needed if the active node crashes) and has had performance problems. DRBD: optimized activity log.
4. Mirrored SAN: maintenance needs multiple commands. DRBD: a single userspace command, drbdadm.
5. Mirrored SAN: split-brain is not handled automatically. DRBD: automatic split-brain detection, with recovery policies set in the DRBD configuration.
6. Mirrored SAN: data verification has to move all data over the network, twice. DRBD: online verify can optionally transfer only checksums over the wire.
7. Mirrored SAN: asynchronous replication (via WAN) is not in the standard product. DRBD: protocol A is available, with an optional DRBD Proxy for compression and buffering.
8. Mirrored SAN: a black box. DRBD: a GPL solution, integrated in the mainline Linux kernel since 2.6.33.
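The online-verify difference (point 6) can be illustrated with a toy sketch: instead of shipping every block across the wire to compare, each side hashes its blocks and only the small digests travel; full block data is fetched only for the blocks whose digests differ. This is an illustration of the idea only, not DRBD's actual implementation; block size, hash choice, and all names here are assumptions.

```python
import hashlib

BLOCK = 4096  # compare in 4 KiB blocks (illustrative choice)

def digests(volume: bytes) -> list[bytes]:
    """Per-block SHA-1 digests -- only these cross the 'network'."""
    return [hashlib.sha1(volume[i:i + BLOCK]).digest()
            for i in range(0, len(volume), BLOCK)]

def verify(local: bytes, remote: bytes) -> tuple[list[int], int]:
    """Return (mismatching block indices, bytes sent over the wire)."""
    local_d, remote_d = digests(local), digests(remote)
    wire = len(remote_d) * 20            # 20-byte digest per block
    bad = [i for i, (a, b) in enumerate(zip(local_d, remote_d)) if a != b]
    wire += len(bad) * BLOCK             # full data only for mismatches
    return bad, wire

local = bytes(8 * BLOCK)                 # 32 KiB of zeroes
remote = bytearray(local)
remote[5 * BLOCK] = 0xFF                 # corrupt one block on the peer
bad, wire = verify(local, bytes(remote))
print(bad, wire)                         # prints: [5] 4256
```

Here only 4,256 bytes cross the wire instead of the full 32,768, and the ratio improves further as the volume grows, which is why checksum-based verification matters over slow links.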

So the open-source solution via DRBD has some clear technical advantages, and not just in price.

And, if that’s not enough — with LINBIT you get world-class support, too!
