Mirrored SAN vs. DRBD

Every now and then we are asked, “Why not simply use a mirrored SAN instead of DRBD?” This post shows some important differences.

Basically, the first setup has two application servers, the active one driving a DM-mirror (RAID1) across, for example, two iSCSI volumes exported by two SANs; the alternative is a standard two-node DRBD setup. Please note that both setups need some kind of cluster manager (such as Pacemaker).
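For orientation, the DRBD side of the comparison boils down to a single resource definition. The following is only a minimal sketch: the resource name r0, the hostnames alice and bob, the backing device /dev/sdb1, and the IP addresses are placeholders that have to be adapted to the actual environment.

```
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;      # local backing disk on each node
  meta-disk internal;
  on alice {
    address 192.168.1.1:7789;
  }
  on bob {
    address 192.168.1.2:7789;
  }
}
```

After creating the metadata with drbdadm create-md r0, the resource is brought up with drbdadm up r0 on both nodes and promoted with drbdadm primary r0 on the node that should be active.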

[Figure: the two setups, mirrored SAN vs. DRBD, visualized side by side]
The main differences are:

| # | Mirrored SAN | DRBD |
|---|---|---|
| 1 | High cost, single supplier | Lower cost, commercial off-the-shelf parts |
| 2 | At least 4 boxes (2 application servers, 2 SANs) | 2 servers are sufficient |
| 3 | DM-mirror only recently got a write-intent bitmap (needed if the active node crashes), and it has had performance problems | Optimized Activity Log |
| 4 | Maintenance needs multiple commands | Single userspace command: drbdadm |
| 5 | Split-brain not automatically handled | Automatic split-brain detection, recovery policies via the DRBD configuration (see the sketch after this table) |
| 6 | Data verification needs to send all data over the network, twice | Online verify optionally transports only checksums over the wire |
| 7 | Asynchronous mode (via WAN) not in the standard product | Protocol A available, optional DRBD Proxy for compression and buffering |
| 8 | Black box | GPL solution, integrated into the standard Linux kernel since 2.6.33 |
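To make the DRBD-side points 4 through 7 more concrete, here is a sketch of the relevant options. The algorithm and policy values are only examples, and the exact placement of some options (for example verify-alg) differs between DRBD 8.3 (syncer section) and 8.4 (net section), so the drbd.conf man page of the installed release is authoritative.

```
resource r0 {
  net {
    protocol A;                          # asynchronous replication, e.g. across a WAN (point 7)
    verify-alg sha1;                     # online verify sends checksums, not full blocks (point 6)
    after-sb-0pri discard-zero-changes;  # automatic split-brain recovery policies (point 5)
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  # device, disk, meta-disk and "on <host>" sections as in the sketch above
}
```

Day-to-day maintenance then goes through the single drbdadm tool (point 4), for example drbdadm adjust r0 to apply configuration changes, or drbdadm verify r0 to start an online verification run.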

So the open-source solution, DRBD, has some clear technical advantages, not just a lower price.

And, if that’s not enough — with LINBIT you get world-class support, too!
