Testing SSD Drives with DRBD: SanDisk Optimus Ascend

This week we continue our SSD testing series with the SanDisk Optimus Ascend 2.5" 800GB SAS drives.

Background:
SanDisk Corporation designs, develops, and manufactures flash memory storage solutions. LINBIT is known for developing DRBD (Distributed Replicated Block Device), the backbone of Linux High Availability software. LINBIT tested how quickly data can be synchronously replicated from a SanDisk 800GB SSD in server A to an identical SSD in server B. Disaster Recovery replication to an off-site server was also investigated using the same hardware.

For those who are unfamiliar with the “shared nothing” High Availability approach to block-level synchronous data replication: DRBD uses two separate servers so that if one fails, the other takes over. Synchronous replication is completely transaction safe and is used where 100% data protection is required. DRBD has been part of the mainline Linux kernel since version 2.6.33.

This post reviews DRBD in an active/passive configuration using synchronous replication (DRBD’s Protocol C). Server A is active and server B is passive. Due to DRBD’s positioning in the Linux kernel (just above the disk scheduler), DRBD is application agnostic. It can work with any filesystem, database, or application that writes data to disk on Linux.
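
For readers who want to reproduce such a setup, a minimal sketch of an active/passive resource using Protocol C follows. It assumes DRBD 8.4 syntax; the hostnames, IP addresses, and device paths are illustrative placeholders, not LINBIT's test configuration:

    # Hypothetical two-node resource definition; place on both servers.
    cat > /etc/drbd.d/r0.res <<'EOF'
    resource r0 {
        net {
            protocol C;          # fully synchronous replication
        }
        device    /dev/drbd0;
        disk      /dev/sdb1;     # backing SSD partition on each node
        meta-disk internal;
        on server-a { address 192.168.10.1:7789; }
        on server-b { address 192.168.10.2:7789; }
    }
    EOF

    # Initialize metadata and bring the resource up (run on both nodes),
    # then promote server-a to Primary, the active side of the pair:
    drbdadm create-md r0
    drbdadm up r0
    drbdadm primary --force r0   # --force is only for the initial sync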

High Availability Testing: Sequential Read/Writes
Objective: Determine the performance implications of synchronous replication when using high performance SanDisk SSD drives.

In the initial test, LINBIT used a 10GbE connection between the servers. The Ethernet connection's latency became the bottleneck when replicating data, so we replaced the 10GbE link with Dolphin Interconnect cards, removing the latency constraint.
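
LINBIT's exact benchmark parameters are not published in this post, but a sequential pass of this kind can be approximated with fio against the replicated device. Every flag below is an assumption for illustration, not LINBIT's actual job file:

    # Hypothetical fio job for a sequential write pass; run it once against
    # /dev/drbd0 (with DRBD) and once against the raw SSD (without).
    fio --name=seq-write --filename=/dev/drbd0 \
        --rw=write --bs=1M --size=10G \
        --ioengine=libaio --direct=1 --iodepth=32 \
        --group_reporting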

Each test was run five times; the averages are displayed below:

[Figures: sequential read/write throughput, single SSD vs. DRBD-replicated SSDs]

As you can see from the graphs above, the overhead introduced by DRBD synchronous replication was only 2.42%.

With an ext4 filesystem mounted on top of DRBD, writing 1GiB of data incurs a 1.41% performance hit. Even when writing a larger 10GiB file, the DRBD replication software never introduced more than an average 2.16% overhead.
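
As a rough sketch of that filesystem-level test (the mount point and dd flags are our assumptions, not LINBIT's published procedure):

    # Create and mount ext4 on the DRBD device, then write 1GiB using
    # direct I/O with a final fsync so the data actually reaches the disks.
    mkfs.ext4 /dev/drbd0
    mkdir -p /mnt/drbd
    mount /dev/drbd0 /mnt/drbd
    dd if=/dev/zero of=/mnt/drbd/testfile bs=1M count=1024 oflag=direct conv=fsync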

Random Read/Write Tests:
After establishing the maximum sequential speeds of DRBD replication with SanDisk Optimus Ascend™ 800GB SSDs, LINBIT dug deeper using random read and write assessments. Random read/writes simulate how many applications and databases behave in a production environment. The purpose of random read/write tests is to provide a realistic picture of what users will experience when they put a load on their systems.
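
A random read/write run along these lines could look like the following fio job; the 70/30 read/write mix and 4k block size are common database-style assumptions, not LINBIT's published parameters:

    # Hypothetical random read/write job against the replicated device.
    fio --name=rand-rw --filename=/dev/drbd0 \
        --rw=randrw --rwmixread=70 --bs=4k --size=10G \
        --ioengine=libaio --direct=1 --iodepth=32 \
        --runtime=60 --time_based --group_reporting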

Naturally, the disks slow down when the workload shifts to random reads and writes.

[Figures: random read/write throughput and IOPS, single SSD vs. DRBD-replicated SSDs]

On average, DRBD introduced a 1.02% overhead compared to using a single disk without DRBD. In many of LINBIT's random read/write tests, the disks actually performed faster with DRBD installed than without it. DRBD achieves this by allowing us to tune, or completely disable, write barriers and flushing at the block device level; this is considered safe as long as the user's RAID controller has a healthy battery-backed write cache.
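
The tuning mentioned above lives in the resource's disk section. A minimal sketch with DRBD 8.4 option names follows; to repeat, disable these only when the RAID controller's write cache is battery backed:

    # Relevant excerpt to add inside 'resource r0 { ... }':
    #
    #   disk {
    #       disk-barrier no;
    #       disk-flushes no;
    #       md-flushes   no;
    #   }
    #
    # Then apply the new settings to the running resource:
    drbdadm adjust r0
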
Conclusion:
Shared Nothing High Availability and Disaster Recovery replication architectures, with the help of fast SSD storage, can add outstanding resiliency to IT systems with minimal performance implications.

LINBIT found that, when synchronously replicating data, it could achieve sequential write speeds near the advertised speeds of a single SanDisk 800GB SSD. With random read/writes, DRBD likewise had very little impact on SSD performance compared to a single drive. Users can guarantee 100% data protection without sacrificing performance using this open source software solution. All that is needed is two separate systems, the DRBD data replication software, and high-performance storage in the form of SanDisk Optimus Ascend SSDs.
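
In the active/passive pair described above, failing over is simply a promotion on the surviving node. A minimal manual sketch follows; in practice a cluster manager such as Pacemaker automates this:

    # On server-b, after server-a has failed:
    drbdadm primary r0
    mount /dev/drbd0 /mnt/drbd   # services can now resume on server-b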

Our next installment will cover Micron’s M500DC SSD drives.  Until then, happy replicating!
