It was not all that long ago that RAID-6 was just a theoretical RAID level described in books, not something vendors actually implemented. RAID was already expensive, and adding an extra parity drive only added to the cost. And since RAID reconstruction was relatively fast, given disk performance relative to density (more on this later), RAID-6 wasn’t much of an issue.
RAID-6 is becoming a common way to get the most out of SATA drives. If you’re considering using RAID-6, be sure your RAID controller is up to the task.
Then SATA drives came along, with significantly higher density but also higher failure rates and lower performance, and RAID-6 designs soon followed to enable their use in higher-end, performance-sensitive environments.
With RAID-6 growing in popularity, now is a good time to look at how it compares with RAID-5, particularly when evaluating RAID controllers.
Driving the Problem
The time to read a single drive has gotten significantly longer over time, as you can see in the chart below:
[Chart: time to read a full disk drive, by year]
The main reason for this is that density for disk drives has been growing much faster than performance. For enterprise disks (SCSI and Fibre Channel — FC — drives), we have gone from 500MB drives to 300GB drives since 1991. That is an increase of 600 times. During the same period, the maximum performance has gone from 4 MB/sec to 125 MB/sec, an increase of 31.25 times. If disk drive performance had increased at the same rate as density, we would have drives that could be reading or writing at about 2.4 GB/sec. That would be great, but it is not likely to happen any time soon.
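To make that arithmetic concrete, here is a minimal sketch that recomputes the figures above, assuming the drive capacities and sustained rates cited in this paragraph; the helper function name is purely illustrative.

```python
# Rough sketch of the article's arithmetic: full-drive read time grows
# when capacity scales faster than sustained transfer rate.
# Drive figures are the ones cited above; 1 GB is treated as 1000 MB.

def full_drive_read_seconds(capacity_gb, rate_mb_per_s):
    """Time to stream an entire drive at its sustained rate."""
    return capacity_gb * 1000 / rate_mb_per_s

drives = {
    "1991 enterprise (0.5 GB @ 4 MB/s)":    (0.5, 4),
    "today enterprise (300 GB @ 125 MB/s)": (300, 125),
}

for label, (cap_gb, rate) in drives.items():
    t = full_drive_read_seconds(cap_gb, rate)
    print(f"{label}: {t / 60:.1f} minutes")

# Density grew 600x while transfer rate grew 31.25x; if the rate had
# tracked density, 4 MB/s * 600 = 2400 MB/s (~2.4 GB/s) and the
# full-drive read time would not have grown at all.
print("rate if it had tracked density:", 4 * (300 / 0.5), "MB/s")
```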
So it’s clear that the time to rebuild RAID LUNs has increased dramatically. Another point to consider: Look back to 1996, for example, near the introduction of 1Gb half-duplex FC. A disk drive then had a transfer rate of 16 MB/sec and a density of 9GB. From 1996 to today, the maximum performance of a drive has gone up 7.8 times and the density has gone up 33.33 times, so the time to read or write an entire drive has grown roughly four times. Yes, we have full duplex, but in 1996, a single FC channel could support a maximum of 6.25 drives reading or writing at full rate. Today, that number is 3.2. I am aware of no significant changes for enterprise drives that will change these trends. Adding SATA drives to the mix exacerbates these problems, since the drives are denser and the transfer rates are lower. I believe this became the driving reason for RAID-6, since the risk of data loss for RAID-5 increased as density increased.
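The 1996 comparison works out the same way. The sketch below uses the 1996 figures above and assumes today’s channel is roughly 400 MB/sec (an assumption implied by the 3.2-drives figure, consistent with 4Gb FC); it shows both how many drives a channel can feed at full rate and how much longer a full-drive read now takes.

```python
# Sketch of the 1996-vs-today comparison: drives per FC channel at full
# streaming rate, and growth in full-drive read time. Channel and drive
# numbers are the article's, except today's ~400 MB/s channel, which is
# inferred from the 3.2-drives figure.

def drives_per_channel(channel_mb_per_s, drive_mb_per_s):
    return channel_mb_per_s / drive_mb_per_s

def full_drive_read_s(capacity_gb, rate_mb_per_s):
    return capacity_gb * 1000 / rate_mb_per_s

# 1996: 1Gb FC (~100 MB/s usable), 9 GB drives at 16 MB/s
print("1996 drives per channel:", drives_per_channel(100, 16))    # 6.25
# Today: ~400 MB/s channel, 300 GB drives at 125 MB/s
print("today drives per channel:", drives_per_channel(400, 125))  # 3.2

growth = full_drive_read_s(300, 125) / full_drive_read_s(9, 16)
print(f"full-drive read time has grown ~{growth:.1f}x since 1996")  # ~4.2x
```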
RAID Controller Performance
Thus, RAID-6 for FC will become more common given the increase in rebuild time that denser drives with slower interfaces require. Add SATA drive usage to this, and it is clear that RAID-6 is here to stay until someone figures out something better.
The problem is that RAID-6 requires more resources from the controller to calculate the additional parity, and more bandwidth to write it and, for some vendors, to read the additional parity. The amount of extra bandwidth depends on the RAID-6 configuration. For example, with 8+1 RAID-5 you need the bandwidth of nine drives; with 8+2 RAID-6 you need 10 drives, or 11 percent more bandwidth. For 4+1 RAID-5 you need the bandwidth of five drives, but with 4+2 RAID-6 you need a sixth drive, or 20 percent more bandwidth. That is 20 percent more bandwidth for a single LUN, and surely almost every RAID controller can handle that, but what if all of the LUNs in the system were RAID-6?
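As a rough illustration of that overhead calculation, the short sketch below simply counts stripe members for each layout; the function names are made up for the example, not taken from any RAID implementation.

```python
# Minimal sketch: full-stripe writes touch data drives plus parity drives,
# so the second parity drive in RAID-6 adds a fixed fraction of bandwidth
# that shrinks as the stripe gets wider.

def stripe_drives(data_drives, parity_drives):
    return data_drives + parity_drives

def extra_bandwidth_pct(data_drives):
    raid5 = stripe_drives(data_drives, 1)  # e.g. 8+1
    raid6 = stripe_drives(data_drives, 2)  # e.g. 8+2
    return (raid6 - raid5) / raid5 * 100

for width in (8, 4):
    print(f"{width}+2 vs {width}+1: "
          f"{extra_bandwidth_pct(width):.0f}% more bandwidth")
# 8+2 vs 8+1: 11% more; 4+2 vs 4+1: 20% more
```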
Does your controller have 11 percent or 20 percent more computational resources to calculate parity, and that much more bandwidth from the controller to all of the trays of disks? Add the potential for RAID reconstruction and you might be trying to run the RAID controller faster than it was designed to run. I think it is important for everyone considering RAID-6 to understand some of the design issues in RAID controllers, so you can judge whether what you are buying can meet your performance needs. I am not going to address the differences between FC and SATA drives, since I have already covered this (see The Real Cost of Storage).
This article was originally published on Enterprise Storage Forum.