[GLLUG] Why RAID 5 stops working in 2009

Peter Smith psmith.gllug at gmail.com
Wed Mar 25 15:36:45 EDT 2009


I mentioned this article/idea at the post-meeting activities last week. The
original post is at http://blogs.zdnet.com/storage/?p=162 from July 18,
2007. A nice commentary on/against it is at
http://dansdata.blogsome.com/2008/10/23/death-of-raid-predicted-film-at-11/

The basic premise is this: the unrecoverable read error (URE) rate for SATA
drives is generally spec'd at one error per 10^14 bits read, which works out
to roughly 12 TB. Once we hit 2 TB drives, there's a problem. When a disk
fails in a seven-drive RAID 5 array, you're left reading all six surviving
2 TB drives to rebuild the replaced 'dead' drive, and that's about 12 TB.
So, the article argues, you'll hit a URE while recovering, the rebuild will
shut down, and you'll be told to restore from backups.
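
A quick back-of-the-envelope check of those numbers (the arithmetic is mine,
taking the spec'd one-URE-per-10^14-bits rate at face value):

    # Rough arithmetic behind the premise, assuming the commonly quoted
    # spec of one URE per 10^14 bits read.
    drive_tb = 2              # 2 TB drives
    surviving_drives = 6      # seven-drive RAID 5, minus the failed disk

    rebuild_bytes = surviving_drives * drive_tb * 1e12
    rebuild_bits = rebuild_bytes * 8
    ure_bytes = 1e14 / 8      # bytes you can expect to read per URE

    print("rebuild reads %.1f TB (%.2e bits)" % (rebuild_bytes / 1e12, rebuild_bits))
    print("one URE expected per %.1f TB read" % (ure_bytes / 1e12))
    # -> rebuild reads 12.0 TB (9.60e+13 bits); one URE expected per ~12.5 TB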

So you go to RAID 6, which becomes the new RAID 5. :) At least for a while,
and not as a luxury: you need that second parity disk just to stay safe
against a single drive failure.

That's the premise. I found the commentary AFTER the meeting, and it
addresses the one problem *I* had with the original article: a failure rate
of one in 10^14 applied to a sample of roughly 10^14 bits isn't a 100%
chance of failure. I just never got around to quantifying that with math.
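
For the curious, here's roughly what that quantification looks like. This is
my sketch, assuming UREs are independent and happen at exactly the spec'd
one-per-10^14-bits rate (real drives won't be that tidy):

    import math

    # Chance of at least one URE while reading the six surviving 2 TB
    # drives during a rebuild, assuming independent errors at exactly
    # 1 per 10^14 bits.
    p_per_bit = 1e-14
    bits_read = 6 * 2e12 * 8

    # P(at least one URE) = 1 - (1 - p)^n, computed via log1p/expm1 so the
    # tiny per-bit probability doesn't get lost in floating point.
    p_at_least_one = -math.expm1(bits_read * math.log1p(-p_per_bit))
    print("P(>=1 URE during rebuild) = %.0f%%" % (100 * p_at_least_one))
    # -> about 62%: lousy odds, but a long way from a guaranteed failure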

But, I thought I'd post it anyhow, since I offered to do so. :) Discuss
among yourselves.

-- 
Peter Smith
psmith.gllug at gmail.com