
On Wed, Apr 10, 2013 at 04:48:17AM +0000, James Harper wrote:
> > On 2013-04-09 02:40, James Harper wrote:
> > > I have a server that had 4 x 1.5TB disks installed in a RAID5
> > > configuration (except /boot is a 'RAID1' across all 4 disks). One of
> > > the disks failed recently and so was replaced with a 3TB disk,
> >
> > I'd be very wary of running RAID5 on disks >2TB
> >
> > Remember that, when you have a disk failure, in order to rebuild the
> > array, it needs to scan every sector of every remaining disk, then
> > write to every sector of the replacement disk.
>
> Debian does a complete scan every month anyway. A HP raid controller
> will basically be constantly (slowly) doing a background scan during
> periods of low use.
>
> And a full resync on my 4x3TB array only takes 6 hours, so the window
> is pretty small.
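(the monthly scan mentioned above is debian's mdadm checkarray cron job; a rough sketch of what it does, with "md0" as an example array name - adjust for your system:)

```shell
# debian ships /etc/cron.d/mdadm, which runs checkarray monthly; under
# the hood it starts a read-only scrub by writing to sysfs:
echo check > /sys/block/md0/md/sync_action

# watch the progress of the check (or of a resync after a disk swap):
cat /proc/mdstat

# any parity/data mismatches found during the check are counted here:
cat /sys/block/md0/md/mismatch_cnt

# stop a running check early if it's hurting performance:
echo idle > /sys/block/md0/md/sync_action
```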
with disks (and raid arrays) of that size, you also have to be concerned about data errors as well as disk failures - you're pretty much guaranteed to get some, either unrecoverable read errors or, worse, silent corruption of the data. this is why error-detecting and error-correcting filesystems like ZFS and btrfs exist - they're not just a good idea, they're essential with the large disk and storage array sizes common today. see, for example:

http://en.wikipedia.org/wiki/ZFS#Error_rates_in_harddisks

personally, i wouldn't use raid-5 (or raid-6) any more. I'd use ZFS RAID-Z (raid5 equivalent) or RAID-Z2 (raid6 equivalent, with 2 parity disks) instead.

actually, i wouldn't have used RAID-5 without a good hardware raid controller with non-volatile write cache - the performance sucks without that - but ZFS allows you to use an SSD as ZIL (ZFS Intent Log, i.e. sync write cache) and as read cache.

if performance was more important than capacity, I'd use RAID-1, so-called raid-"10", or ZFS mirrored disks - a ZFS pool of mirrored pairs is similar to raid-10 but with all the extra benefits (error detection, volume management, snapshots, etc) of zfs.

ZFSonLinux just released version 0.6.1, which is the first release they're happy to say is ready for production use. i've been using prior versions for a year or two now(*) with no problems, and just switched from my locally compiled packages to their release .debs (for amd64 wheezy, although they work fine with sid too).

http://zfsonlinux.org/debian.html

BTW, btrfs just got raid5/6 emulation support too... in a year or so (after the early-adopter guinea pigs have discovered the bugs), it could be worth considering that as an alternative. my own personal experience with btrfs raid1 & raid10 emulation was quite bad, but some people swear by it and lots of bugs have been fixed since i last used it. for large disks and large arrays, it's still a better choice than ext3/4 or xfs.
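(a sketch of the setups described above; the pool name "tank" and device names sdb..sdg are examples, not from my actual machines:)

```shell
# raid-z2 pool (raid6 equivalent, 2 parity disks) across four disks:
zpool create tank raidz2 sdb sdc sdd sde

# or, if performance matters more than capacity, a raid-10-style pool
# of mirrored pairs instead:
zpool create tank mirror sdb sdc mirror sdd sde

# add an SSD partition as ZIL (sync write log) and another as read cache:
zpool add tank log sdf1
zpool add tank cache sdg1

# scrub to detect - and, given the redundancy, repair - silent corruption:
zpool scrub tank
zpool status tank
```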
(*) i was using it at work on a file-server (its main purpose was to be a target for rsync backups of other machines) but i switched jobs last year. AFAIK, it is still running fine.

i also use it on two machines at home. one has two pools: one in active normal daily use (called "export" as a generic-but-still-useful mountpoint name) and the other called "backup", which takes zfs send backups from "export" and rsync backups from other machines on my home LAN. the other machine is my mythtv box, which has a ZFS pool for the recordings - mostly for convenience if a disk dies and needs to be replaced.

all three pools are under regular heavy use, without problems. my inclination is to use ZFS anywhere i would otherwise be tempted to use mdadm and/or LVM... which is pretty much everywhere, since i'm inclined to use mdadm RAID-1 even on desktop machines.

craig

--
craig sanders <cas@taz.net.au>