
On Thu, Apr 05, 2012 at 03:26:55PM +1000, Brett Pemberton wrote:
1) That doesn't provide enough of a jump over my 1.5TB drives, capacity wise, to make it worth it.
yeah, as you say 3TB drives are too expensive at the moment.
2) No space in the machine/enough sata ports to do this.
8-port IBM M1015 SAS cards (a rebadged LSI 9220-8i - 8 SAS/SATA 6Gbps ports) can be had for $US70 on ebay (plus $30 postage per order). that's about $10/port. the cards are also easily flashed to Initiator Target (IT) mode for improved software raid compatibility (rough flashing steps below).

http://www.servethehome.com/ibm-m1015-part-1-started-lsi-92208i/
http://www.ebay.com.au/itm/IBM-ServeRAID-M1015-SAS-RAID-Controller-FRU-46M08...

that ebay item page implies that the $30 shipping is per card, but when i asked them about it, they said they would combine shipping for multiple cards. the cards are also missing the back-panel bracket, and need to be fitted with either a full-height or low-profile bracket (a couple of bucks each, or scavenged from an old or dead card).
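fwiw, the crossflash to IT mode is covered in that servethehome guide. the short version - and i'm going from memory here, so treat the filenames and flags below as approximate and double-check them against the guide before running anything - is to boot freedos, wipe the IBM firmware with megarec, then flash the stock LSI 9211-8i IT firmware with sas2flsh:

  megarec -writesbr 0 sbrempty.bin          # blank the IBM SBR
  megarec -cleanflash 0                     # erase the IBM firmware, then reboot
  sas2flsh -o -f 2118it.bin -b mptsas2.rom  # flash LSI 9211-8i IT firmware + option rom
  sas2flsh -o -sasadd 500605bxxxxxxxxx      # re-set the SAS address from the sticker on the card

the 2118it.bin and mptsas2.rom images come from LSI's 9211-8i download page, not IBM's.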
The array has just over 7TB of data, so to do this with 2TB drives, I'd need 5 at the least.
hang the new drives outside the case during the transfer (propping them up on individual cardboard boxes or similar for airflow and pointing a big fan at them is probably not a bad idea), move them into the case afterwards and re-purpose the old drives. fs-level compression would probably bring that down to 4 or 5TB unless it's mostly non-compressible data like videos.
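if you go that way, the transfer itself is pretty simple. a rough sketch (pool, dataset and device names below are made up - use /dev/disk/by-id names for anything you intend to keep):

  zpool create -o ashift=12 newpool raidz /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2 /dev/disk/by-id/scsi-SATA_disk3 /dev/disk/by-id/scsi-SATA_disk4 /dev/disk/by-id/scsi-SATA_disk5
  zfs create -o compression=on newpool/data
  rsync -aHAX --progress /old/array/ /newpool/data/

then shut down, move the drives into the case, 'zpool import newpool', and re-purpose the old disks.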
Which would make a total of 12 drives in the machine. More than it can hold, and more than my 8 sata ports will be happy with. If anything, I'll contemplate doing this with 3TB drives, once they drop in price enough.
yeah, i'm waiting for 3TB drives to get around the $100 mark before i upgrade my zpools. they were slowly heading in that direction before the thailand floods last year but have now stabilised at around $200. maybe in a year or so.

MSY has WD Green 3TB for $195, and Seagate 3TB (barracuda, i think) for $219 - WD Green drives are OK but be wary of TLER issues with a raid-card in JBOD mode rather than IT mode (more on checking that below).

4x$219=$876 for 9TB raid5/raidz-1 storage vs 5x$120=$600 for 8TB r5/rz1 storage. might be worth considering when 3TB drives get down to $150...but i think i'll still wait for $100.

hmmm...with an 8-port IBM M1015 as above, 8x$120 = $960 for either 14TB r5/rz1 or 12TB raid-6/raid-z2 storage.
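(the TLER thing is easy to check, btw - smartctl can query and set the drive's SCT error recovery timeout. values are in tenths of a second, so 70 = 7 seconds:

  smartctl -l scterc /dev/sda        # show current read/write ERC timeouts, if supported
  smartctl -l scterc,70,70 /dev/sda  # try to set 7-second timeouts

iirc the newer WD Greens refuse the set command or don't support scterc at all, which is exactly the problem - the drive can spend a minute or more retrying a bad sector and get kicked out of the array by the controller.)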
This time, build it with ZFS (or maaaaaybe btrfs if you dare), as with those you can add more disks (of variable size) later and rebalance files.
+1

iirc i started using btrfs early last year, and then switched to zfs in the last half of the year. love it - it's exactly what i've always wanted for disk and filesystem management.

i know i built my backup pool in Sep. dunno for sure when i first built my 'export' pool but it was a few weeks before then (i destroyed and recreated it after creating the backup pool).

# zpool history backup | head
2011-09-27.12:35:13 zpool create -f -o ashift=12 backup raidz scsi-SATA_ST31000528AS_6VP3FWAG scsi-SATA_ST31000528AS_9VP4RPXK scsi-SATA_ST31000528AS_9VP509T5 scsi-SATA_ST31000528AS_9VP4P4LN
2011-09-27.12:37:41 zfs receive -v backup/asterisk
2011-09-27.12:37:53 zfs set mountpoint=/backup/hosts/asterisk backup/asterisk
2011-09-27.12:39:49 zfs receive -v backup/hanuman
2011-09-27.12:40:21 zfs set mountpoint=/backup/hosts/hanuman backup/hanuman
2011-09-27.12:55:15 zfs receive -v backup/kali
2011-09-27.12:59:57 zfs set mountpoint=/backup/hosts/kali backup/kali
2011-09-27.13:41:36 zfs receive -v backup/indra
2011-09-27.13:43:04 zfs set mountpoint=/backup/hosts/indra backup/indra

# zpool history export | head
History for 'export':
2011-10-01.09:26:43 zpool create -o ashift=12 -f export -m /exp raidz scsi-SATA_WDC_WD10EACS-00_WD-WCASJ2114122 scsi-SATA_WDC_WD10EACS-00_WD-WCASJ2195141 scsi-SATA_WDC_WD10EARS-00_WD-WMAV50817803 scsi-SATA_WDC_WD10EARS-00_WD-WMAV50933036
2011-10-01.09:27:21 zfs create export/home
2011-10-01.09:27:49 zfs set compression=on export
2011-10-02.09:55:31 zpool add export cache scsi-SATA_Patriot_Torqx_278BF0715010800025492-part7
2011-10-02.09:55:45 zpool add export log scsi-SATA_Patriot_Torqx_278BF0715010800025492-part6
2011-10-02.22:55:47 zfs create export/src
2011-10-02.23:03:44 zfs create export/ftp
2011-10-02.23:03:57 zfs set compression=off export/ftp
2011-10-02.23:04:24 zfs set atime=off export/ftp
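(for anyone curious, the sending side of those 'zfs receive' lines is just a snapshot piped over ssh - the dataset and snapshot names below are made up for illustration, not from my actual setup:

  zfs snapshot export/home@2012-04-05
  zfs send -i export/home@2012-04-04 export/home@2012-04-05 | ssh backuphost zfs receive backup/somehost/home

the -i form only sends the blocks changed since the previous snapshot, so regular runs are quick.)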
I've been pretty happy with XFS on mdadm RAID5. Not sure if I'd feel safe moving to ZFS yet.
zfsonlinux is quite stable. I've only had a few minor problems in the six+ months i've been using it(*), and nothing even remotely resembling data loss. i trust it a LOT more than i ever trusted btrfs.

according to my zpool history, i've had one WD 1TB drive die in my "export" pool, easily replaced with a seagate 1TB. and then that seagate died about three weeks later and i replaced it with another one. no disk deaths since then.

2011-11-28.19:26:00 zpool replace export scsi-SATA_WDC_WD10EARS-00_WD-WMAV50933036 scsi-SATA_ST31000528AS_9VP16X03
[...]
2011-12-19.09:10:12 zpool replace export scsi-SATA_ST31000528AS_9VP16X03 scsi-SATA_ST31000528AS_9VP18CCV

(neat, i just realised i finally have real data on how often drives die on me and how often i have to replace them, rather than vague recollections)

(hmmm...given what i've learnt about JBOD and TLER and my LSI 8-port card since then, it's possible that the WD and the 1st seagate aren't actually dead, they just got booted by the LSI card. i haven't got around to flashing the card to IT mode because the dos flash program doesn't like my fancy modern motherboard, so i'll have to pull the card from the system and flash it in an older machine)

some of ZFS's nicer features:

* disk, pool, volume, filesystem and snapshot management

* much simpler management tools than lvm + mdadm + mkfs

* extremely lightweight fs & subvolume creation. ditto for snapshots.

* optional compression of individual filesystems and zvols

* size limits on created filesystems and volumes are more like an easily-changed quota than, say, increasing the size of an lv on LVM. e.g.

    zfs create -V 5G poolname/volname

  oops, i meant to make that 10G:

    zfs set volsize=10G poolname/volname

  BTW, both of those commands are effectively instant, a second or so. i can't recall if a VM running off that volume would recognise the size change immediately (or with partprobe) or if i would have to reboot it before i could repartition it and resize the VM's fs.

* can use an SSD as L2ARC (read cache) and/or for the ZIL (ZFS Intent Log - a random write cache, better than a battery-backed nv cache for a hw raid card)

* error detection and correction

* 'zfs send snapshotname | ssh remotehost zfs receive ...'

  zfs knows which blocks have changed in the snapshot, so an incremental zfs send | zfs recv is faster and less load than rsync.

* audit trail / history log of actions. useful to know when you did something and also as a reminder of HOW to do some uncommon task.

ZFS can also do de-duping(**), but vast quantities of RAM & L2ARC are required, on the order of 4-6GB RAM or more per TB of storage.

(*) on two raidz-1 (similar to raid-5) pools with 4x1TB drives each in my home server, and another two raidz-1 pools with 4x2TB each in a zfs rsync backup server i built at work. and several experimental zfs VMs (running in kvm on a zfs zvol with numerous additional zvols added to build their own zpools with).

(**) on the whole, de-duping is one of those things that sounds like a great feature but isn't all that compelling in practice. it's cheaper and far more effective to add more disks than to add the extra RAM required - even with the current price of 8GB sticks.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #435:

  Internet shut down due to maintenance