
On Sun, Jul 22, 2012 at 03:12:22PM +1000, Brett Pemberton wrote:
> My current system has 2 x 64GB SSDs for OS, and a ZIL (mirrored) and
is the ZIL mirrored, or just the OS partitions? i'd set that up as maybe 10 or 20GB mirrored for the OS, plus a small (1-4GB) ZIL on each SSD, swap space, and the remainder of each SSD as two separate L2ARC (read cache) devices. in total that would give 2-8GB of ZIL (very generous) and about 80GB of L2ARC for read cache.

in linux terms:

  sda1 & sdb1: OS                   10-20GB      (raid-1)
  sda2 & sdb2: swap                 ???          (optional, non-mirrored)
  sda3 & sdb3: ZIL 1 and ZIL 2      1-4GB each   (non-mirrored)
  sda4 & sdb4: L2ARC 1 and L2ARC 2  ~40GB each   (non-mirrored)

the biggest variable is how much you want for the mirrored OS partitions. that could be a lot smaller than 10-20GB if the system won't have much installed and/or you're planning to have /var and other stuff on your zpool. or it could be more if you intend to use the zpool just for bulk data storage, with everything else on the SSD.
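a rough sketch of how those partitions would get attached to the pool, assuming a pool named "tank" (a made-up name here, substitute your own) and partitions laid out as above:

  # two separate (non-mirrored) log devices for the ZIL
  zpool add tank log /dev/sda3 /dev/sdb3

  # or, if you want the ZIL mirrored instead:
  #   zpool add tank log mirror /dev/sda3 /dev/sdb3

  # both remaining partitions as L2ARC read cache
  # (cache devices can't be mirrored, and don't need to be)
  zpool add tank cache /dev/sda4 /dev/sdb4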
> started with 8 x 500gb drives. A mix of PATA and SATA, which isn't ideal, but is what I had spare.
>
> Yesterday I did a replace of one of the 500gb drives with a 1.5TB drive I also had spare. The issue I've hit is that although the zpool has seen an increase in capacity, the zfs filesystem has not.
like md raid, you won't see an increase in filesystem capacity until all drives in a vdev (not the entire pool, just that vdev) have been upgraded. so if you have 8x500gb in one vdev then you have to upgrade all 8 drives before you see the extra space, but if you have 8 drives in 4 mirrored (raid1) vdevs, then you'll see the increased capacity after upgrading just two drives.

summary & examples (rough zpool create sketches for each layout follow below):

8x500gb drives in one raidz1 vdev is about 3.5TB. more storage capacity now, but to increase capacity you have to replace all 8 drives.

8x500gb drives in 4 mirrored vdevs is about 2TB. less storage, but cheaper to upgrade in stages later (two drives at a time). this will also give better performance.

alternatively, 8 drives could also be set up as 2 raidz1 vdevs, giving about 3TB capacity. to increase that later you'd have to upgrade four drives at a time.

and if you don't care about redundant copies of your data, you could have 8 vdevs with one drive each, giving about 4TB of space, upgradable one drive at a time. (BTW, this would be slightly better than just using md raid0, but given that the only sane use for raid0 is as fast scratch space for data you can afford to lose, it's hard to see much benefit in using zfs. more convenient drive replacement and maybe ssd caching might make it worthwhile.)

i'll leave it up to someone else to answer the rest of your questions, i'm tired and just can't force my brain to concentrate enough to get it right.
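rough zpool create sketches for each of those layouts, assuming 8 disks named sdc through sdj and a pool called "tank" (names made up for illustration, not taken from your actual setup):

  # one 8-drive raidz1 vdev: ~3.5TB usable, all 8 drives have to be
  # replaced before the extra capacity shows up
  zpool create tank raidz1 sdc sdd sde sdf sdg sdh sdi sdj

  # 4 mirrored vdevs: ~2TB usable, capacity grows two drives at a time
  zpool create tank mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj

  # 2 raidz1 vdevs of 4 drives each: ~3TB usable, grows four drives at a time
  zpool create tank raidz1 sdc sdd sde sdf raidz1 sdg sdh sdi sdj

  # 8 single-drive vdevs (no redundancy): ~4TB, grows one drive at a time
  zpool create tank sdc sdd sde sdf sdg sdh sdi sdj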
> Note that this is still in testing phase, so I'm happy to start from scratch, destroy the FS and try again with all 500gb drives, then do the replace again, if I missed a step somehow. It's more important to me that I get the procedure correct, so that once this is production, I know what's going on.
good idea. it's a shame more people don't do this kind of experimentation before diving in.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #19:

floating point processor overflow