
On Thu, Jul 25, 2013 at 07:13:25PM +1000, Russell Coker wrote:
> On Tue, 16 Jul 2013, Craig Sanders <cas@taz.net.au> wrote:
> > my main zpools (4x1TB in RAIDZ1) are about 70% full. I probably
> > should start thinking about replacing the drives with 4x2TB
> > soon...or deleting some crap.
> 2*4TB would give you twice the storage you've currently got with less
> noise and power use.
and faster than raidz or raid5 too, but it would only give me a total of 4TB in a mirrored/raid-1 pair. seagate ST4000DM000 drives seem to be the cheapest at the moment at $195 each, so a total of $390.

noise isn't an issue (the system fans are louder than the drives, and they're low-noise fans), and power isn't much of a concern (the drives use 4.5W each idle, 6W in use).

i've been reading up on drive reviews recently - advance planning for the upgrade - and while the ST4000DM000 has got good reviews, the WD RED drives seem better. 4TB REDs aren't available yet, and 3TB WD30EFRX drives cost $158 each.
> 3*4TB in a RAID-Z1 will give you 3* the storage you've currently got.
my current plan is to replace my 4x1TB raidz backup pool with one or two mirrored pairs of either 3TB or 4TB drives. i'll wait a while and hope the 4TB WD REDs are reasonably priced - if they're cheap enough i'll buy four, or i'll get seagates. otherwise i'll get either 2 x 4TB drives or 4 x 3TB. most likely the former, as i can always add another pair later - a one-line 'zpool add', sketched below (which is one of my reasons for switching from raidz to mirrored pairs: cheaper upgrades. i've always preferred raid1 or mirrors anyway).

i'm in no hurry, i can wait to see what happens with pricing. and i've read rumours that WD are likely to be releasing 5TB drives around the same time as their 4TB drives too, which should make things "interesting".

my current 1TB drives can also be recycled into two more 2x1TB pairs to put some or all of them back into the backup pool - giving e.g. a total of 2x4TB + 2x1TB + 2x1TB, or 6TB usable space. Fortunately, I have enough SAS/SATA ports and hot-swap drive bays to do that. I'll probably just use two of them, with the idea of replacing them with 4TB or larger drives in a year or three.

i don't really need more space in my main pool. growth in usage is slow, and i can easily delete a lot of stuff (i'd get back two hundred GB or more if i moved my ripped CD and DVD collection to the zpool on my myth box, which is only about 60% full - and i could easily bring that back to 30 or 40% by deleting old recordings i've already watched).
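the "add another pair later" step really is that cheap. roughly (the by-id names below are invented for illustration, not my actual drives):

    # grow a pool of mirrors by attaching another mirrored pair vdev
    # (device names are made-up examples)
    zpool add backup mirror \
        /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1 \
        /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE2

zfs then stripes new writes across the old and new pairs.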
> How do you replace a zpool? I've got a system with 4 small disks in a
> zpool that I plan to replace with 2 large disks. I'd like to create a
> new zpool with the 2 large disks, do an online migration of all the
> data, and then remove the 4 old disks.
there are two ways.

1. replace each drive individually with 'zpool replace olddrive newdrive' commands. this takes quite a while (depending on the amount of data that needs to be resilvered). when every drive in a vdev (a single drive, a mirrored pair, a raidz) in the pool has been replaced with a larger one, the extra space becomes available.

   for a 4-drive raidz it would take a very long time to replace each one individually. IMO this is only worthwhile for replacing a failed or failing drive.

2. create a new pool, 'zfs send' the old pool to the new one, and destroy the old pool. this is much faster, but you need enough sata/sas ports to have both pools active at the same time.

   it also has the advantage of rewriting the data according to the attributes of the new pool (e.g. if you had gzip compression on the old pool and lz4 [1] compression on the new, your data will be recompressed with lz4 - highly recommended BTW).

   if you're converting from a 4-drive raidz to a 2-drive mirrored pair, this is the only way to do it.

[1] http://wiki.illumos.org/display/illumos/LZ4+Compression
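to make both concrete, they look roughly like this (pool, snapshot, and device names below are examples i've made up, a sketch rather than a tested recipe):

    # method 1: swap drives one at a time, waiting for each resilver
    zpool replace backup ata-OLD_DRIVE_SERIAL ata-NEW_DRIVE_SERIAL
    zpool status backup    # repeat for the next drive once resilvered

    # method 2: build the new pool, replicate everything, retire the old
    zpool create newpool mirror \
        /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs recv -Fdu newpool
    # verify the data arrived intact, then:
    zpool destroy oldpool

'zfs send -R' replicates all descendant filesystems, snapshots, and properties, which is what makes method 2 a whole-pool migration rather than a file copy.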
> Also are there any issues with device renaming? For example if I have
> sdc, sdd, sde, sdf used for a RAID-Z1 and I migrate the data to a
> RAID-1 on sdg and sdh, when I reboot the RAID-1 will be sdc and sdd,
> will that cause problems?
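short answer: if you build the pool with stable device names instead of sdX, renaming is a non-issue. creating your new mirror would look something like this (the model/serial strings are invented):

    # create the pool with stable /dev/disk/by-id names
    # (serials below are made-up examples)
    zpool create newpool mirror \
        /dev/disk/by-id/ata-ST4000DM000_EXAMPLE1 \
        /dev/disk/by-id/ata-ST4000DM000_EXAMPLE2

'ls -l /dev/disk/by-id/' shows how those names map to the kernel's sdX names on any given boot.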
http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

using sda/sdb/etc has the usual problems of drives being renamed if the kernel detects them in a different order (e.g. new kernel version, different load order for driver modules, variations in drive spin-up time, etc), and is not recommended except for testing/experimental pools.

it's best to use the /dev/disk/by-id device names. they're based on the model and serial number of the drive, so they're guaranteed unique and will never change.

e.g. my backup pool currently looks like this (i haven't bothered running zpool upgrade on it yet to take advantage of lz4 and other improvements):

  pool: backup
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not
        support feature flags.
  scan: scrub repaired 160K in 4h21m with 0 errors on Sat Jul 20 06:03:58 2013
config:

        NAME                           STATE     READ WRITE CKSUM
        backup                         ONLINE       0     0     0
          raidz1-0                     ONLINE       0     0     0
            ata-ST31000528AS_6VP3FWAG  ONLINE       0     0     0
            ata-ST31000528AS_9VP4RPXK  ONLINE       0     0     0
            ata-ST31000528AS_9VP509T5  ONLINE       0     0     0
            ata-ST31000528AS_9VP4P4LN  ONLINE       0     0     0

errors: No known data errors

craig

-- 
craig sanders <cas@taz.net.au>

BOFH excuse #193: Did you pay the new Support Fee?