
On Fri, 26 Jul 2013, Craig Sanders <cas@taz.net.au> wrote:
noise isn't an issue (the system fans are louder than the drives, and they're low noise fans), and power isn't much of a concern (the drives use 4.5W each idle, 6W in use)
With noise it's not simply a matter of only hearing the loudest source. The noise from different sources adds up, and you can get harmonics too. As an aside, I've got a 6yo Dell PowerEdge T105 whose CPU fan is becoming noisy. Any suggestions on how to find a replacement?
How do you replace a zpool? I've got a system with 4 small disks in a zpool that I plan to replace with 2 large disks. I'd like to create a new zpool with the 2 large disks, do an online migration of all the data, and then remove the 4 old disks.
there are two ways.
1. replace each drive individually with 'zpool replace pool olddrive newdrive' commands. this takes quite a while (depending on the amount of data that needs to be resilvered). when every drive in a vdev (a single drive, a mirrored pair, a raidz) in the pool has been replaced with a larger one, the extra space is immediately available. for a 4-drive raidz it would take a very long time to replace each one individually.
IMO this is only worthwhile for replacing a failed or failing drive.
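a minimal sketch of this route (pool name and serials below are made up, and whether you need the autoexpand property depends on your version - treat it as an assumption):

    # with autoexpand on, the pool grows once every drive in the vdev is larger
    zpool set autoexpand=on tank
    # replace one drive at a time; the pool stays online throughout
    zpool replace tank ata-OLDDISK_SERIAL1 /dev/disk/by-id/ata-NEWDISK_SERIAL1
    zpool status tank    # wait for the resilver to complete
    # ...then repeat for each remaining drive in the vdev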
2. create a new pool, 'zfs send' the old pool to the new one and destroy the old pool. this is much faster, but you need enough sata/sas ports to have both pools active at the same time. it also has the advantage of rewriting the data according to the attributes of the new pool (e.g. if you had gzip compression on the old pool and lz4[1] compression on the new your data will be recompressed with lz4 - highly recommended BTW)
if you're converting from a 4-drive raidz to a 2-drive mirrored pair, then this is the only way to do it.
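a minimal sketch of the send/receive route, assuming the old pool is called 'tank' and the new one 'tank2' (both names made up):

    # take a recursive snapshot of every dataset in the old pool
    zfs snapshot -r tank@migrate
    # replicate the whole pool - snapshots, properties and all
    zfs send -R tank@migrate | zfs receive -F tank2
    # once the copy is verified, retire the old pool
    zpool destroy tank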
According to the documentation you can add a new vdev to an existing pool. So if you have a pool named "tank" you can do the following to add a new RAID-1:

    zpool add tank mirror /dev/sde2 /dev/sdf2

Then if you could remove a vdev you could easily migrate a pool. However it doesn't appear possible to remove a vdev; you can remove a device that contains mirrored data but nothing else.

Using a pair of mirror vdevs would allow you to easily upgrade the filesystem while only replacing two disks. The down-side is that two disk failures could still lose most of your data, while a RAID-Z2 over 4 disks would give the same capacity as 2*RAID-1 but survive any two failures.
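A sketch of that two-mirror layout and the two-disk upgrade path (all device names hypothetical):

    # pool built from two mirror vdevs
    zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B \
                     mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D
    # later, grow the pool by replacing both disks in just one mirror
    zpool replace tank ata-DISK_A /dev/disk/by-id/ata-BIG_E
    zpool replace tank ata-DISK_B /dev/disk/by-id/ata-BIG_F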
Also are there any issues with device renaming? For example if I have sdc, sdd, sde and sdf used for a RAID-Z1 and I migrate the data to a RAID-1 on sdg and sdh, then when I reboot the RAID-1 disks will become sdc and sdd. Will that cause problems?
http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool
using sda/sdb/etc has the usual problems of drives being renamed if the kernel detects them in a different order (e.g. new kernel version, different load order for driver modules, variations in drive spin up time, etc) and is not recommended except for testing/experimental pools.
it's best to use the /dev/disk/by-id device names. they're based on the model and serial number of the drive so are guaranteed unique and will never change.
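for example (the serial numbers below are made up):

    # each drive (and each partition) gets a stable symlink under by-id
    ls -l /dev/disk/by-id/
    # create a new mirror using those names
    zpool create tank2 mirror \
        /dev/disk/by-id/ata-ST4000DM000-1F2168_Z300AAAA \
        /dev/disk/by-id/ata-ST4000DM000-1F2168_Z300BBBB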
zpool replace tank sdd /dev/disk/by-id/ata-ST4000DM000-1F2168_Z300MHWF-part2

That doesn't seem to work. I used the above command to replace a disk and then after that process was complete I rebooted the system and saw the following:

# zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 11h38m with 0 errors on Thu Aug  1 11:38:57 2013
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd2    ONLINE       0     0     0

It appears that ZFS is scanning /dev for the first device node with a suitable UUID.
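A commonly suggested fix (sketched here assuming the pool is named "tank") is to export the pool and re-import it scanning only the by-id directory, so the stable names get recorded:

    zpool export tank
    zpool import -d /dev/disk/by-id tank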
e.g. my backup pool currently looks like this (i haven't bothered running zpool upgrade on it yet to take advantage of lz4 and other improvements)
  pool: backup
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: scrub repaired 160K in 4h21m with 0 errors on Sat Jul 20 06:03:58 2013
config:

        NAME                           STATE     READ WRITE CKSUM
        backup                         ONLINE       0     0     0
          raidz1-0                     ONLINE       0     0     0
            ata-ST31000528AS_6VP3FWAG  ONLINE       0     0     0
            ata-ST31000528AS_9VP4RPXK  ONLINE       0     0     0
            ata-ST31000528AS_9VP509T5  ONLINE       0     0     0
            ata-ST31000528AS_9VP4P4LN  ONLINE       0     0     0
errors: No known data errors
Strangely that's not the way it works for me.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/