
On Sat, Aug 10, 2013 at 08:11:32PM +1000, Russell Coker wrote:
As an aside, I've got a 6yo Dell PowerEdge T105 whose CPU fan is becoming noisy. Any suggestions on how to find a replacement?
sorry, no idea.
How do you replace a zpool? [...] zpool that I plan to replace with 2 large disks. I'd like to create a new zpool with the 2 large disks, do an online migration of all the data, and then remove the 4 old disks.
there are two ways.
1. replace each drive individually [...] 2. create a new pool, 'zfs send' the old pool to the new one [...]
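a rough sketch of both approaches, assuming a 4-disk pool called 'oldtank', a new pool 'newtank', and made-up /dev/disk/by-id names (substitute your own):

```shell
# method 1: replace each drive in place, one at a time.
# once every member has been replaced and resilvered, the pool can
# grow to the new size (autoexpand must be on for that to happen).
zpool set autoexpand=on oldtank
zpool replace oldtank ata-OLDDISK_SERIAL1 ata-NEWDISK_SERIAL1
# wait for the resilver to finish before replacing the next drive
zpool status oldtank

# method 2: build a new pool from the 2 large disks, snapshot the
# old pool recursively, and send the whole thing across.
zpool create newtank mirror ata-NEWDISK_SERIAL1 ata-NEWDISK_SERIAL2
zfs snapshot -r oldtank@migrate
zfs send -R oldtank@migrate | zfs receive -F -d newtank
# verify the copy, then destroy the old pool to free the 4 disks
zpool destroy oldtank
```

method 2 needs both pools online at once (enough ports/bays for all 6 disks), but it's a single copy pass instead of four resilvers.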
According to the documentation you can add a new vdev to an existing pool.
yep, but as you note, you can't remove a vdev from a pool...so it's not much use for replacing a zpool.
Using a pair of mirror vdevs would allow you to easily upgrade the filesystem while only replacing two disks. But the down-side is that two disk failures could still lose most of your data, while a RAID-Z2 over 4 disks would give the same capacity as 2*RAID-1 but cover you in the case of any two failures.
yep, but you've got the performance of raidz2 rather than mirrored pairs (which, overall, is not as bad as raid5/raid6 but still isn't great). that may be exactly what you want, but you need to know the tradeoffs for what you're choosing.
zpool replace tank sdd /dev/disk/by-id/ata-ST4000DM000-1F2168_Z300MHWF-part2
That doesn't seem to work. I used the above command to replace a disk and then after that process was complete I rebooted the system and saw the following:
firstly, you don't need to type the full /dev/disk/by-id/ path. just ata-ST4000DM000-1F2168_Z300MHWF-part2 would do. typing the full path isn't wrong (it works just as well), it just takes longer to type and uglifies the output of zpool status and iostat.

secondly, why add a partition and not a whole disk?

thirdly, is sdd2 the same drive/partition as ata-ST4000DM000-1F2168_Z300MHWF-part2? if so, then it added the correct drive. to get the naming "correct"/"consistent", did you try 'zpool export tank' and then 'zpool import -d /dev/disk/by-id' ?
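the export/import dance mentioned above, spelled out (assumes a pool named 'tank', as in the command quoted earlier):

```shell
# export the pool, then re-import it telling zfs to scan only
# /dev/disk/by-id, so the stable by-id names get recorded and
# 'zpool status' stops showing the sdX names
zpool export tank
zpool import -d /dev/disk/by-id tank
zpool status tank
```

note this briefly unmounts the pool's filesystems, so do it while nothing is using them.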
It appears that ZFS is scanning /dev for the first device node with a suitable UUID.
sort of. it remembers (in /etc/zfs/zpool.cache) what drives were in each pool and only scans if you tell it to with zpool import. if zpool.cache lists sda, sdb, sdc etc then that's how they'll appear in 'zpool status'. and if zpool.cache lists /dev/disk/by-id names then that's what it'll show. try hexdumping zpool.cache to see what i mean.
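two quick ways to peek at what the cache file remembers (zpool.cache is a packed nvlist, so plain strings are readable in it; paths shown here will be whatever your system recorded):

```shell
# pull the readable strings out of the cache file; the device
# paths zfs remembered for each pool show up among them
strings /etc/zfs/zpool.cache | grep -A1 path

# or, run zdb with no arguments: it dumps the cached
# configuration of every pool in human-readable form
zdb
```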
Strangely that's not the way it works for me.
i created mine with the ata-ST31999528AS_* names, so that's what's in my zpool.cache file. to fix on your system, export and import as mentioned above.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #108: The air conditioning water supply pipe ruptured over the machine room