
Hi,

I've begun testing/commissioning a new storage server at home, trying to make it expandable in the future as 3TB drives become cheaper. I decided to give ZFS on FreeBSD a try, and that's what I'm running now; however, the issues I'm having apply equally to zfsonlinux / ZFS using FUSE, so I'm hoping to get some help here.

I knew upfront that you can't add drives to an existing raidz vdev, so to make this expandable I took the approach of maxing out the drive bays in my case, with the plan of simply replacing drives with bigger-capacity ones as they fail, or as new drives become cheap. My current system has 2 x 64GB SSDs for the OS and a mirrored ZIL, and started with 8 x 500GB drives -- a mix of PATA and SATA, which isn't ideal, but it's what I had spare.

Yesterday I replaced one of the 500GB drives with a 1.5TB drive I also had spare. The issue I've hit is that although the zpool has seen an increase in capacity, the zfs filesystem has not. Googling and IRC have been unable to help with this, so I'm running it by LUV, since this part of the system is OS-independent.

My relevant history lines are:

  2012-07-17.20:04:15 zpool create -f storage raidz ada0 ada1 ada4 ada5 ada6 ada7 ada8 ada9
  2012-07-17.20:04:41 zfs create storage/tbla
  2012-07-17.20:04:52 zfs set sharenfs=on storage/tbla
  2012-07-17.20:04:59 zfs set atime=off storage/tbla
  2012-07-17.20:05:15 zpool add -f storage log mirror/gm0s2a
  2012-07-17.20:19:27 zfs sharenfs=-maproot=0:0 storage/tbla
  2012-07-21.13:33:29 zpool offline storage ada8
  ... reboot, replace drive ...
  2012-07-21.15:20:40 zpool replace storage 5231172179341362844 ada8
  2012-07-21.16:06:17 zpool set autoexpand=on storage
  ... ^^^ note that this was run while the resilver was in progress ...
  2012-07-21.16:46:16 zpool export storage
  2012-07-21.16:46:34 zpool import storage
  2012-07-21.16:54:02 zpool set autoexpand=off storage
  2012-07-21.16:54:07 zpool set autoexpand=on storage
  2012-07-21.17:34:12 zpool scrub storage

  [root@swamp /usr/home/brett]# zpool list
  NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
  storage  3.62T   701G  2.94T    18%  1.00x  ONLINE  -

Before the replace, storage was showing 3.1T size, I believe, so it jumped by roughly 500GB, which seems about right.

  [root@swamp /usr/home/brett]# zfs list
  NAME           USED  AVAIL  REFER  MOUNTPOINT
  storage        609G  2.51T  50.4K  /storage
  storage/tbla   609G  2.51T   609G  /storage/tbla

  [root@swamp /usr/home/brett]# df -h
  Filesystem            Size    Used   Avail  Capacity  Mounted on
  /dev/mirror/gm0s1a     18G    3.1G     14G     18%    /
  devfs                 1.0k    1.0k      0B    100%    /dev
  storage               2.5T     50k    2.5T      0%    /storage
  storage/tbla          3.1T    609G    2.5T     19%    /storage/tbla

I'm also slightly confused as to why df shows storage with a size of 2.5T while storage/tbla shows 3.1T, and clearly neither matches the 3.62T that zpool list reports. Can anyone explain this to me?

Note that this is still in the testing phase, so I'm happy to start from scratch, destroy the filesystem, and try again with all 500GB drives, then redo the replace, if I missed a step somehow. It's more important to me that I get the procedure correct, so that once this is in production I know what's going on.

cheers,
/ Brett
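
P.S. One guess I'm planning to test (based only on my reading of the man pages, so I may well be wrong): since autoexpand was only set after the resilver had already started, the extra space on the replaced drive may never have been claimed. On the next run I'll try something like the following and see whether the zfs filesystem grows:

  # confirm the property is actually set on the pool
  zpool get autoexpand storage

  # explicitly ask ZFS to use any extra space on the replaced device
  zpool online -e storage ada8

If anyone can confirm whether that's the right (or wrong) procedure, I'd appreciate it.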