
On Wed, 6 Jun 2012, Joel W Shea wrote:
On 6 June 2012 14:35, Russell Coker <russell@coker.com.au> wrote:
I have 4*3TB disks in a RAID-Z. zpool list says the size is 10.9TB, df -h says the size is 7.8TB.
I expected to see 9TB as the reported size. What is happening? Is the full capacity of the disks being used?
Furthermore, with ZFS you probably shouldn't rely on df to report the correct filesystem size; use "zfs list" instead, as the df command won't be aware of, or understand, descendant filesystems, snapshots, compression, dedup, etc.
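The gap between the two numbers in the original question also falls out of simple arithmetic: "zpool list" reports raw pool capacity in binary units, while df reports usable space after raidz1 parity and overhead. A rough sketch of that arithmetic (the exact overhead deducted by df varies by pool version and reservations, so the last step is only approximate):

```python
# Rough sketch: why "zpool list" and "df" disagree for 4 x 3TB raidz1.
TB = 1000 ** 4    # disks are sold in decimal terabytes
TiB = 1024 ** 4   # the tools print binary tebibytes

raw = 4 * 3 * TB              # zpool list shows raw capacity, parity included
print(round(raw / TiB, 1))    # -> 10.9, matching "zpool list"

usable = 3 * 3 * TB           # raidz1 keeps one disk's worth of parity
print(round(usable / TiB, 1)) # -> 8.2; df's 7.8T is lower still because of
                              # metadata and reserved ("slop") space
```

So neither tool is wrong; they are answering different questions, and the full capacity of the disks is in use.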
It's not that df is "lying", but it does not tell you about the "ZFS surroundings" of the file system. All the ZFS filesystems share one zpool (at least here), and the numbers are a bit confusing to deal with if you are used to partitioning in a style like "/ 200MB, /usr 4GB, /var/log 2GB" etc.

I don't run a large farm of ZFS servers at the moment, so I can deal with the file systems more or less "half-automated", but it is a bit messy. One box as an example: it has 7 jails and one VirtualBox, and a mirror of 5 jails running on a similar box. It has 122 file systems and 219 snapshots. (For an active file system there are 5 daily snapshots per week, seven weeks of weekly snapshots, and two for the migration to the "mirror box".) It is easy to "forget" snapshots or temporary clones.

It is useful to compare the various usedby* properties of the file systems:

    used = usedbydataset + usedbysnapshots + usedbychildren + usedbyrefreservation

At the moment I am monitoring the "overall" size of the zpool and checking the properties above manually. If you are running a bigger farm of servers, it can easily become a nightmare.

Regards
Peter
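Peter's accounting identity can be checked mechanically per filesystem. A minimal sketch, assuming output in the tab-separated style of "zfs get -Hp" (scripted, parsable mode); the sample data below is made up for illustration, and on a real system you would feed in the actual output of zfs get instead:

```python
# Hypothetical sketch: verify that
#   used == usedbydataset + usedbysnapshots + usedbychildren + usedbyrefreservation
# for each filesystem, parsing "zfs get -Hp"-style lines
# (tab-separated: name, property, value, source). Sample values are invented.
sample = """\
tank/jail1\tused\t5368709120\t-
tank/jail1\tusedbydataset\t4294967296\t-
tank/jail1\tusedbysnapshots\t1073741824\t-
tank/jail1\tusedbychildren\t0\t-
tank/jail1\tusedbyrefreservation\t0\t-
"""

props = {}
for line in sample.splitlines():
    name, prop, value, _source = line.split("\t")
    props.setdefault(name, {})[prop] = int(value)

for name, p in props.items():
    parts = (p["usedbydataset"] + p["usedbysnapshots"]
             + p["usedbychildren"] + p["usedbyrefreservation"])
    assert p["used"] == parts, f"{name}: snapshot/clone space unaccounted for"
    print(name, "OK")
```

A filesystem that fails the assertion is a hint that a forgotten snapshot or clone (or a refreservation) is holding space you weren't counting.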