
On Tue, Apr 24, 2012 at 02:45:39PM +1000, Tim Connors wrote:
On Tue, 24 Apr 2012, Peter Ross wrote:
On Tue, 24 Apr 2012, Trent W. Buck wrote:
Peter Ross wrote:
The zvol was never running out of disk space, just nearly 100% full. According to other discussions ZFS slows down if it is filling up. That would explain my problem.
IME, and according to #btrfs on freenode, that happens at about 80% full; IOW, if you have a 2TB zpool its effective storage capacity is only about 1.6TB, since after that it becomes unusably slow.
"Unusably" as in "it took me hours to delete a single 20MB daily snapshot, while the system was disconnected from the network entirely and all its other I/O-heavy processes were turned off."
Just now on the machine:
$ zpool list
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
zpool   912G   774G  138G  84%  1.00x  ONLINE  -
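If the 80% figure quoted above is the danger zone, it's easy to script a check against `zpool list`. A minimal sketch, assuming `zpool list -H -o name,capacity` prints a percentage like "84%" (the sample line below is the output above hard-coded in; on a live box you'd pipe in the real command instead):

```shell
#!/bin/sh
# Warn when a pool crosses ~80% full (the threshold quoted earlier).
# The echoed sample line stands in for: zpool list -H -o name,capacity
threshold=80
echo "zpool 84%" | while read -r name cap; do
    cap=${cap%\%}                          # strip the trailing '%'
    if [ "$cap" -ge "$threshold" ]; then
        echo "WARN: pool $name is ${cap}% full"
    else
        echo "OK: pool $name is ${cap}% full"
    fi
done
```

Dropped into cron, that would have flagged this box at 84%.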
At the moment I don't see a performance problem. About 50 people are connected to the mail server, and the box also runs a MediaWiki, a PHP developer environment, MySQL for them, etc.
It looks as if I have to keep an eye on it. I also mirror a Samba server here; maybe I should do that on another box.
Do you use snapshots? Extensively? Perhaps continual use of snapshots greatly fragments the pool. I was personally hoping not to lose 20% of my disk! But since my usage won't involve snapshots (except every month or so when I send a snapshot to offsite storage then immediately remove the local snapshot), maybe I'll be ok.
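That monthly send-offsite-then-delete routine could look something like the sketch below. The dataset `zpool/data`, the target `backup/data`, and the host `backuphost` are all made-up names, and `run=echo` keeps it a dry run that just prints the commands:

```shell
#!/bin/sh
# Monthly offsite snapshot rotation - a sketch with hypothetical
# dataset/host names, not a tested procedure.
run=echo   # set run= (empty) on a real box; 'echo' makes it a dry run

snap="zpool/data@offsite-$(date +%Y-%m)"
$run zfs snapshot "$snap"
$run sh -c "zfs send '$snap' | ssh backuphost zfs receive -F backup/data"
$run zfs destroy "$snap"   # drop the local copy once it is offsite
```

Destroying the local snapshot right after the send is what keeps the snapshot's blocks from pinning old data in the pool between backups.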
ZFS is fragmentation city, pretty much by design - that's what you get with COW and variable block sizes. I don't think ZFS snapshots are any different from all the other I/O from a fragmentation point of view: what's the difference between a fs being 50% full with static snapshots and 50% full with static data? It's all just blocks that can't be shifted around.

Having said that, all filesystems get fragmented and all hate being nearly full, and I doubt ZFS's %full drop-off point differs from any other filesystem's by more than 5% of disk capacity.

cheers,
robin