
On 16/07/13 09:45, Petros wrote:
Quoting "Tim Connors" <tconnors@rather.puzzling.org>
Don't ever use more than 80% of your file system? Yeah, I know that's not a very acceptable alternative.
The 80% figure is a bit of an "old time myth"; I was running ZFS at higher usage under FreeBSD before I hit the "slowness".
Chipping in with a bit of real-world info for you here. Yes, ZFS on Linux still suffers from massive slowdowns once most of the free space is used up. It's more than 80%, I'll grant, but not a whole lot more - maybe 90%?
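If you want to keep an eye on where a pool sits relative to that sort of threshold, a rough sketch along these lines works; the pool name and the 90% cut-off are assumptions to adjust for your own setup, not anything from this thread:

#!/usr/bin/env python3
# Rough sketch: warn when a zpool passes a chosen capacity threshold.
# "tank" and the 90% threshold are made-up examples.
import subprocess

POOL = "tank"        # hypothetical pool name
THRESHOLD = 90       # percent full at which slowdowns were reported above

def pool_capacity(pool):
    """Return the pool's used capacity as an integer percentage."""
    out = subprocess.check_output(
        ["zpool", "list", "-H", "-o", "capacity", pool], text=True)
    return int(out.strip().rstrip("%"))

if __name__ == "__main__":
    cap = pool_capacity(POOL)
    if cap >= THRESHOLD:
        print(f"WARNING: {POOL} is {cap}% full; ZFS may start to slow down")
    else:
        print(f"{POOL} is {cap}% full")
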
BTW: I don't use dedup. Firstly because I use cloning a lot, and after that the changes are unique to the ZFS filesystem in question.
I have trouble coming up with a scenario where I would use it, but I am pretty sure someone asked for it. Maybe someone running a big server farm and distributing copies of many gigabytes of data to many VMs on the same box?
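For context, the clone-first workflow described above looks roughly like the sketch below; the dataset names are made-up examples, not anything from this thread:

#!/usr/bin/env python3
# Sketch of the clone-based workflow: one golden image, many cheap clones.
# Dataset names ("tank/vm/golden", "tank/vm/vm01", ...) are hypothetical.
import subprocess

def zfs(*args):
    """Run a zfs(8) subcommand and raise if it fails."""
    subprocess.run(["zfs", *args], check=True)

if __name__ == "__main__":
    # Snapshot the master image once...
    zfs("snapshot", "tank/vm/golden@base")
    # ...then hand each VM a clone; only its own changes consume space.
    for i in range(1, 4):
        zfs("clone", "tank/vm/golden@base", f"tank/vm/vm{i:02d}")
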
Dedup is really painful on ZFS. Attempting to use it over a multi-terabyte pool ended in failure; using it on just a subset of the data worked out OK, but we ended up needing >50GB of memory in the server to cope. Fine for a big server, but how many of you are running little fileservers at home with that much? You do save a fair bit of disk space if you're storing a pile of virtual machine images that contain a lot of similarities.
-Toby
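For anyone wondering where numbers like that come from, here is a back-of-the-envelope sketch using the often-quoted figure of roughly 320 bytes of RAM per unique block for the dedup table (DDT); the pool size and block sizes below are illustrative assumptions, not figures from this thread:

#!/usr/bin/env python3
# Rough estimate of ZFS dedup table (DDT) RAM needs, assuming the
# commonly cited ~320 bytes of RAM per unique block.
DDT_BYTES_PER_BLOCK = 320   # rule-of-thumb DDT entry size in RAM

def ddt_ram_gib(data_bytes, avg_block_bytes):
    """Estimate RAM (GiB) needed to hold the DDT for the given data."""
    blocks = data_bytes / avg_block_bytes
    return blocks * DDT_BYTES_PER_BLOCK / 2**30

if __name__ == "__main__":
    ten_tib = 10 * 2**40
    # VM images tend towards smaller average block sizes, which inflates
    # the block count and therefore the DDT.
    for blocksize in (8 * 2**10, 64 * 2**10, 128 * 2**10):
        print(f"10 TiB of data, {blocksize // 2**10}K avg blocks: "
              f"~{ddt_ram_gib(ten_tib, blocksize):.0f} GiB of DDT")

With those assumptions, 10 TiB of data at a 64K average block size works out to about 50 GiB of DDT, which is in the same ballpark as the figure above.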