
On 03/07/13 15:22, Joel W Shea wrote:
> On 2 July 2013 20:20, Toby Corkindale <toby.corkindale@strategicdata.com.au> wrote: <...>
>> Having looked at it a bit more, it seems better suited to the SSD-caching scenario than ZFS; there are auto-tuning parameters in bcache to detect at what point to just bypass the cache and go straight to the disks, saving more cache room for blocks that will benefit.
> This is precisely what the ZFS L2ARC is supposed to do.
>> And the write-ahead logging is limited only by the size of the cache (whereas ZFS's ZIL can't grow very large).
> I don't know enough about bcache writes to make a comparison, but the maximum ZIL size would only be dictated by write throughput.
As I understand it, ZFS flushes the ZIL after at most five seconds, and FAQs recommend sizing the ZIL at about ten times the backing disks' maximum per-second write throughput (so for 200 MB/sec, a 2 GB ZIL). My reading of that is: if you get a burst of small writes to the ZIL that the backing disks can't write out quickly, you'll hit a wall in under ten seconds.

Whereas, if I understand bcache's design correctly, it will keep writing data to the SSD until the SSD fills up, with no maximum dirty time. Because it accumulates more writes before streaming them to the backing disks, there's a better chance of random writes being aggregated into sequential ones. (From http://bcache.evilpiepirate.org/Design/ )
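To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The 450 MB/sec burst figure is made up for illustration, and the simple fill/drain model glosses over how transaction-group commits actually drain the ZIL, so treat it as hand-waving rather than a measurement:

# Back-of-the-envelope model of the ZIL sizing rule of thumb above.
# All numbers are illustrative, not measured.

backing_write_mb_s = 200    # sustained write speed of the backing disks (MB/s)
rule_of_thumb_s = 10        # FAQ advice: size the ZIL for roughly 10 s of writes
                            # (ZFS commits a transaction group after at most ~5 s)

zil_size_mb = backing_write_mb_s * rule_of_thumb_s
print("Suggested ZIL size: ~%.1f GB" % (zil_size_mb / 1024.0))  # ~2.0 GB

# The "wall": if small synchronous writes arrive faster than the backing
# disks can drain them, that headroom runs out within seconds.
burst_write_mb_s = 450      # hypothetical SSD-speed burst of small sync writes
net_fill_mb_s = burst_write_mb_s - backing_write_mb_s
print("ZIL full after ~%.1f s of such a burst" % (zil_size_mb / net_fill_mb_s))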
> At least bcache is filesystem agnostic and doesn't suffer from the NIH syndrome.
Agreed.