
On 02/07/13 10:33, Toby Corkindale wrote:
I can't be the only one who's been waiting for the bcache stuff to hit mainstream kernels. I rebooted into a stable 3.10 kernel yesterday. Due to the requirement to reformat disks, I haven't started using bcache yet. Is anyone else here already onto it? I'd be curious to hear how it compares to the zfs+l2arc setup some of us have been using previously.
bcache.txt from the Linux kernel: https://github.com/torvalds/linux/blob/master/Documentation/bcache.txt
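
For anyone weighing up the reformat, the setup described in bcache.txt boils down to roughly the following (device names are just placeholders for your own disks, and the cache-set UUID comes from /sys/fs/bcache/ after registering the SSD) -- treat this as a sketch rather than a recipe:

    # format the backing device (spinning disk) and the cache device (SSD);
    # note this destroys any existing filesystem, hence the reformat
    make-bcache -B /dev/sdb
    make-bcache -C /dev/sdc

    # register both devices with the kernel (udev usually does this for you)
    echo /dev/sdb > /sys/fs/bcache/register
    echo /dev/sdc > /sys/fs/bcache/register

    # attach the cache set to the backing device, then use /dev/bcache0 as normal
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    mkfs.ext4 /dev/bcache0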
I do wonder if this has landed a bit too late, though. Back when they started, good SSDs were expensive and small; now you can pick up relatively large, fast drives fairly cheaply. You can afford to use one as your primary drive and just offload big media files to spinning-disk arrays (which are fine for that access pattern of linear reads and writes). Even the documentation is showing its age, using Intel X25 drives as the example, which are now four years old.

I'm sure there's still a place for this technology when you don't *want* to have to manually choose where to store different categories of files, such as in NAS/storage appliances. Some database loads might benefit too, although for PostgreSQL at least you can (and should) put the transaction logs on SSDs anyway, which gives you most of the benefit.

Having looked at it a bit more, it does seem better suited to the SSD-caching scenario than ZFS: bcache has auto-tuning parameters to detect when to bypass the cache and go straight to the disks, saving cache room for blocks that will actually benefit, and its write-ahead logging is limited only by the size of the cache (whereas ZFS's ZIL can't grow very large).

tjc