
On Sun, 5 Feb 2012, Daniel Pittman wrote:
>> btw, when btrfs is stable/production-system-ready then the authors are
>> correct - fsck won't be necessary. same as it isn't necessary on ZFS.
>> they both do (or will do, in the case of btrfs) continuous consistency
>> checking and file-level checksumming. and the btrfs authors are right
>> to focus on improving btrfs itself rather than getting too side-tracked
>> by an fsck tool.
> Very much the XFS model, then. You might have an external tool that
> will check or repair it, but the basic model is that the in-tree code
> gets it right through everything going wrong.
Just in case anyone is under the mistaken impression that ZFS is reliable
and self-repairing (well, I guess it is, technically), here is a status
update from a Scary Devil Monastery oldtimer:

    Aaaaaarrrrrrggggggghhhhhhh!!!! I just lost 3 websites on my big
    server! Power crashed this afternoon. They were all in a ZFS file
    system, /z1/blogspace, but now /z1/blogspace is EMPTY! I had built
    them and got the blogging software running, but power failed before
    I could get them backed up. I wonder what else I have lost...

(Yes, I am surprised that such an oldtimer isn't making automated backups
twice a day, but he always did seem better with big iron than these
little itty-bitty Solaris boxen.)

Me, I stick with ext3/ext4 with certain "features" turned off, because
they're filesystems I've never lost anything important on, unlike many
others.

Actually, that reminds me. This thread is hilarious - particularly Chris
Mason's reply, 3rd down:

  http://thread.gmane.org/gmane.linux.file-systems/23709

People doing fsync() (and worse, the people insisting you need to use it
otherwise you're a data hater) are probably making their data more
fragile. xfs? Fragile. ext4 without barriers? Probably just fine, thanks
very much! I can't tell any difference between the speed of filesystems
in practice anyway.

--
Tim Connors
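[For readers outside that thread: the "doing fsync()" being argued over is the conventional write-to-temp-file, fsync, rename-over-target durability pattern. A minimal Python sketch of that pattern, assuming a POSIX filesystem; the function name and structure here are illustrative, not taken from the thread:]

```python
import os
import tempfile

def atomic_write(path, data):
    """Crash-safe replacement of `path` with `data` (bytes)."""
    directory = os.path.dirname(os.path.abspath(path))

    # 1. Write the new contents to a temp file in the same directory
    #    (rename is only atomic within one filesystem).
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)          # flush file data to stable storage
    finally:
        os.close(fd)

    # 2. Atomically replace the target; readers see old or new, never half.
    os.rename(tmp, path)

    # 3. fsync the directory so the new directory entry itself survives
    #    a power loss (the step most callers forget).
    dirfd = os.open(directory, os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)
```

Whether this makes data safer or merely slower on a given filesystem/barrier configuration is exactly what the linked thread disputes.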