
Rohan McLeod wrote:
> a general question about file-systems and file fragmentation.
What file fragmentation? It should be negligible unless you're using a legacy filesystem like FAT, running an unusual workload, or routinely filling ext's reserved space (either by setting it to 0%, or by filling the disk as root).

Likewise, naïve defragmentation on ext is trivial: for each file f, copy f to f', then move f' back over f. The fresh copy is written into contiguous free space (if any exists), and the rename replaces the fragmented original. (For btrfs, ZFS and log-structured filesystems there is dedicated infrastructure to coalesce/rebalance/scrub/resilver/&c. for you, which will do a better job.)

If you're asking "why are we wasting code even having this feature?", consider copying a DVD image to a disk with 1 TB free but no contiguous free segment exceeding 4 GB. Should the user simply be told "buy a bigger disk, they're really cheap"? That seems ludicrous to me.
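For what it's worth, here is a minimal Python sketch of that copy-and-rename trick; the helper names are my own, and it deliberately ignores hard links, extended attributes, and files open by other processes, so treat it as an illustration rather than a real tool:

    #!/usr/bin/env python3
    # Naive "defragment by rewrite": copy each file, then rename the copy
    # over the original. The fresh copy lets the allocator pick (ideally
    # contiguous) free extents; os.replace() is an atomic rename on POSIX.
    # Illustrative only: loses hard links, skips symlinks, assumes nothing
    # else has the file open.
    import os
    import shutil
    import sys

    def rewrite_file(path):
        tmp = path + ".defrag-tmp"   # f' lives in the same directory/fs
        shutil.copy2(path, tmp)      # copy data + basic metadata (f -> f')
        os.replace(tmp, path)        # atomically move f' back over f

    def rewrite_tree(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                p = os.path.join(dirpath, name)
                if os.path.isfile(p) and not os.path.islink(p):
                    rewrite_file(p)

    if __name__ == "__main__":
        rewrite_tree(sys.argv[1])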