
On 27 September 2013 22:13, James Harper <james.harper@bendigoit.com.au> wrote:
> I want to create a filesystem to store my on-disk backups (from Bacula) on a new server. These backup files will be few (fewer than 10,000) and mostly huge (>1 GB). Because I will have multiple files being written out at once, a large bytes-per-inode ratio seems to make sense: it will greatly reduce fragmentation, and wasted space will be low because of the small number of files. Also, because the write pattern is exclusively streaming writes, I can go against my normal rule and use RAID5.
>
> I've chosen a ratio of 4 MB of data per inode based on some rough calculations. My mkfs.ext3 <dev> -i 4194304 raced through initially, but when it got to "Writing superblocks and filesystem accounting information:" it just seemed to hang. strace says it's doing seek, write 4k, seek, write 4k, over and over again. I hit ^C and the process now shows as [mkfs.ext3], but the system is still pegged at 100% disk utilisation.
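[Editor's sketch of the arithmetic behind -i 4194304: mke2fs creates roughly one inode per N bytes of filesystem, so the total inode count is filesystem size divided by the bytes-per-inode ratio. The 8 TiB array size below is an assumption for illustration only; the original message does not state the device size.]

```shell
# Hypothetical: an 8 TiB backup array formatted with -i 4194304
# (4 MiB of data per inode). mke2fs will create about
# fs_bytes / bytes_per_inode inodes in total.
fs_bytes=$((8 * 1024 * 1024 * 1024 * 1024))   # 8 TiB, assumed size
bytes_per_inode=$((4 * 1024 * 1024))          # -i 4194304
inodes=$((fs_bytes / bytes_per_inode))
echo "$inodes"   # → 2097152
```

Even at this ratio, roughly two million inodes is still far more than the fewer-than-10,000 files expected, so the ratio comfortably errs on the safe side.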
Why aren't you using ext4? It has improvements for handling large files (extents), among other things. Although I would have chosen zfs or btrfs for that task myself, unless I was stuck on something like RHEL :/
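[Editor's note: if ext4 is an option, something like the sketch below could be used. `-T largefile4` picks the stock mke2fs.conf profile whose inode_ratio is 4194304 (the same 4 MiB per inode as -i 4194304), and `-E stride=,stripe-width=` aligns allocation to the RAID5 geometry. The device name, chunk size, and disk count are assumptions for illustration; the command is only echoed here, not run.]

```shell
# Assumed geometry: 4-disk RAID5 (3 data disks), 512 KiB chunk size.
# stride       = chunk size / 4 KiB filesystem block
# stripe-width = stride * number of data disks
chunk_kib=512
data_disks=3
stride=$((chunk_kib / 4))
stripe_width=$((stride * data_disks))

# Echo rather than execute, since /dev/sdX1 is a placeholder device.
echo "mkfs.ext4 -T largefile4 -E stride=${stride},stripe-width=${stripe_width} /dev/sdX1"
```

With extents, a >1 GB file maps to a handful of extent records instead of thousands of indirect blocks, which is exactly the large-file improvement mentioned above.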