
On 13 January 2015 at 03:12, Robin Humble <rjh+luv@cita.utoronto.ca> wrote: [snip]
> so how about SSDs? assuming 100k iops & 500 MB/s, that works out at a "small" file size of maybe ~5kB or a bit less, which is impressive. however small random write i/o is the absolute worst thing you can do to these things, so be sure to buy a good one (ie. intel only IMHO).
I've been kinda unimpressed by Intel's SSDs. The high-end ones do OK, but only OK, and they're expensive. Other brands have enormous random-write performance now, e.g. 90,000 IOPS for 4 kB random writes on the current-gen Samsungs, which are also cheap and have long endurance.
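For anyone who wants to redo Robin's arithmetic: the crossover file size is just sequential bandwidth divided by IOPS; below that size the drive is IOPS-bound, above it bandwidth-bound. Quick Python sketch (the figures are illustrative, not from any datasheet):

    # File size where a drive stops being IOPS-bound and becomes
    # bandwidth-bound: smaller files are limited by IOPS, larger
    # ones by sequential throughput.
    def crossover_bytes(iops, bandwidth_mb_per_s):
        return bandwidth_mb_per_s * 1000000.0 / iops

    print(crossover_bytes(100000, 500))  # Robin's 100k IOPS @ 500 MB/s -> 5000.0 (~5 kB)
    print(crossover_bytes(90000, 500))   # 90k IOPS @ 500 MB/s -> ~5556 (~5.5 kB)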
> mh also tends towards zillions of files in a dir and some fs's don't deal with that well. XFS used to handle 100k+ file/dir a lot better than ext[34]. dunno if it still does. many files in a dir is not a great idea with any fs.
ext3/4 gained hashed directory indexes years and years ago, so they've been fine with very large directories since then. That said, it's still not a great idea, and I've seen systems crippled by thousands upon thousands of files in a single directory (GlusterFS comes to mind as a particularly epic failure). If you're stuck with a filesystem store, the usual dodge is to hash files out into subdirectories; rough sketch below.
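Something like this, in Python (the layout and helper name are made up for illustration):

    import hashlib, os

    def shard_path(root, name, levels=2):
        # Spread files across 256**levels subdirectories by hashing
        # the filename, so no single directory ever gets huge.
        h = hashlib.md5(name.encode()).hexdigest()
        parts = [h[2 * i:2 * i + 2] for i in range(levels)]
        return os.path.join(root, *(parts + [name]))

    p = shard_path("/var/mail/store", "message-12345.eml")
    os.makedirs(os.path.dirname(p), exist_ok=True)
    # -> /var/mail/store/xx/yy/message-12345.eml, where xx/yy are
    #    the first two bytes of the md5 of the filename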
> the old adage "a filesystem is not a database" springs to mind...
Indeed, that's the crux of it. And this is why sensible mail programs end up putting your email into SQLite or LevelDB instead of antique mbox files or maildirs. :) (I'd go LevelDB over SQLite myself, with Solr or similar on the side for searching, though so far we mostly see SQLite used on its own.)
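The whole "mail store" ends up being about this much code; a minimal sketch, with a schema I've invented for illustration:

    import sqlite3

    db = sqlite3.connect("mail.db")
    db.execute("""CREATE TABLE IF NOT EXISTS messages (
        id     INTEGER PRIMARY KEY,
        folder TEXT,
        msgid  TEXT UNIQUE,  -- Message-ID header
        raw    BLOB          -- full RFC 2822 message bytes
    )""")

    def store(folder, msgid, raw_bytes):
        # One row per message: no files-per-directory limits, and
        # listing a folder is an indexed query, not a readdir().
        db.execute("INSERT OR IGNORE INTO messages (folder, msgid, raw)"
                   " VALUES (?, ?, ?)", (folder, msgid, raw_bytes))
        db.commit()

    store("INBOX", "<example@host>", b"From: ...\r\n\r\nhello")

-T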