
Hi all,

In my work (Open Query - MySQL and MariaDB database backed infrastructure) we tend to find that it's not data transfer rate or RPM that's the biggest hindrance. On a disk or array it's seek time, and with SANs it's latency.

The disk story comes down to seek time still being measured in milliseconds, while you rarely transfer a lot of data in one chunk. More commonly you'll read or write smaller chunks in different locations, so you're bound by the seek speed.

A SAN is only fast when you request big chunks of data, or request smaller chunks that have been accessed in the recent past (by you or another host). Otherwise you're not using its cache, and it suffers from the same issue as any disk or array. (FYI, MySQL/MariaDB of course keep lots of data/index info in memory themselves, so whatever they do have to read from disk won't be in the SAN cache either - we tend to make SAN people cry, as the *effective* performance for these workloads is just so dismal.)

SSD is of course very nice as it gets rid of the seeks altogether. You can also use SSD as an intermediate caching layer, since it's persistent. There are now some RAID controllers available that implement this, using SSD as well as RAM in multiple layers of caching. This might come in handy when you need more storage than SSD alone can give you.

We sometimes play with SATA RAID rather than SAS for fast yet cost-efficient storage; SAS is freaking expensive for less space. SAS has a longer command queue, but once you stick a RAID controller in front of it that becomes irrelevant, as the controller will work that out for you.

No matter which physical device or filesystem you use, you'll find that setting 'noatime' (in /etc/fstab) helps. You generally don't need to know the last access time (unless there's some strict read-access security auditing requirement), and skipping it prevents at least one and possibly two seeks per access.

Cheers,
Arjen.
--
Exec.Director @ Open Query (http://openquery.com) MariaDB/MySQL services
Sane business strategy explorations at http://upstarta.com.au
Personal blog at http://lentz.com.au/blog/
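
PS: for anyone wanting to try the noatime tweak, here's a minimal /etc/fstab sketch. The device, mount point and filesystem are just placeholders for illustration, so adjust them to your own setup:

  /dev/sda3   /var/lib/mysql   ext4   defaults,noatime   0  2

You can also apply it to an already-mounted filesystem without a reboot, e.g. 'mount -o remount,noatime /var/lib/mysql'.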