
Rohan McLeod wrote:
> I'm seeing SATA SSDs with read/write transfer rates of about 500MB/sec, e.g. http://www.sandisk.com.au/products/ssd/sata/
> whereas PCI-e SSDs seem to have read/write transfer rates of about 1000MB/sec, e.g.
> http://techau.com.au/review-revodrive-240gb-pci-e-ssd-vs-ocz-agility-3-240gb... so I could expect a boot time of about 3.5 secs in the above case?
Run bootchart2 / pybootchartgui. The output looks like this:

http://cyber.com.au/~twb/tmp/boot.nfs-common/bootchart.pdf
http://cyber.com.au/~twb/tmp/boot.klibc/bootchart.pdf

The gap on the LHS is before bootchart started (i.e. the ramdisk). The top graph shows the overall I/O wait in pink. You can see in the klibc one that seconds 12 through 14 aren't related to I/O at all -- a faster disk won't reduce that gap.

If you look at the third graph, the only place there's any pink (I/O wait) is in loop0. That might be an artefact of these being netboot hosts, but if true, then AIUI even though there's overall I/O wait (first graph), faster I/O would AT MOST remove those pink bars in the third graph's loop0 process.

Don't forget that these graphs only measure from when the kernel starts executing, meaning that EFI POST and the bootloader both have to run first. And EFI (and the AHCI driver or proprietary RAID card, and the whole LOM BMC) take for-fucking-ever to boot -- and they're reading from ROMs, so a faster SSD won't help there at all :-/

(If you're curious, I made those graphs last week to investigate why installing nfs-common makes xdm take a whole lot longer to come up on Debian 7, compared to cheating and using nfsmount(8klibc). The latter isn't a long-term solution because it has a stub rpcbind and a nonexistent lockd.)
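FWIW, the capture-and-render workflow looks roughly like this (a sketch, assuming Debian package names and bootchart2's default log path; adjust for your distro):

```shell
# Install the collector (bootchart2) and the renderer (pybootchartgui).
apt-get install bootchart2 pybootchartgui

# Have the kernel start the collector before init: add
#     init=/sbin/bootchartd
# to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
update-grub
reboot

# After the reboot, render the collected samples to a PDF in the
# current directory.  /var/log/bootchart.tgz is bootchart2's
# default log tarball location.
pybootchartgui -f pdf -o . /var/log/bootchart.tgz
```

pybootchartgui can also emit PNG or SVG via -f if PDF isn't convenient.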