Re: [luv-main] Slow down problem with SSD boot drives ?

On Fri, 16 May 2014 15:51:40 Rohan McLeod wrote:
We started discussing SSD boot drives, and it was suggested that SSD boot drives with both SATA 3.0 and PCIe interfaces suffer a fairly severe slow-down problem after about 6-12 months of use.
1/ Has anyone noticed such a problem?
2/ If such a problem exists, any theories? All I could think was that boot drives were somehow subject to extreme wear, the reallocated 'cells' were at the end, and the replacement 'out-of-sequence' cells were slowing the drive down in the manner of a fragmented rotating drive?
There is nothing special about boot drives. Drives just respond to read and write requests; booting is no different from other reads.

http://etbe.coker.com.au/2014/04/27/swap-breaking-ssd/

There are rumors that swap breaks SSDs; I wrote the above post to refute that. While there are probably some usage patterns where swap would cause problems, one could say the same about /home, /var/log, or any other filesystem or subtree.

https://en.wikipedia.org/wiki/TRIM

The TRIM command can theoretically improve performance in some situations. However, it also significantly decreases performance in other situations, so for typical use with BTRFS it's recommended that you not use it.

Note that when the SSD erase list gets fragmented enough that TRIM helps will depend on workload. It could happen in a matter of days or not happen in a year. It would also depend on the quality of the SSD.

I've redirected this to luv-main.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

Russell Coker wrote:
On Fri, 16 May 2014 15:51:40 Rohan McLeod wrote:
We started discussing SSD boot drives, and it was suggested that SSD boot drives with both SATA 3.0 and PCIe interfaces suffer a fairly severe slow-down problem after about 6-12 months of use. 1/ Has anyone noticed such a problem? 2/ If such a problem exists, any theories? All I could think was that boot drives were somehow subject to extreme wear, the reallocated 'cells' were at the end, and the replacement 'out-of-sequence' cells were slowing the drive down in the manner of a fragmented rotating drive?

There is nothing special about boot drives. Drives just respond to read and write requests; booting is no different from other reads.
Many thanks for your reply Russell; can I take it that:

"......... I've documented my unsuccessful experiments with using USB-flash for the root filesystem of a gateway server [2] (and the flash device that wasn't used for swap died too)......"

indicates that you have also used 'designed for use as hard drive' SSDs and found no such slow-down problem? I.e., it not only should not exist, but does not exist?

regards Rohan McLeod

On Fri, 16 May 2014 17:47:58 Rohan McLeod wrote:
Russell Coker wrote:
On Fri, 16 May 2014 15:51:40 Rohan McLeod wrote:
We started discussing SSD boot drives, and it was suggested that SSD boot drives with both SATA 3.0 and PCIe interfaces suffer a fairly severe slow-down problem after about 6-12 months of use. 1/ Has anyone noticed such a problem? 2/ If such a problem exists, any theories? All I could think was that boot drives were somehow subject to extreme wear, the reallocated 'cells' were at the end, and the replacement 'out-of-sequence' cells were slowing the drive down in the manner of a fragmented rotating drive?
There is nothing special about boot drives. Drives just respond to read and write requests; booting is no different from other reads.
Many thanks for your reply Russell; can I take it that:
"......... I’ve documented my unsuccessful experiments with using USB-flash for the root filesystem of a gateway server [2] (and the flash device that wasn’t used for swap died too)......"
indicates that you have also used 'designed for use as hard drive' SSDs and found no such slow-down problem? I.e., it not only should not exist, but does not exist?
In workstations and servers I use Intel SATA SSDs that are designed for such use.

Using USB flash devices (that I got free at trade shows) was an experiment which showed that cheap flash isn't much good.

Russell Coker wrote:
On Fri, 16 May 2014 17:47:58 Rohan McLeod wrote:
Russell Coker wrote:
On Fri, 16 May 2014 15:51:40 Rohan McLeod wrote:
We started discussing SSD boot drives, and it was suggested that SSD boot drives with both SATA 3.0 and PCIe interfaces suffer a fairly severe slow-down problem after about 6-12 months of use. 1/ Has anyone noticed such a problem?
In workstations and servers I use Intel SATA SSDs that are designed for such use.
Many thanks, that was what I wanted to know; regards Rohan McLeod

Russell Coker <russell@coker.com.au> writes:
"......... I’ve documented my unsuccessful experiments with using USB-flash for the root filesystem of a gateway server [2] (and the flash device that wasn’t used for swap died too)......"
indicates that you have also used 'designed for use as hard drive' SSDs and found no such slow-down problem? I.e., it not only should not exist, but does not exist?
In workstations and servers I use Intel SATA SSDs that are designed for such use.
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs? I thought that really only applied to second-generation FTLs; I just assumed everything ran on SandForce now.

I am not aware of anything particularly special about Intel SSDs. It's just the brand issue I mentioned before.

--
Sent from my Samsung Galaxy Note 2 with K-9 Mail.

On Mon, 19 May 2014 10:36:08 AM Trent W. Buck wrote:
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs?
This report makes interesting reading, but keep in mind that they were specifically looking at SSDs with a smaller capacity (and cost) that nonetheless had to be reliable (after a 50+% failure rate with an OCZ model):

http://lkcl.net/reports/ssd_analysis.html

The thing that stands out to me is that even if you have SandForce controllers (like the OCZ ones did), a manufacturer can make them horribly unreliable with their version of the FTL and firmware. :-(

All the best,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Chris Samuel wrote:
On Mon, 19 May 2014 10:36:08 AM Trent W. Buck wrote:
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs?

This report makes interesting reading, but keep in mind that they were specifically looking at SSDs with a smaller capacity (and cost) that nonetheless had to be reliable (after a 50+% failure rate with an OCZ model):
http://lkcl.net/reports/ssd_analysis.html

Well, as a fairly naive reader, the above URL does seem to support the contention that currently "Intel SSDs are special and present completely different performance characteristics to all other SSDs".
regards Rohan Mcleod

On 20 May 2014 12:02, Rohan McLeod <rhn@jeack.com.au> wrote:
Chris Samuel wrote:
On Mon, 19 May 2014 10:36:08 AM Trent W. Buck wrote:
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs?

This report makes interesting reading, but keep in mind that they were specifically looking at SSDs with a smaller capacity (and cost) that nonetheless had to be reliable (after a 50+% failure rate with an OCZ model):
http://lkcl.net/reports/ssd_analysis.html

Well, as a fairly naive reader, the above URL does seem to support the contention that currently "Intel SSDs are special and present completely different performance characteristics to all other SSDs".
At the time that report was written that may have been the case, but the drives mentioned are all pretty old models. (And it's worth noting that the Intel 320s they were testing turned out to have a firmware bug that caused complete data loss, although obviously that news broke after the report was created.)

SSDs were still going through a maturation period back then; I'd say that current drives are considerably better. I don't know how well they'll handle repeated power cycling vs lost data, though.

The question there is whether the test those people were using was calling fsync() or not, i.e. were some of those drives lying about having flushed the data to disk? Or was the test just really naive?
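On the fsync() point, the difference is easy to demonstrate from the shell. A minimal sketch, assuming GNU coreutils dd on Linux: a plain redirect can leave the data sitting in the page cache, while conv=fsync makes dd call fsync() on the output file before exiting.

```shell
tmp=$(mktemp)

# Plain write: returns as soon as the data is in the kernel page cache;
# nothing has necessarily reached the drive yet.
printf 'payload\n' > "$tmp"

# Flushed write: conv=fsync makes dd fsync() the output file, so the
# kernel has pushed the data to the device before dd exits. (A drive
# with a lying write cache can still lose it on power failure.)
printf 'payload\n' | dd of="$tmp" conv=fsync status=none

cat "$tmp"
rm -f "$tmp"
```

A benchmark that never calls fsync() is mostly timing the kernel's page cache rather than the SSD, which is why it matters which kind the report's test was.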

Toby Corkindale wrote:
On 20 May 2014 12:02, Rohan McLeod <rhn@jeack.com.au> wrote:
Chris Samuel wrote:
On Mon, 19 May 2014 10:36:08 AM Trent W. Buck wrote:
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs?

This report makes interesting reading, but keep in mind that they were specifically looking at SSDs with a smaller capacity (and cost) that nonetheless had to be reliable (after a 50+% failure rate with an OCZ model):

http://lkcl.net/reports/ssd_analysis.html

Well, as a fairly naive reader, the above URL does seem to support the contention that currently "Intel SSDs are special and present completely different performance characteristics to all other SSDs".

At the time that report was written that may have been the case, but the drives mentioned are all pretty old models. (And it's worth noting that the Intel 320s they were testing turned out to have a firmware bug that caused complete data loss, although obviously that news broke after the report was created.)

SSDs were still going through a maturation period back then; I'd say that current drives are considerably better.
So as of "1 Jan 2014"? Thanks for the reply, Rohan McLeod

On 20 May 2014 14:38, Rohan McLeod <rhn@jeack.com.au> wrote:
Toby Corkindale wrote:
On 20 May 2014 12:02, Rohan McLeod <rhn@jeack.com.au> wrote:
Chris Samuel wrote:
On Mon, 19 May 2014 10:36:08 AM Trent W. Buck wrote:
Is it still true that Intel SSDs are special and present completely different performance characteristics to all other SSDs?

This report makes interesting reading, but keep in mind that they were specifically looking at SSDs with a smaller capacity (and cost) that nonetheless had to be reliable (after a 50+% failure rate with an OCZ model):

http://lkcl.net/reports/ssd_analysis.html

Well, as a fairly naive reader, the above URL does seem to support the contention that currently "Intel SSDs are special and present completely different performance characteristics to all other SSDs".

At the time that report was written that may have been the case, but the drives mentioned are all pretty old models. (And it's worth noting that the Intel 320s they were testing turned out to have a firmware bug that caused complete data loss, although obviously that news broke after the report was created.)

SSDs were still going through a maturation period back then; I'd say that current drives are considerably better.
So as of "1 Jan 2014"?
No, that's the last publication date of the report, not when the testing was actually performed. Or if they did perform it then, they were selecting from dusty old SSDs forgotten at the back of the shelf. They did say in the report that cheapness was an important factor, so maybe that's why they were shopping in the bargain bins? (E.g. the Crucial M4 is a 2011-era drive.)

On Fri, 16 May 2014 05:47:58 PM Rohan McLeod wrote:
indicates that you have also used 'designed for use as hard drive' SSDs and found no such slow-down problem?
The quality of SSDs is often determined by their firmware as much as, if not more than, by the physical hardware.

For instance, a laptop I had came with an SSD whose firmware I had to upgrade, as smartctl warned me that it was known to lock up after a certain number of hours of uptime. It wasn't clear if that was contiguous uptime or total since manufacture.

People on the btrfs list have reported issues like that, and it's often due to garbage collection as the SSD fills up. YMMV.

As Russell says, the Intel SSDs have by far the best reputation and, not surprisingly, a correspondingly high price.

All the best,
Chris
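For anyone wanting to check this on their own drive, smartmontools exposes both the firmware version and known-firmware warnings. A sketch: the live commands need root and a real device, and the model and firmware strings below are fabricated for illustration.

```shell
# On a live system (requires root and smartmontools):
#   smartctl -i /dev/sda   # identity info, including "Firmware Version:"
#   smartctl -H /dev/sda   # health check, where known-bad-firmware warnings appear
# Illustrative parse of a captured identity line (fabricated values):
sample='Device Model:     ExampleSSD 120GB
Firmware Version: 1.23A'
printf '%s\n' "$sample" | awk -F': *' '/^Firmware Version/ {print $2}'
```

Comparing the printed firmware version against the vendor's release notes is then a manual step; smartctl only flags firmware it already knows to be bad.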

On Fri, 16 May 2014 19:03:49 Chris Samuel wrote:
YMMV. As Russell says the Intel SSDs have by far the best reputation and, not surprisingly, a correspondingly high price.
http://www.behardware.com/articles/881-7/components-returns-rates-7.html

Intel overall have a good brand name, so they won't release shoddy products that would hurt their image. BeHardware periodically releases information on product return rates; the above URL has the latest I could find, and it shows that Intel slightly edges out Samsung for the fewest returns (i.e. highest quality).

I wouldn't assume that Intel will always be the best (I've seen other survey results indicating that they were in about #4 position), but they are always good enough that it's safe to use them.

Intel isn't always that expensive. I bought several Intel SSDs when a 120G device cost about $130 including postage. That's cheaper than a lot of spinning media. 120G is a bit small, but that was a couple of years ago and you get bigger storage for the money nowadays.

On 16 May 2014 17:11, Russell Coker <russell@coker.com.au> wrote:
https://en.wikipedia.org/wiki/TRIM
The TRIM command can theoretically improve performance in some situations. However, it also significantly decreases performance in other situations, so for typical use with BTRFS it's recommended that you not use it.
Note that when the SSD erase list gets fragmented enough that TRIM helps will depend on workload. It could happen in a matter of days or not happen in a year. It would also depend on the quality of the SSD.
Don't use the discard mount option because, as you mention, it can decrease performance in the moment. The preferred solution is to run 'fstrim' automatically at an off-peak time, e.g. in the middle of the night. It'll discard all the unused blocks in one go.
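A sketch of that setup; the cron path and schedule below are assumptions, and some distros now ship an fstrim cron job or systemd timer of their own:

```shell
# Hypothetical /etc/cron.d/fstrim entry: trim every mounted filesystem
# that supports discard, weekly at 3:30am on Sunday.
#   30 3 * * 0  root  /sbin/fstrim --all -v
#
# One-off run by hand (requires root); -v reports how much was trimmed:
#   fstrim -v /
```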

On Sun, 18 May 2014 09:42:55 PM Toby Corkindale wrote:
Don't use the discard mount option because, as you mention, it can decrease performance in the moment.
Depends on your drives; if you've got SATA 3.1 drives that support NCQ TRIM then you're likely to be OK.

It's not easy to see whether your SSD supports it (other than checking for SATA 3.1 and testing if it is present), but there is an experimental kernel patch to expose this information:

http://comments.gmane.org/gmane.linux.ide/57331

All the best,
Chris
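As a rough check of the simpler non-queued case (this only shows whether the drive advertises TRIM at all, not the queued NCQ variant), `hdparm -I` prints a "Data Set Management TRIM supported" line for TRIM-capable drives. A sketch; the live command needs root:

```shell
# On a live system (requires root and hdparm):
#   hdparm -I /dev/sda | grep -i 'trim supported'
# Illustrative: the same grep applied to a captured line from a
# TRIM-capable drive; -c counts matching lines (1 = supported).
sample='           *    Data Set Management TRIM supported (limit 8 blocks)'
printf '%s\n' "$sample" | grep -ci 'trim supported'
```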
participants (5)
- Chris Samuel
- Rohan McLeod
- Russell Coker
- Toby Corkindale
- trentbuck@gmail.com