
On Tue, 3 Feb 2015 04:02:00 AM Toby Corkindale wrote:
> That's >61 terabytes written by the o/s; wear leveling is up to nearly
> 3000, which is getting on for a bit. Still no sectors getting remapped
> though, which implies no failures.
http://etbe.coker.com.au/2014/04/27/swap-breaking-ssd/

Last year I blogged about the amount of writes performed by workstations I run. The most was 128G in a day for atypical use (a torrent download and a filesystem balance) and the most for typical use was 24G in a day. If the SSDs I'm using are only capable of 61TB of writes then that would be 7 years of typical use, or 1.3 years of atypical use, before they have problems.

What portion of hard drives survives 7 years of service? I've recently had 2*3TB disks give a small number of read errors (I now use them for backups) and a 2TB disk used for backups become almost unusable. Of the 2TB+ SATA disks that I run, I'm seeing significantly more than 10% failures so far - and it wasn't 7 years ago that 2TB was the biggest disk available.

Finally, there's nothing stopping you from using a RAID-1 array of SSDs and/or having cron-job backups. One of my servers has a single SSD for root and a cron job that backs it up to a RAID-1 of 4TB hard drives - for that system I don't mind risking a bit of down-time as long as I don't lose the data.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
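PS: the lifetime arithmetic above can be sketched as follows. This is only a rough back-of-the-envelope estimate that pessimistically treats the 61TB already written (per the quoted message) as the drive's total endurance; the daily write figures are the ones from my earlier blog post.

```python
# Rough SSD lifetime estimate: endurance budget divided by daily writes.
# Assumes 61 TB total endurance (the writes observed so far, used as a
# pessimistic floor) and decimal units (1 TB = 1000 GB).

def lifetime_years(endurance_tb, daily_writes_gb):
    """Years of service before endurance_tb of writes is exhausted."""
    days = endurance_tb * 1000 / daily_writes_gb
    return days / 365

typical = lifetime_years(61, 24)    # ~24 GB/day, typical workstation use
atypical = lifetime_years(61, 128)  # ~128 GB/day, torrent + balance day

print(f"typical use:  {typical:.1f} years")   # ~7.0 years
print(f"atypical use: {atypical:.1f} years")  # ~1.3 years
```

Real drives report their actual state via SMART, so in practice you would watch the wear-leveling and reallocated-sector counters rather than rely on an estimate like this.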