
I'm curious to know what your SSD wear indicators look like on long-running Linux machines, and how long it looks like they'll last based on usage so far. You can query these with smartctl (if your drive database is too old, run sudo update-smart-drivedb first).

I'll go first. These are just private machines, albeit ones doing reasonable work. Perhaps at some point in the future I'll be able to report long-term results for enterprise SSDs, but I can't right now.

Machine one:

Power_On_Hours             4522  (188 days)
Total_NAND_Writes_GiB     18846
Maximum_Erase_Cycle         199
Avg_Write_Erase_Ct           74
Total_Bad_Block             201
Perc_Avail_Resrvd_Space     100

This machine has been running for over 188 days non-stop, has logged about 18.4 TiB of NAND writes, and is about 2.5% of the way through its expected minimum lifespan [1]. Estimated total lifespan: 20.6 years.

Machine two:

Power_On_Hours            18326  (763 days)
Used_Rsvd_Blk_Cnt_Tot         0
Wear_Leveling_Count          13
Total_LBAs_Written       2066747494  (i.e. about 1080 GiB [2])

This one has been running for 763 days non-stop. Like the first machine, it hasn't used any of its reserved blocks yet. It's about 1.3% of the way through its minimum expected lifespan [3]. Estimated total lifespan: 160 years.

-Toby

1: i.e. 3000 write/erase cycles for MLC; in practice you seem to get quite a bit more, according to endurance testers.

2: This drive doesn't report actual NAND writes, just LBAs written, but you can convert roughly: call each LBA 512 bytes, then multiply the total by a conservative 1.1 to allow for write amplification, which comes to about 1080 GiB.

3: This machine runs a cheaper TLC-based SSD, so the theoretical number of erase/write cycles is only 1000.
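
Addendum for anyone who wants to collect these numbers programmatically rather than eyeballing the smartctl table: newer smartctl (7.0 and later) can emit JSON, which is easier to script against. Here's a rough Python sketch, assuming an ATA/SATA drive at /dev/sda (the device path is just an example, you need root, and the attribute names vary from vendor to vendor):

    import json
    import subprocess

    # Dump the SMART attribute table as JSON (needs smartctl >= 7.0 and root).
    out = subprocess.run(
        ["smartctl", "-A", "--json", "/dev/sda"],
        capture_output=True, text=True,
    ).stdout
    data = json.loads(out)

    # ATA drives expose the classic vendor attribute table here; NVMe drives
    # report a different structure, so adjust accordingly.
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        print(f'{attr["name"]:<28}{attr["raw"]["value"]}')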
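
And the back-of-the-envelope arithmetic behind the lifespan estimates, in case you want to rerun it with your own figures. The 512-byte LBAs, the 1.1x write-amplification allowance, and the rated cycle counts (3000 for MLC, 1000 for TLC) are the assumptions from the footnotes, not anything the drives report; the small differences from the figures quoted above are just rounding.

    # Linear extrapolation: if some fraction of the rated erase cycles took
    # days_up days of uptime, project how long 100% would take.
    def lifespan_years(days_up, cycles_used, cycles_rated):
        return days_up * cycles_rated / cycles_used / 365

    # Approximate NAND GiB written from a Total_LBAs_Written raw value,
    # assuming 512-byte LBAs and a conservative 1.1x for write amplification.
    def lbas_to_gib(lbas, sector_bytes=512, write_amp=1.1):
        return lbas * sector_bytes * write_amp / 2**30

    print(lifespan_years(188, 74, 3000))   # machine one: ~20.9 years
    print(lifespan_years(763, 13, 1000))   # machine two: ~160.8 years
    print(lbas_to_gib(2066747494))         # machine two: ~1084 GiB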