Any issues running Linux from SSD on laptop?

Hello All, To expand on the subject line, I have inherited an Acer laptop on which I intend to install Linux. I think the machine spec should be OK for most linux distros...Core 2 duo T5600, 4GB, 120GB, Nvidia graphics. I am considering replacing the 120GB drive with an SSD, as it should smarten up the performance a tad and probably improve battery life a bit as well. Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever? Cheers, -- Regards, Terry Duell

Terry Duell <tduell@iinet.net.au> wrote:
Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
It has been suggested here in the past that you should use the noatime option to ensure that not every access to a file causes blocks to be written to the ssd.

Hello Jason, On Sat, 03 Nov 2012 13:19:55 +1100, Jason White <jason@jasonjgw.net> wrote:
Terry Duell <tduell@iinet.net.au> wrote:
Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
It has been suggested here in the past that you should use the noatime option to ensure that not every access to a file causes blocks to be written to the ssd.
Thanks. Cheers, -- Regards, Terry Duell

On 03/11/12 13:19, Jason White wrote:
Terry Duell <tduell@iinet.net.au> wrote:
Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
It has been suggested here in the past that you should use the noatime option to ensure that not every access to a file causes blocks to be written to the ssd.
Given that the 'relatime' option is now the default (since kernel 2.6.30), it's probably not that much of a concern anymore. Yes, relatime will create slightly more writes than noatime would, but it will be considerably fewer than the old default option. (relatime only updates the access time if the previous access time was earlier than the current modify/change times; noatime causes programs like mutt to break) Going back to the original question ... I've had an SSD in a netbook for over two years now, Linux works fine on it. Cheers, Paul -- Paul Dwerryhouse | PGP Key ID: 0x6B91B584 http://weblog.leapster.org/
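For reference, the atime behaviour discussed above is just a mount option; a minimal /etc/fstab sketch (the UUID, filesystem and mount point are placeholders, not taken from the thread):
# relatime has been the kernel default since 2.6.30; noatime disables access-time updates entirely
UUID=xxxx-xxxx  /  ext4  defaults,noatime  0  1
# an already-mounted filesystem can be switched without rebooting:
mount -o remount,noatime /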

Paul Dwerryhouse <paul@dwerryhouse.com.au> wrote:
Given that the 'relatime' option is now the default (since kernel 2.6.30), it's probably not that much of a concern anymore. Yes, relatime will create slightly more writes than noatime would, but it will be considerably fewer than the old default option.
I thought the new default was just for ext3/ext4, but I might be mistaken. The Debian page cited earlier in this thread lists additional mount options, including enabling trim and others specific to Btrfs. Whether one would run Btrfs on a laptop at this stage in its development is a matter of judgment. I can predict there'll be arguments on both sides from people who have different needs and priorities. I personally wouldn't feel comfortable with it yet for general use, but for a development/test system with no important data to preserve that could easily be re-installed, I might use it.

On 03/11/12 20:48, Jason White wrote:
Whether one would run Btrfs on a laptop at this stage in its development is a matter of judgment.
FWIW I've been running it on my work laptops for /home since January 2009 (pre-mainline kernel merge) and haven't had a problem (yet) with it. That does not mean it will be pain-free for others; I don't use any features like snapshots, subvolumes, etc., and it's done well by me so far. This page is worth reading before deciding if you want to try it out or not: https://btrfs.wiki.kernel.org/index.php/Gotchas cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Hi, On 3/11/2012 8:48 PM, Jason White wrote:
Whether one would run Btrfs on a laptop at this stage in its development is a matter of judgment. I can predict there'll be arguments on both sides from people who have different needs and priorities. I personally wouldn't feel comfortable with it yet for general use, but for a development/test system with no important data to preserve that could easily be re-installed, I might use it.
Insofar as Oracle is concerned with their own version of RHEL.... BTRFS is production ready, but they haven't certified their other main products to use it! So as far as I'm concerned, either it isn't "really" production ready or they should get onto certifying their other products to give the "production ready" status some real world worth. I've subscribed to linux-btrfs-owner@vger.kernel.org mailing list, just to have an idea as to what is going on .... patches keep on coming and other issues crop up too often as well. I'm not ready to use BTRFS on anything yet myself. In time I do hope it turns out to be a better option than ZFS. I wish that there weren't any licensing issues with ZFS code living in the Linux kernel because I love ZFS on Solaris and I don't want to use ZFS via FUSE -- nor do I want to use BSD kernel (which is another possibility with Debian). Cheers -- Kind Regards AndrewM Andrew McGlashan Broadband Solutions now including VoIP Current Land Line No: 03 9012 2102 Mobile: 04 2574 1827 Fax: 03 9012 2178 National No: 1300 85 3804 Affinity Vision Australia Pty Ltd http://affinityvision.com.au http://securemywireless.com.au http://adsl2choice.net.au In Case of Emergency -- http://affinityvision.com.au/ice.html

On Sun, 4 Nov 2012, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
I've subscribed to linux-btrfs-owner@vger.kernel.org mailing list, just to have an idea as to what is going on .... patches keep on coming and other issues crop up too often as well. I'm not ready to use BTRFS on anything yet myself. In time I do hope it turns out to be a better option than ZFS. I wish that there weren't any licensing issues with ZFS code living in the Linux kernel because I love ZFS on Solaris and I don't want to use ZFS via FUSE -- nor do I want to use BSD kernel (which is another possibility with Debian).
One major problem with ZFS is the way it manages memory. It's supposed to be possible to keep its memory use down by limiting the ARC size, but that doesn't seem to work as designed. My experience is that a zfsonlinux system with 4G of RAM and light load was giving kernel panics due to memory allocations failing. While it's relatively cheap to add heaps of RAM on new systems, it's still an annoyance, and this prevents the use of ZFS on small systems. So while I could probably upgrade my Thinkpad to 8G of RAM and run ZFS, it's a lot easier to keep the current 5G and use BTRFS. It's possible that the FUSE option would improve things; presumably the user-space ZFS code run by FUSE would be pageable and therefore the memory allocation problems wouldn't be as bad. But FUSE has other issues and I'd prefer not to use it. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
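For anyone wanting to try capping the ARC as described above, zfsonlinux exposes it as a module parameter; a minimal sketch (the 1 GiB figure is only an example value, and as noted above the cap has not always behaved as designed):
# /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 1 GiB at module load
options zfs zfs_arc_max=1073741824
# or adjust it on a running system:
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max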

Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
I've subscribed to linux-btrfs-owner@vger.kernel.org mailing list, just to have an idea as to what is going on .... patches keep on coming and other issues crop up too often as well. I'm not ready to use BTRFS on anything yet myself.
Patches are to be expected as performance improvements and new features are added. It's the incidence of corruption not due to hardware issues that I would consider to be more indicative.

On 4/11/2012 3:35 PM, Jason White wrote:
Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
I've subscribed to linux-btrfs-owner@vger.kernel.org mailing list, just to have an idea as to what is going on .... patches keep on coming and other issues crop up too often as well. I'm not ready to use BTRFS on anything yet myself.
Actually, this is the list I've subscribed to: linux-btrfs@vger.kernel.org
Patches are to be expected as performance improvements and new features are added. It's the incidence of corruption not due to hardware issues that I would consider to be more indicative.
Okay, but I think it will be a while yet until I start to use BTRFS.... Kind Regards AndrewM

On 04/11/12 15:35, Jason White wrote:
Patches are to be expected as performance improvements and new features are added. It's the incidence of corruption not due to hardware issues that I would consider to be more indicative.
It is marked as an experimental filesystem for a reason. :-) That said, there's been a lot of work done trying to catch issues which could leave the filesystem in an inconsistent state on loss of power (which was the most recent corruption issue I've noticed reported). Which kernel version are you thinking of? cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 04/11/12 21:09, Chris Samuel wrote:
That said, there's been a lot of work done trying to catch issues which could leave the filesystem in an inconsistent state on loss of power (which was the most recent corruption issue I've noticed reported). Which kernel version are you thinking of?
I take that back, the following 2 patches merged for 3.7 address potential filesystem corruption issues:

commit 5af3e8cce8b7ba0a2819e18c9146c8c0b452d479
Author: Stefan Behrens <sbehrens@giantdisaster.de>
Date:   Wed Aug 1 18:56:49 2012 +0200

    Btrfs: make filesystem read-only when submitting barrier fails

    So far the return code of barrier_all_devices() is ignored, which means that errors are ignored. The result can be a corrupt filesystem which is not consistent. This commit adds code to evaluate the return code of barrier_all_devices(). The normal btrfs_error() mechanism is used to switch the filesystem into read-only mode when errors are detected.

    In order to decide whether barrier_all_devices() should return error or success, the number of disks that are allowed to fail the barrier submission is calculated. This calculation accounts for the worst RAID level of metadata, system and data. If single, dup or RAID0 is in use, a single disk error is already considered to be fatal. Otherwise a single disk error is tolerated. The calculation of the number of disks that are tolerated to fail the barrier operation is performed when the filesystem gets mounted, when a balance operation is started and finished, and when devices are added or removed.

    Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>

commit 62856a9b73860cffe2a3d91b069393b88c219aa6
Author: Stefan Behrens <sbehrens@giantdisaster.de>
Date:   Tue Jul 31 11:09:44 2012 -0600

    Btrfs: detect corrupted filesystem after write I/O errors

    In check-integrity, detect when a superblock is written that points to blocks that have not been written to disk due to I/O write errors.

    Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>

-- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 05/11/12 09:48, Chris Samuel wrote:
commit 62856a9b73860cffe2a3d91b069393b88c219aa6 Author: Stefan Behrens <sbehrens@giantdisaster.de> Date: Tue Jul 31 11:09:44 2012 -0600
Btrfs: detect corrupted filesystem after write I/O errors
In check-integrity, detect when a superblock is written that points to blocks that have not been written to disk due to I/O write errors.
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
This patch is part of the runtime integrity checking that can be enabled in btrfs; it's designed to catch potential bugs, so its existence may not be indicative of an actual issue, just something that it's important to know about if a missed write does happen. -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 04/11/12 14:33, Andrew McGlashan wrote:
Insofar as Oracle is concerned with their own version of RHEL.... BTRFS is production ready, but they haven't certified their other main products to use it!
I am very dubious of such claims; there should be no magic code in there that isn't in mainline by now, and Chris Mason left Oracle to join Fusion-IO back in June, where he was joined by Josef Bacik from Red Hat. It's still marked experimental in the mainline kernel:

# Btrfs is highly experimental, and THE DISK FORMAT IS NOT
# YET FINALIZED. You should say N here unless you are interested
# in testing Btrfs with non-critical data.

cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Hi, On 04/11/2012, at 2:33 PM, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
Insofar as Oracle is concerned with their own version of RHEL.... BTRFS is production ready, but they haven't certified their other main products to use it!
This is half-true: Oracle supports btrfs if used in production, but the Linux team is not responsible for product certification. So, you can happily store Oracle binaries on btrfs (and I know a few customers that are now exploring this), but you can't use btrfs to store Oracle Database data. And why would you anyway, given so many better options, not least of which is Oracle ASM? And, Oracle Database is one of the few Oracle products that actually certifies filesystems for data storage. Where a product has no filesystem certifications at all, e.g. the middleware stack, you can also use btrfs. Cheers, Avi

Hi Avi, On 5/11/2012 8:04 AM, Avi Miller wrote:
On 04/11/2012, at 2:33 PM, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
Insofar as Oracle is concerned with their own version of RHEL.... BTRFS is production ready, but they haven't certified their other main products to use it!
This is half-true: Oracle supports btrfs if used in production, but the Linux team is not responsible for product certification. So, you can happily store Oracle binaries on btrfs (and I know a few customers that are now exploring this), but you can't use btrfs to store Oracle Database data. And why would you anyway, given so many better options, not least of which is Oracle ASM?
I wish it was 0% true... ;)
And, Oracle Database is one of the few Oracle products that actually certifies filesystems for data storage. Where a product has no filesystem certifications at all, e.g. the middleware stack, you can also use btrfs.
I much prefer using standard files on a file system without autoextend enabled or any kind of automatic space management; it is much less prone to space issues when managed properly and it is much easier for me to clone databases for development and test requirements from a simple production backup. Cheers -- Kind Regards AndrewM

Hi Terry, Have been using SSD with Linux daily for over 3 months without any problems. You might want to check out this article: http://wiki.debian.org/SSDoptimization Cheers, On Sat, Nov 3, 2012 at 1:15 PM, Terry Duell <tduell@iinet.net.au> wrote:
Hello All, To expand on the subject line, I have inherited an Acer laptop on which I intend to install Linux. I think the machine spec should be OK for most linux distros...Core 2 duo T5600, 4GB, 120GB, Nvidia graphics. I am considering replacing the 120GB drive with an SSD, as it should smarten up the performance a tad and probably improve battery life a bit as well. Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
Cheers, -- Regards, Terry Duell
-- simple is good http://brucewang.net http://twitter.com/number5

Hello Bruce, On Sat, 03 Nov 2012 13:38:28 +1100, Bruce Wang <bruce@brucewang.net> wrote:
Hi Terry,
Have been using SSD with Linux daily for over 3 months without any problems.
You might want to check out this article: http://wiki.debian.org/SSDoptimization
Thanks for that link, very helpful. Cheers, -- Regards, Terry Duell

On Sat, 03 Nov 2012 13:15:50 +1100, Terry Duell <tduell@iinet.net.au> wrote:
Hello All, To expand on the subject line, I have inherited an Acer laptop on which I intend to install Linux. I think the machine spec should be OK for most linux distros...Core 2 duo T5600, 4GB, 120GB, Nvidia graphics. I am considering replacing the 120GB drive with an SSD, as it should smarten up the performance a tad and probably improve battery life a bit as well.
I may have been a bit premature...the installed drive is an IDE. MSY only have SATA SSDs in their price list, and a quick snoop about suggests the IDE SSDs that are available are much more expensive. On price alone it may not be worth it, and I have yet to find info that will tell the story on the performance. Cheers, -- Regards, Terry Duell

On Sat, 03 Nov 2012 17:50:58 +1100, Jason White <jason@jasonjgw.net> wrote:
Terry Duell <tduell@iinet.net.au> wrote:
I may have been a bit premature...the installed drive is an IDE.
I am very surprised that a new laptop would have an IDE drive these days.
It's not new. In my original post I did say "...I have inherited an Acer laptop on which I intend to install Linux"
My laptop is over three years old and it has a SATA drive according to the hardware manual.
I had looked through the manual but it didn't specify the drive type, and I assumed it was SATA, so it was a bit of a surprise when I opened the case to see an IDE. Not to worry, I'll just have to live with 'normal' performance :-) Cheers, -- Regards, Terry Duell

Terry Duell wrote:
On Sat, 03 Nov 2012 13:15:50 +1100, Terry Duell<tduell@iinet.net.au> wrote:
Hello All, To expand on the subject line, I have inherited an Acer laptop on which I intend to install Linux. I think the machine spec should be OK for most linux distros...Core 2 duo T5600, 4GB, 120GB, Nvidia graphics. I am considering replacing the 120GB drive with an SSD, as it should smarten up the performance a tad and probably improve battery life a bit as well.
Hi Terry, It is worth it from a performance point of view, although as you say, the IDE drives are more expensive. Depending on how much you plan to use the laptop, CF may be an option; I have an old BenQ Celeron Mobile 1.6GHz/1GB RAM machine running from an 8GB CF card and it speeds things up a lot, especially in the boot department. And with CF at 32GB or greater at reasonable prices these days... Although if you plan to use it a fair bit I would still recommend an SSD, as these are optimised for use as dynamic storage; I don't think the wear leveling on CF is as good.
I may have been a bit premature...the installed drive is an IDE. MSY only have SATA SSDs in their price list, and a quick snoop about suggests the IDE SSDs that are available are much more expensive. On price alone it may not be worth it, and I have yet to find info that will tell the story on the performance.
Cheers,
cheers Robert

Hello Robert, On Sat, 03 Nov 2012 17:56:55 +1100, Robert Moonen <n0b0dy@bigpond.net.au> wrote: [snip]
It is worth it from a performance point of view, although as you say, the IDE drives are more expensive.
Depending on how much you plan to use the laptop CF may be an option, I have an old benq celeron mobile 1.6GHz/1GB ram running on a CF (8GB) and it speeds things up a lot in the boot department especially. And with CF at 32GB or greater at reasonable prices these days...
I had a bit of a snoop about for info on CF, and it isn't obvious to me how I could use one on the Acer. I'm now thinking I should just install Linux on the IDE drive and see how it performs. Cheers, -- Regards, Terry Duell

I had a bit of a snoop about for info on CF, and it isn't obvious to me how I could use one on the Acer.
You used to be able to get IDE<->CF adapters that were the size of a 2.5" drive. I assume they are still around but maybe not so easy to get these days.
I'm now thinking I should just install Linux on the IDE drive and see how it performs.
That's probably a better starting point. What are the specs of your inherited laptop? I have an inherited HP IDE-era laptop that still performs pretty well. The Ethernet is fried on it, and the IDE disk sometimes fails to detect on boot. 1680x1050 resolution though - it would have been a pretty expensive unit when it was new. James

Hello James, On Sun, 04 Nov 2012 16:25:06 +1100, James Harper <james.harper@bendigoit.com.au> wrote:
I'm now thinking I should just install Linux on the IDE drive and see how it performs.
That's probably a better starting point. What are the specs of your inherited laptop? I have an inherited HP IDE-era laptop that still performs pretty well. The Ethernet is fried on it, and the IDE disk sometimes fails to detect on boot. 1680x1050 resolution though - it would have been a pretty expensive unit when it was new.
My Acer has a Core 2 Duo T5600 (1.83GHz), 4GB RAM, 120GB HDD, Nvidia GeForce 7300 and a 1280x800 display. It's not a bad little unit, a bit slow starting Win XP, but will probably be better with a suitable linux. I'm still thinking about a distro, and have recently installed Mint 13 and Manjaro in Virtualbox on my main system, to get a feel. Cheers, -- Regards, Terry Duell

Terry Duell <tduell@iinet.net.au> wrote:
It's not a bad little unit, a bit slow starting Win XP, but will probably be better with a suitable linux. I'm still thinking about a distro, and have recently installed Mint 13 and Manjaro in Virtualbox on my main system, to get a feel.
I've read good reports about Arch Linux, but I haven't tried it. If you're concerned about performance, choose a distribution that lets you install a minimal operating system and then add packages to it. For example, Debian can be installed this way.
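As a rough sketch of that approach (the package names here are only examples, not recommendations from the thread): do a Debian netinst install with no tasks selected, then pull in just what you want, e.g.
apt-get update
apt-get install xorg lxde-core iceweasel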

Hello Jason, On Sun, 04 Nov 2012 17:12:43 +1100, Jason White <jason@jasonjgw.net> wrote:
Terry Duell <tduell@iinet.net.au> wrote:
It's not a bad little unit, a bit slow starting Win XP, but will probably be better with a suitable linux. I'm still thinking about a distro, and have recently installed Mint 13 and Manjaro in Virtualbox on my main system, to get a feel.
I've read good reports about Arch Linux, but I haven't tried it.
If you're concerned about performance, choose a distribution that lets you install a minimal operating system and then add packages to it. For example, Debian can be installed this way.
Yes, that would be a good approach. I have been looking at Mint and Mint-Debian, but really haven't spent enough time with any of them thus far. Cheers, -- Regards, Terry Duell

On Mon, Nov 5, 2012 at 8:44 AM, Terry Duell <tduell@iinet.net.au> wrote:
If you're concerned about performance, choose a distribution that lets you install a minimal operating system and then add packages to it. For example, Debian can be installed this way.
Yes, that would be a good approach. I have been looking at Mint and Mint-Debian, but really haven't spent enough time with any of them thus far.
I have a Panasonic Toughbook I use for Ham radio in the field. It has a 1.4GHz processor, 1.5GB of RAM and a 64GB IDE SSD. I am running Xubuntu on it and it runs quite fast, certainly much smoother than the XP that came on it when I bought it. I can't say how much of a difference the SSD made as I swapped it in at the same time I installed Linux. Xubuntu isn't as light as doing a minimal install of Debian, but it is much lighter than Mint, and being Ubuntu based it all just worked for me, including the touch screen, although I did have to make the touch screen calibration persistent across reboots. YMMV Mark -- Mark "Pockets" Clohesy Mob Phone: (+61) 406 417 877 Email: hiddensoul@twistedsouls.com G-Talk: mark.clohesy@gmail.com - GNU/Linux.. Linux Counter #457297 "I would love to change the world, but they won't give me the source code" "Linux is user friendly...its just selective about who its friends are" "Never underestimate the bandwidth of a V8 station wagon full of tapes hurtling down the highway" "The difference between e-mail and regular mail is that computers handle e-mail, and computers never decide to come to work one day and shoot all the other computers"
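Mark doesn't say how he made the calibration persistent; one common way on an X11/evdev setup, assuming the xinput_calibrator tool (the device name and numbers below are placeholders), is to save the snippet it prints:
# run xinput_calibrator, then copy its suggested block into
# /etc/X11/xorg.conf.d/99-calibration.conf, e.g.:
Section "InputClass"
    Identifier   "calibration"
    MatchProduct "Example Touchscreen"
    Option       "Calibration" "123 3945 245 3894"
EndSection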

Terry Duell wrote:
On Sun, 04 Nov 2012 17:12:43 +1100, Jason White<jason@jasonjgw.net> wrote: [snip]
I've read good reports about Arch Linux, but I haven't tried it.
If you're concerned about performance, choose a distribution that lets you install a minimal operating system and then add packages to it. For example, Debian can be installed this way.
Yes, that would be a good approach. I have been looking at Mint and Mint-Debian, but really haven't spent enough time with any of them thus far.
I run mint9-lxde on an old M20 and it runs OK on that; GNOME is a bit heavy. ;-) But on the old BenQ I rolled my own Debian install using Blackbox as the WM, and on the CF it is lightning fast, even on that old platform. The only problem is needing to learn/remember some less-often-used commands. cheers.

On 04/11/12 17:12, Jason White wrote:
I've read good reports about Arch Linux, but I haven't tried it.
I ran Arch on a 9 year old laptop with 512MB RAM when I found that none of the *buntu install CDs would boot on it. It's firmly in the "configure it yourself" camp, which I liked, and I ran it with LXDE for some time until the hardware started to become unstable. cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Terry Duell wrote:
I had a bit of a snoop about for info on CF, and it isn't obvious to me how I could use one on the Acer.
Oh sorry, I had left that detail out; you just need a CF to IDE adapter. As CF is pin-for-pin compatible with IDE, it is just a header conversion board, quite cheap on eBay.
I'm now thinking I should just install Linux on the IDE drive and see how it performs.
You'll find that quite acceptable, but CF/SSD is a lot faster to boot; as for running afterwards, there's really not much difference unless you are doing a lot of disk-intensive work, in which case an SSD is still a bit of a worry anyway.
cheers.

On 03/11/12 13:15, Terry Duell wrote:
Hello All, To expand on the subject line, I have inherited an Acer laptop on which I intend to install Linux. I think the machine spec should be OK for most linux distros...Core 2 duo T5600, 4GB, 120GB, Nvidia graphics. I am considering replacing the 120GB drive with an SSD, as it should smarten up the performance a tad and probably improve battery life a bit as well. Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
If you use a modern distro, it'll probably set everything up correctly, but things to watch out for:
* set the 'discard' mount flag if you're using ext4, or 'ssd' on btrfs. (This tells the SSD when you've deleted stuff to free up space, and allows it to work a lot more efficiently)
* set 'relatime' mount flag as well; it helps performance on everything, and in ssd cases, avoids a bunch of unnecessary writes.
* Align partitions and filesystems on multiples of the ssd erase block.. or if you don't know what that is, just make sure it's a multiple of a number like 16k or something.. just not the default of 512. There are some tools around that'll benchmark different block sizes to work out the appropriate value, too. However note that this issue only seemed to be a *big* issue for earlier generation SSDs.. the current ones seem to cope pretty well regardless.
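To make those flags concrete, an illustrative ext4 fstab line and an alignment check (the UUID and device are placeholders; as noted above, alignment matters mostly on older SSDs):
# /etc/fstab -- TRIM via 'discard' plus reduced atime writes
UUID=xxxx-xxxx  /  ext4  defaults,noatime,discard  0  1
# check that the first partition sits on the drive's optimal boundary:
parted /dev/sda align-check opt 1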

Hello Toby, On Mon, 05 Nov 2012 10:07:43 +1100, Toby Corkindale <toby.corkindale@strategicdata.com.au> wrote:
On 03/11/12 13:15, Terry Duell wrote:
[snip]
Are there issues with running Linux from an SSD, that one needs to take into account when first installing/partitioning/whatever?
If you use a modern distro, it'll probably set everything up correctly, but things to watch out for:
* set the 'discard' mount flag if you're using ext4, or 'ssd' on btrfs. (This tells the SSD when you've deleted stuff to free up space, and allows it to work a lot more efficiently) * set 'relatime' mount flag as well; it helps performance on
[snip] Thanks for your help, but the SSD is now off the agenda, having found that the laptop has an IDE interface. Cheers, -- Regards, Terry Duell

Terry Duell wrote:
[snip] Thanks for your help, but the SSD is now off the agenda, having found that the laptop has an IDE interface. Cheers,
Pity it hasn't got a spare mini-PCIe slot, or you could try something like http://ap.apacer.com/products/mPDM-M which seems to be about 100GB. regards Rohan McLeod

Toby Corkindale <toby.corkindale@strategicdata.com.au> wrote:
* set the 'discard' mount flag if you're using ext4, or 'ssd' on btrfs. (This tells the SSD when you've deleted stuff to free up space, and allows it to work a lot more efficiently) * set 'relatime' mount flag as well; it helps performance on everything, and in ssd cases, avoids a bunch of unnecessary writes. * Align partitions and filesystems on multiples of the ssd erase block.. or if you don't know what that is, just make sure it's a multiple of a number like 16k or something.. just not the default of 512. There are some tools around that'll benchmark different block sizes to work out the appropriate value, too. However note that this issue only seemed to be a *big* issue for earlier generation SSDs.. the current ones seem to cope pretty well regardless.
That's a very helpful summary. Also, be sure that /tmp is a tmpfs file system rather than a directory on the ssd; this will reduce write activity. I think most distributions default to tmpfs already now.
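A minimal fstab line for that, for reference (the size= value is just an example; left out, tmpfs defaults to half of RAM):
tmpfs  /tmp  tmpfs  defaults,noatime,size=1G  0  0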

On 05/11/12 10:07, Toby Corkindale wrote:
* set the 'discard' mount flag if you're using ext4, or 'ssd' on btrfs. (This tells the SSD when you've deleted stuff to free up space, and allows it to work a lot more efficiently)
That option is "discard" for btrfs too, the "ssd" option (which is automatically set if the kernel detects the disk as being non-rotational) determines allocation and commit log strategies. The "discard" option is not set by default as it can result in poor performance on some SSDs.
* set 'relatime' mount flag as well; it helps performance on everything, and in ssd cases, avoids a bunch of unnecessary writes.
That's the default on any vaguely modern kernel (set as default in 2.6.30 in 2009). :-) cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
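For the curious, the non-rotational flag the kernel uses for this can be checked directly (sda is a placeholder for whatever the SSD shows up as):
# 0 = non-rotational (SSD), 1 = rotational disk
cat /sys/block/sda/queue/rotational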

Hi Chris, all
* set 'relatime' mount flag as well; it helps performance on everything, and in ssd cases, avoids a bunch of unnecessary writes.
That's the default on any vaguely modern kernel (set as default in 2.6.30 in 2009). :-)
"With this option enabled, atime data is written to the disk only if the file has been modified since the atime data was last updated (mtime), or if the file was last accessed more than a certain length of time ago (by default, one day)." Unless you want to do auditing of read-access, atime is just not relevant. Incremental backups work on mtime not atime. So I use noatime rather than relatime on my SSD laptops, just as we do on database partitions of servers since obviously there we fully expect there to be lots of reads and tracking that info is nonsense. Cheers, Arjen. -- Exec.Director @ Open Query (http://openquery.com) MySQL services Sane business strategy explorations at http://upstarta.com.au Personal blog at http://lentz.com.au/blog/

Arjen Lentz <arjen@lentz.com.au> wrote:
Unless you want to do auditing of read-access, atime is just not relevant. Incremental backups work on mtime not atime.
From memory, Mutt is one application that relies on atime (in that case, to determine which mail folders have been read since they were last updated). This still works if you use relatime though.

Jason White <jason@jasonjgw.net> writes:
Arjen Lentz <arjen@lentz.com.au> wrote:
Unless you want to do auditing of read-access, atime is just not relevant. Incremental backups work on mtime not atime.
From memory, Mutt is one application that relies on atime (in that case, to determine which mail folders have been read since they were last updated). This still works if you use relatime though.
If you're using mbox, that is. Obviously mutt doesn't need it for IMAP, and (I think) not for maildir.

On 05/11/12 13:24, Arjen Lentz wrote:
Hi Chris, all
* set 'relatime' mount flag as well; it helps performance on everything, and in ssd cases, avoids a bunch of unnecessary writes.
That's the default on any vaguely modern kernel (set as default in 2.6.30 in 2009). :-)
"With this option enabled, atime data is written to the disk only if the file has been modified since the atime data was last updated (mtime), or if the file was last accessed more than a certain length of time ago (by default, one day)."
Unless you want to do auditing of read-access, atime is just not relevant. Incremental backups work on mtime not atime.
So I use noatime rather than relatime on my SSD laptops, just as we do on database partitions of servers since obviously there we fully expect there to be lots of reads and tracking that info is nonsense.
Agreed, I use noatime on my db volumes and other systems where I know it'll be fine, but if I'm making blanket recommendations to strangers, relatime is safer.

On 05/11/12 13:24, Arjen Lentz wrote:
Unless you want to do auditing of read-access, atime is just not relevant.
Not having it updated can break some user space apps like Mutt, so relatime was seen to be a way of reducing the impact of atime (and stopping laptop drives spinning up unnecessarily) without breaking those apps. cheers, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Chris Samuel <chris@csamuel.org> writes:
On 05/11/12 10:07, Toby Corkindale wrote:
* set the 'discard' mount flag if you're using ext4, or 'ssd' on btrfs. (This tells the SSD when you've deleted stuff to free up space, and allows it to work a lot more efficiently)
That option is "discard" for btrfs too, the "ssd" option (which is automatically set if the kernel detects the disk as being non-rotational) determines allocation and commit log strategies.
Also ssd_sparse IIRC, for a different flavour of FTL. It's not very clear to me how to know which of the two to use.

On 09/11/12 11:07, Trent W. Buck wrote:
Also ssd_sparse IIRC, for a different flavour of FTL. It's not very clear to me how to know which of the two to use.
Aha, ssd_spread, looks like it's meant for lower grade SSD's.

samuel@eris:~/Downloads/linux$ git log -Sssd_spread fs/btrfs/
commit 451d7585a8bb1b9bec0d676ce3dece1923164e55
Author: Chris Mason <chris.mason@oracle.com>
Date:   Tue Jun 9 20:28:34 2009 -0400

    Btrfs: add mount -o ssd_spread to spread allocations out

    Some SSDs perform best when reusing block numbers often, while others perform much better when clustering strictly allocates big chunks of unused space.

    The default mount -o ssd will find rough groupings of blocks where there are a bunch of free blocks that might have some allocated blocks mixed in.

    mount -o ssd_spread will make sure there are no allocated blocks mixed in. It should perform better on lower end SSDs.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

-- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Chris Samuel <chris@csamuel.org> writes:
On 09/11/12 11:07, Trent W. Buck wrote:
Also ssd_sparse IIRC, for a different flavour of FTL. It's not very clear to me how to know which of the two to use.
Aha, ssd_spread, looks like it's meant for lower grade SSD's.
What does that actually mean in practice, though? Once upon a time, it was intel, or <everything else>. Even if you can determine that your OCZ Vertex NN runs a Sandforce MM chipset, which do you use? Maybe the rule of thumb is to use ssd for SSDs, and ssd_spread for USB keys and SD/MMC cards?
samuel@eris:~/Downloads/linux$ git log -Sssd_spread fs/btrfs/ commit 451d7585a8bb1b9bec0d676ce3dece1923164e55 Author: Chris Mason <chris.mason@oracle.com> Date: Tue Jun 9 20:28:34 2009 -0400
Btrfs: add mount -o ssd_spread to spread allocations out
Some SSDs perform best when reusing block numbers often, while others perform much better when clustering strictly allocates big chunks of unused space.
The default mount -o ssd will find rough groupings of blocks where there are a bunch of free blocks that might have some allocated blocks mixed in.
mount -o ssd_spread will make sure there are no allocated blocks mixed in. It should perform better on lower end SSDs.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
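For completeness, both are ordinary btrfs mount options, set at mount time or in fstab; an illustrative use (device and mount point are placeholders):
mount -o noatime,ssd_spread /dev/sdb1 /mnt/flash
# or rely on the default heuristic for better SSDs:
# mount -o noatime,ssd /dev/sdb1 /mnt/flash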

Hi all, just found this here: http://wiki.freebsd.org/WhatsNew/FreeBSD10#Storage_subsystems.27_improvement... "As a world's first, FreeBSD now has TRIM support in ZFS!" Interesting not just for the technical fact (TRIM is an SSD-supporting feature) but also for the fact that ZFS is getting new features after the "OpenSolaris source" was closed. Another thing that came to mind when reading the e-mails over the last few days: the "80% feature" (performance degrades after 80% of the space is filled) is due to the fact that the block selection changes at that point, from "first fit" to "best fit" (and that seems to suck..). Under Solaris there is a "magic hack" (I don't find the URL at the moment) to delay the change of policy. I don't know of any way to tweak it under FreeBSD (or Linux). It looks as if btrfs avoids that problem with a different block allocation design? Regards Peter
participants (16)
- Andrew McGlashan
- Arjen Lentz
- Avi Miller
- Bruce Wang
- Chris Samuel
- Hiddensoul (Mark Clohesy)
- James Harper
- Jason White
- Paul Dwerryhouse
- Peter Ross
- Robert Moonen
- Rohan McLeod
- Russell Coker
- Terry Duell
- Toby Corkindale
- trentbuck@gmail.com