Woeful Thunderbird Performance

Hi,

Hope you're all keeping cool on this rather oppressive day. I have an i5-2450M CPU @ 2.50GHz with 4GB of memory running Linux Mint 17.

As the title suggests, I'm finding Thunderbird to be very sluggish, so I would welcome suggestions to help boost performance. (Actually, Evolution was better in this regard.) I have moved about six years' worth of emails into the archive directory -- I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters. Alternative mail reader suggestions are welcome. I'm using Gmail (imap.googlemail.com); should I look at hosting my own email server? DigitalOcean?

On a related note: is there a searchable online archive of luv-main posts? The only thing I can find is this:

http://lists.luv.asn.au/pipermail/luv-main/

which doesn't seem very satisfactory. I was storing about six years' worth of luv postings but I deleted them in a vain attempt to rectify the above problem.

Just to say, I too have some issues with Thunderbird. I back up my working hard disks regularly, as I have quite a few HDDs now. I wonder if this topic, and other 'standard' applications and their idiosyncrasies, could be the topic of a meeting.

Since my energetic restart into developing apps (have you seen Orage?) I have had to keep track of versions of apps etc. on particular distros. David, I wonder how best to share tips. Searching the internet might be one way, but not necessarily the best.

Thinking about a good way forward,

Mike

On 03/01/2015 2:09 PM, "David Zuccaro" <david.zuccaro@optusnet.com.au> wrote:
Hi,
Hope you're all keeping cool on this rather oppressive day.
I have an i5-2450M CPU @ 2.50GHz with 4Gb of memory running linux mint 17.
As the title suggests I'm finding Thunderbird to be very sluggish so I
would welcome suggestions to help boost performance. (actually evolution was better in this regard)
I have moved about 6 years worth of emails into the archive directory --
I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters.
Alternative mail reader suggestions are welcome. I'm using gmail (
imap.googlemail.com) should I look at hosting my own email server? Digital ocean?
On a related note: is there a searchable online archive of luv-main posts?
The only thing I can find is this:
http://lists.luv.asn.au/pipermail/luv-main/
which doesn't seem very satisfactory. I was storing about 6 years worth
of luv postings but I deleted them in a vain attempt to rectify the above problem.
_______________________________________________ luv-main mailing list luv-main@luv.asn.au http://lists.luv.asn.au/listinfo/luv-main

Which particular functionality of Thunderbird is slow? Are you POPing or IMAPing? Is this what is slow? Have you updated to the latest release? Some releases had a few issues that were fixed later.

Daniel.

On Sat, 03 Jan 2015 14:09:36 +1100 David Zuccaro <david.zuccaro@optusnet.com.au> wrote:
Hi,
Hope you're all keeping cool on this rather oppressive day.
I have an i5-2450M CPU @ 2.50GHz with 4Gb of memory running linux mint 17.
As the title suggests I'm finding Thunderbird to be very sluggish so I would welcome suggestions to help boost performance. (actually evolution was better in this regard)
I have moved about 6 years worth of emails into the archive directory -- I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters.
Alternative mail reader suggestions are welcome. I'm using gmail (imap.googlemail.com) should I look at hosting my own email server? Digital ocean?
On a related note: is there a searchable online archive of luv-main posts?
The only thing I can find is this:
http://lists.luv.asn.au/pipermail/luv-main/
which doesn't seem very satisfactory. I was storing about 6 years worth of luv postings but I deleted them in a vain attempt to rectify the above problem.
-- dan062 <dan062@yahoo.com.au>

On 03/01/15 15:35, Dan062 wrote:
Which particular functionality of Thunderbird is slow?
Everything -- clicking on directories, for example. Funny thing is that it seems OK at the moment; other times it is unusable. I have been monitoring CPU usage in top and there is nothing untoward going on there.
Are you POPing or IMAPing?
IMAPing.
Is this what is slow?
Maybe. Should I use POP or just completely stop using Gmail? I thought IMAP was supposed to be better?
Have you updated to the latest release? Some releases had a few issues that were fixed later.
I'm using Mint 17. I guess I could move to 17.1, but there doesn't seem to be anything mentioned about Thunderbird in "New Features".

On Sat, 3 Jan 2015 04:32:54 PM David Zuccaro wrote:
Everything -- clicking on directories, for example. Funny thing is that it seems OK at the moment; other times it is unusable. I have been monitoring CPU usage in top and there is nothing untoward going on there.
I'd suggest it's worth monitoring with latencytop to see if anything shows up there. -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On Sat, 3 Jan 2015, David Zuccaro <david.zuccaro@optusnet.com.au> wrote:
I have an i5-2450M CPU @ 2.50GHz with 4Gb of memory running linux mint 17.
As the title suggests I'm finding Thunderbird to be very sluggish so I would welcome suggestions to help boost performance. (actually evolution was better in this regard)
I've just upgraded my desktop PC to 8G of RAM. Even with swap on SSD, 4G wasn't enough when I had some long-lived Chromium or Chrome tabs. It seems that 4G isn't regarded as a lot of RAM nowadays, so lots of apps are getting wasteful. More RAM might help.

Also, SSD is really cheap; $120 to add an SSD to a desktop PC will dramatically improve performance. Use SSD for root, /home, and swap, and then use the hard drive for the big stuff.
I have moved about 6 years worth of emails into the archive directory -- I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters.
Alternative mail reader suggestions are welcome. I'm using gmail (imap.googlemail.com) should I look at hosting my own email server? Digital ocean?
It might be worth trying to put only a few years' email in each folder. For Kmail I found that keeping folders to about 15,000 messages improved performance a lot, both directly and indirectly, because searches became faster as I generally knew which year I wanted to search in.

For storage of old mail I currently use an IMAP server on my LAN with caching IMAP (so my laptop can read it all when I'm travelling) and only have the more recent mail in my regular IMAP server. This means that my phone can't access the tens of thousands of ancient messages, which is more of a feature than a bug!

I think that everyone here should host their own mail server.

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On 3 January 2015 at 21:24, Russell Coker <russell@coker.com.au> wrote:
I think that everyone here should host their own mail server.
You say that, yet apparently even you, an expert, can't get it right. (All mail from you is still ending up in my spam filter because the DKIM and DMARC checks fail.)

Given the complexities around SPF, DKIM, DMARC, DNSSEC, DomainKeys, and in the other direction, relay control, Bayesian filters, storage, searching and backup, maybe it's best that you don't recommend people host their own mail server? There's a lot of time to sink into Doing It Right that could be better spent on other problems.

On the other hand, yeah, sure, it's a good way to learn a bit about some of the underlying fundamentals and flaws of SMTP.

IMHO, Toby

On Mon, Jan 12, 2015 at 3:40 AM, Toby Corkindale <toby@dryft.net> wrote:
On 3 January 2015 at 21:24, Russell Coker <russell@coker.com.au> wrote:
I think that everyone here should host their own mail server.
You say that, yet apparently even you, an expert, can't get it right. (All mail from you is still ending up in my spam filter because the DKIM and DMARC checks fail.)
[...] +1 -- it's a bit tiresome having to check spam for the threads to make sense.

On 12/01/15 18:24, Anders Holmstrom wrote:
On Mon, Jan 12, 2015 at 3:40 AM, Toby Corkindale <toby@dryft.net> wrote:
On 3 January 2015 at 21:24, Russell Coker <russell@coker.com.au> wrote:
I think that everyone here should host their own mail server.
You say that, yet apparently even you, an expert, can't get it right. (All mail from you is still ending up in my spam filter because the DKIM and DMARC checks fail.)
[...] +1 -- it's a bit tiresome having to check spam for the threads to make sense.
Curiously, I've got it the other way around. I wasn't looking through luv mail, but this was in my junk-review folder due to an unclear classification.

Working with clients like non-profits, sending quite a lot of mail, I find I'm recommending services like Sendgrid and Mandrill for managing outgoing mail. I note that Mandrill runs a free service for up to 12K emails a month, which would do just fine for most people running personal mail servers.

Regards,
Andrew McNaughton

On 03/01/15 13:09, David Zuccaro wrote:
I have an i5-2450M CPU @ 2.50GHz with 4Gb of memory running linux mint 17.
As the title suggests I'm finding Thunderbird to be very sluggish so I would welcome suggestions to help boost performance. (actually evolution was better in this regard)
I have moved about 6 years worth of emails into the archive directory -- I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters.
I've used TB extensively over the years and supported it in the workplace. One of the issues that can happen is that if you are using IMAP and you are moving large datasets around, then the index files need to be rebuilt. Often this is a problem because you have the folders opened on other devices / by other users.

The net effect is that each starts polling and kills the other's indexing process and starts its own. Because you have such an extensive "backup" of emails, the process can also take too long and time out (or you restart TB / the PC to "fix" things).

Mail clients are not databases, yet many people (including me) treat them as such. When they stop working like mail clients, in my experience it is because you have made a change and it takes time to fake being a database again as it pulls itself together.

I recommend limiting your mailbox quota and using your CRM to pull in your email for searching / attachments, so you have the data if you need it.

Cheers
P

Hi all,

A couple of years ago I =finally= discovered the most likely cause of this. And you will notice that it happens at the start of every new year for some of you. It does for me. It also happens if you export/import large email collections in Thunderbird.

Soon after I recover from my New Year celebrations, I "tidy up" my Thunderbird email folders, e.g. create a 2014 Sent folder and a 2014 InBox folder, etc. Then I move every InBox and Sent message from the previous year into their new folders. (Note: I don't formally archive. I like them immediately handy.) If you're like me, you save everything, so we're talking about thousands of emails being moved at once.

This automatically triggers a thorough (?) re-indexing of all messages in all folders. On my system it took nearly 100% of CPU utilisation, and it ran for many hours. Shutting down Thunderbird and restarting only made the process begin again from scratch.

To make matters worse: here in Australia early January happens to have the hottest days of the year, and my CPU temperature sensors were scaring me quite a bit. I never found a way to "throttle down" a process to ease demand on the CPU. So I found myself shutting down my computer until sunset; putting it in the coolest room of the house; taking the cover off; pointing a room fan at the motherboard; and praying I wouldn't cook it overnight.

Good luck.

Carl
Bayswater, Victoria

On 05/01/15 20:49, Piers Rowan wrote:
On 03/01/15 13:09, David Zuccaro wrote:
I have an i5-2450M CPU @ 2.50GHz with 4Gb of memory running linux mint 17.
As the title suggests I'm finding Thunderbird to be very sluggish so I would welcome suggestions to help boost performance. (actually evolution was better in this regard)
I have moved about 6 years worth of emails into the archive directory -- I presume exporting these out of Thunderbird into a searchable format (which one?) would help matters.
I've used TB extensively over the years and supported it in the workplace. One of the issues that can happen is that if you are using IMAP and you are moving large datasets around, then the index files need to be rebuilt. Often this is a problem because you have the folders opened on other devices / by other users.
The net effect is that each starts polling and kills the other's indexing process and starts its own. Because you have such an extensive "backup" of emails, the process can also take too long and time out (or you restart TB / the PC to "fix" things).
Mail clients are not databases, yet many people (including me) treat them as such. When they stop working like mail clients, in my experience it is because you have made a change and it takes time to fake being a database again as it pulls itself together.
I recommend limiting your mailbox quota and using your CRM to pull in your email for searching / attachments so you have the data if you need it.
Cheers
P

On Tue, Jan 06, 2015 at 10:07:59AM +1100, Carl Turney wrote:
I never found a way to "throttle down" a process to ease demand on the CPU.
the 'renice' command can be used to change the priority of a process. e.g. to make a process use only 'idle' cpu time, you need to find its process id (PID) and then:

renice -n 19 $PID

another command, 'ionice', can be used to set the I/O priority of a process (but this won't help this thunderbird issue as the imapd process is running on the server, not the client).

see the man pages for renice and ionice for details.

craig

-- craig sanders <cas@taz.net.au>

On Tue, 6 Jan 2015, Craig Sanders <cas@taz.net.au> wrote:
On Tue, Jan 06, 2015 at 10:07:59AM +1100, Carl Turney wrote:
I never found a way to "throttle down" a process to ease demand on the CPU.
the 'renice' command can be used to change the priority of a process.
e.g. to make a process use only 'idle' cpu time, you need to find its process id (PID) and then:
renice -n 19 $PID
That will work if the issue is multiple processes competing for CPU time. But for Carl's case of a single process using 100% CPU time and overheating the machine it won't work. kill -stop will suspend a process and kill -cont will continue it. It wouldn't be difficult to write a script that runs kill -stop and kill -cont in a loop. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
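A minimal sketch of such a script (untested here; the script name and the one-second intervals are arbitrary -- adjust the duty cycle to taste, and note it only works on processes you own):

```shell
#!/bin/sh
# throttle.sh -- crude duty-cycle throttle for a CPU-bound process.
# Usage: throttle.sh <pid> [run_seconds] [pause_seconds]
pid=$1
run=${2:-1}
pause=${3:-1}

# On Ctrl-C or kill, make sure the target is left running, not stopped.
trap 'kill -CONT "$pid" 2>/dev/null; exit' INT TERM

# Loop while the target process still exists (kill -0 just probes it).
while kill -0 "$pid" 2>/dev/null; do
    kill -STOP "$pid"    # suspend: CPU use (and heat) drops to zero
    sleep "$pause"
    kill -CONT "$pid"    # resume for the "run" part of the duty cycle
    sleep "$run"
done
```

With equal run and pause times this halves the average CPU load of the target without touching anything else on the system.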

On 06/01/15 11:09, Russell Coker wrote:
kill -stop will suspend a process and kill -cont will continue it. It wouldn't be difficult to write a script that runs kill -stop and kill -cont in a loop.
Ah. I grasp the underlying concept, and know a bit about ps and kill. If only I weren't so very damned rusty on other Linux bash commands, and writing scripts with loops and conditionals. :-) Carl

On 6/01/2015 11:39 AM, Carl Turney wrote:
On 06/01/15 11:09, Russell Coker wrote:
kill -stop will suspend a process and kill -cont will continue it. It wouldn't be difficult to write a script that runs kill -stop and kill -cont in a loop.
Ah. I grasp the underlying concept, and know a bit about ps and kill.
If only I weren't so very damned rusty on other Linux bash commands, and writing scripts with loops and conditionals.
It's really quite simple and a great solution.

ps -fe | grep someprocess

You'll get the process id (PID) in the second column. Then you could set up a couple of aliases to make it simple.

alias stop_tb='kill -STOP nnnn'
alias start_tb='kill -CONT nnnn'

Now, so long as the PID doesn't change, you can simply run the commands quickly as desired.

stop_tb
start_tb

Cheers
A.

On 06/01/15 11:55, Andrew McGlashan wrote:
Now, so long as the PID doesn't change, then you can simply run the commands quickly as desired.
On Linux, you can also use "killall" instead of "kill", which allows you to send signals to processes by name instead of by PID. Hope that helps, Andrew

On Tue, 6 Jan 2015, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
alias stop_tb='kill -STOP nnnn'
alias start_tb='kill -CONT nnnn'
Now, so long as the PID doesn't change, then you can simply run the commands quickly as desired.
while true ; do
    stop_tb
    sleep 1
    start_tb
    sleep 1
done

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Tue, 6 Jan 2015 11:09:14 AM Russell Coker wrote:
But for Carl's case of a single process using 100% CPU time and overheating the machine it won't work.
This sounds like a job for control groups (cgroups) - you can use the cpu.shares setting to restrict how much of a share of CPUs processes in that cgroup can get. The Arch Linux documentation says: https://wiki.archlinux.org/index.php/cgroups # Similarly you can change the CPU priority ("shares") of this group. # By default all groups have 1024 shares. A group with 100 shares # will get a ~10% portion of the CPU time: Worth a shot! All the best, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
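For the record, a rough sketch of what that looks like with cgroup v1 (paths and controller layout vary by distro and need root; the group name "throttled" is arbitrary, and modern systems use cgroup v2, where cpu.weight and cpu.max replace cpu.shares):

```shell
# Create a cgroup under the cpu controller and give it ~10% of the
# default 1024 shares, then move the Thunderbird PID into it.
mkdir /sys/fs/cgroup/cpu/throttled
echo 100 > /sys/fs/cgroup/cpu/throttled/cpu.shares   # ~10% relative share
echo "$TB_PID" > /sys/fs/cgroup/cpu/throttled/tasks  # move the process in
```

One caveat: cpu.shares is only enforced under contention -- on an otherwise idle box the process still gets the whole CPU. For a hard cap (which is what you want for a heat problem) cpu.cfs_quota_us / cpu.cfs_period_us can be set instead.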

On Tue, 6 Jan 2015, Trent W. Buck wrote:
Carl Turney <carl@boms.com.au> writes:
I never found a way to "throttle down" a process to ease demand on the CPU.
nice ionice -c3 thunderbird
Set the CPU governor(s) to ondemand or conservative, and set "don't speed up for niced loads".
Untested; YMMV &c.
Look at /sys/class/thermal/cooling_device?/cur_state, max_state, etc. too. Some of the cooling devices are CPU governors that insert a fake idle process at various percentages of 100% "CPU". I.e., they steal cycles from a process that is trying to hog the CPU, but don't heat the processor in the process[1] (others turn on more fans, or maybe enter higher sleep states).

[1] World record for different meanings of "process" in a sentence?

-- Tim Connors
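Those sysfs entries can be dumped with a quick loop (standard sysfs paths, but which cooling devices exist -- fans, processor governors, etc. -- varies by machine):

```shell
# Print each thermal cooling device with its type and current/maximum
# throttle state; as root, cur_state can be written to force cooling.
for d in /sys/class/thermal/cooling_device*; do
    [ -d "$d" ] || continue    # skip if no cooling devices exist
    printf '%s (%s): cur=%s max=%s\n' "$d" "$(cat "$d/type")" \
        "$(cat "$d/cur_state")" "$(cat "$d/max_state")"
done
```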

On Mon, 5 Jan 2015, Piers Rowan <piers.rowan@recruitonline.com.au> wrote:
I've used TB extensively over the years and supported it in the workplace. One of the issues that can happen is that if you are using IMAP and you are moving large datasets around, then the index files need to be rebuilt. Often this is a problem because you have the folders opened on other devices / by other users.
The net effect is that each starts polling and kills the other's indexing process and starts its own. Because you have such an extensive "backup" of emails, the process can also take too long and time out (or you restart TB / the PC to "fix" things).
If the problem is indexing on the IMAP server then you could try configuring the IMAP server to use different indexing or to use a different IMAP server program. Dovecot seems to have some reasonable options for indexing that don't waste much CPU time. If the problem is indexing on the client side then how do you have multiple clients conflicting? -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On 06/01/15 10:11, Russell Coker wrote:
If the problem is indexing on the IMAP server then you could try configuring the IMAP server to use different indexing or to use a different IMAP server program. Dovecot seems to have some reasonable options for indexing that don't waste much CPU time.
This also depends on the nature of your back-end storage -- like NFS, for example, where you can get timeouts on read locks.

The issue isn't so much that dovecot can be configured to do x with y resources; it has to do with the fact that things are generally in production and have webmail, AV, mailman, SMTP, procmail, SA, etc. all working together in real time on a single node. Fixing/messing with one function of the server can impact the others.

My recommendations: don't keep endless email archives (none of us are really that important), and expect to wait when you start moving around relatively large chunks of plain text files that are split at arbitrary lines, aka mail spool files.

Cheers
P

On Tue, 6 Jan 2015, Piers Rowan <piers.rowan@recruitonline.com.au> wrote:
On 06/01/15 10:11, Russell Coker wrote:
If the problem is indexing on the IMAP server then you could try configuring the IMAP server to use different indexing or to use a different IMAP server program. Dovecot seems to have some reasonable options for indexing that don't waste much CPU time.
This also depends on the nature of your back-end storage -- like NFS, for example, where you can get timeouts on read locks.
If you want decent performance with IMAP then just don't use NFS. The write pattern of mail stores is a poor match for the way NFS works and the large number of files doesn't work too well for read caching.
The issue isn't so much the dovecot can be configured to do x with y resources it has to do with the fact that things are generally in production and have web mail, AV, mailman, SMTP, procmail, SA, etc all working together in real time on a single node. Fixing/messing with one function of the server can impact the others.
If you have a running mail server it's not too difficult to change the process that serves POP/IMAP without changing the rest. You can even have 2 programs serving POP/IMAP on different ports or IP addresses until you are happy that the new one does everything correctly.
My recommendations: don't keep endless email archives (none of us are really that important), and expect to wait when you start moving around relatively large chunks of plain text files that are split at arbitrary lines, aka mail spool files.
Apart from the recent versions of Kmail (which have some mandatory indexing features which seem useless and hurt performance) I've never had such problems. Use an SSD for your local mail store and you shouldn't have any problems. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

Russell Coker <russell@coker.com.au> writes:
If you want decent performance with IMAP then just don't use NFS. The write pattern of mail stores is a poor match for the way NFS works and the large number of files doesn't work too well for read caching.
Nitpick: NFS doesn't play nice with lots of small files, i.e. maildir and MH storage formats. An IMAP server need not use those formats (e.g. domino).

On Wed, Jan 07, 2015 at 11:07:52AM +1100, Trent W. Buck wrote:
Russell Coker <russell@coker.com.au> writes:
If you want decent performance with IMAP then just don't use NFS. The write pattern of mail stores is a poor match for the way NFS works and the large number of files doesn't work too well for read caching.
Nitpick: NFS doesn't play nice with lots of small files, i.e. maildir and MH storage formats. An IMAP server need not use those formats (e.g. domino).
indeed. no filesystem likes a lot of small files. the old adage "a filesystem is not a database" springs to mind...

somewhat tangential to slow t-bird, but this discussion got me thinking - what does a "small" file really mean? 1k? 10 bytes? 100MB? can it be defined?

(TL;DR - put your email on a SSD)

assuming that linux and glibc etc. are perfect and filesystems have zero overhead(1), a workable definition of a "small" block operation is one where the latency of the storage media dominates over the bandwidth. for a standard local disk (sata ~= sas ~= fc, whatever) that would be roughly when you can read a whole track in less than the time it takes to seek to it. a seek takes >= 1/120th of a second (assume 7200rpm). at typical disk speeds of 120 MB/s you can read 1MB in 1/120 of a second. so by this definition files < 1MB are "small" for normal storage media. any i/o operations of less than that size will tend to be iops dominated. note 1MB is not the threshold at which performance is _good_, it's only where it stops sucking terribly. something more like >> 10MB is a "good" file size for disk-based storage.

what about network filesystems? a few times the 30 to 50 micro-seconds of GigE network latency don't really affect the above calculation, so +/- software(2), "small" should be roughly the same over NFS, and indeed for any other network filesystem that is ultimately backed by spinning rust. NFS's design and software overhead undoubtedly slow things down somewhat (with locking etc, 10x slower wouldn't surprise me), but ultimately small files on spinning disks are just slow.

so how about SSDs? assuming 100k iops & 500 MB/s, that works out at a "small" file size of maybe ~5kB or a bit less, which is impressive. however small random write i/o is the absolute worst thing you can do to these things, so be sure to buy a good one (ie. intel only IMHO).

taking things to the extreme, how about i/o to dram? ie. tmpfs. server ram is maybe ~70ns latency(3) and ballpark 30GB/s, so surprisingly about 2KB is still a "small" file even for a blindingly fast filesystem completely in ram. however at this level, software overheads (glibc, VM, VFS, slow and simplistic filesystems, writes in multiple caching levels etc.) dominate over raw media speeds, so this isn't really a useful analysis for something so fast.

so is mbox or anything else much better than mh? probably not from this raw i/o perspective. the same small email messages have to go somewhere, even if with mbox most are appends and with mh most are new-file operations. the occasional large read-modify-write in the middle of an mbox is probably "free" though compared to the iops (except for its effect on flushing caches), so I doubt mbox would be much worse than mh. mh also tends towards zillions of files in a dir and some fs's don't deal with that well. XFS used to handle 100k+ files/dir a lot better than ext[34]. dunno if it still does. many files in a dir is not a great idea with any fs.

cheers,
robin

(1) the spherical cow approximation. soooo not true, but probably a good enough approximation in this case as long as your filesystem runs at less than a few GB/s.
(2) 'man nfs' tells me that nfs should negotiate upwards to 1MB rpc's these days, which sounds ok. if rpcs are still 4k or 8k like in the old days then it would definitely suck.
(3) http://sites.utexas.edu/jdm4372/files/2012/03/RangerLatencyChart.jpg
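Robin's thresholds are easy to re-check with shell arithmetic (the media specs below are the assumed figures from the post -- 7200rpm / 120 MB/s for disk, 100k IOPS / 500 MB/s for SSD -- not measurements):

```shell
# "Small" = bytes the media could have streamed in one latency period.

# Disk: 7200 rpm -> ~1/120 s per seek; ~120 MB/s sequential.
disk_small=$(( 120000000 / 120 ))     # = 1000000 bytes, ~1 MB

# SSD: ~100k IOPS -> 10 us per op; ~500 MB/s sequential.
ssd_small=$(( 500000000 / 100000 ))   # = 5000 bytes, ~5 kB

echo "disk: ${disk_small} bytes, ssd: ${ssd_small} bytes"
```

Both come out where the post says: about 1 MB for spinning disks, about 5 kB for SSD.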

On 13 January 2015 at 03:12, Robin Humble <rjh+luv@cita.utoronto.ca> wrote: [snip]
so how about SSDs? assuming 100k iops & 500 MB/s, that works out at a "small" file size of maybe ~5kB or a bit less, which is impressive. however small random write i/o is the absolute worst thing you can do to these things, so be sure to buy a good one (ie. intel only IMHO).
I've been kinda unimpressed by Intel's SSDs. The high-end ones do OK, but.. only OK. And they're expensive. Other brands have enormous random-write performance now, eg. 90,000 IOPS for 4kb random writes on the current-gen Samsungs, that are also cheap and have long durability.
mh also tends towards zillions of files in a dir and some fs's don't deal with that well. XFS used to handle 100k+ file/dir a lot better than ext[34]. dunno if it still does. many files in a dir is not a great idea with any fs.
ext3/4 added indexes to directories years and years ago, and so have been fine with very large directories since then. That said, it's still not a great idea, and I've seen some systems crippled due to having thousands upon thousands of files in single directories. (GlusterFS comes to mind as one that was particularly epic-fail)
the old adage "a filesystem is not a database" springs to mind...
Indeed, that's the crux of it. And this is why your sensible mail programs end up putting your emails into SQLite or LevelDB instead of antique mail files or maildirs. :) (I'd go leveldb over sqlite myself, but with Solr or similar on the side for searching, but we seem to mostly see SQLite used on its own so far) -T

On 13/01/2015 3:12 AM, Robin Humble wrote:
(TL;DR - put your email on a SSD)
Not with TB on a client machine.... defragmentation doesn't last with TB files; it's next to useless trying to do that. What that means in a nutshell is that far too many files get re-written over and over again, and that /could/ be too much for an SSD. Although SSDs are far more durable these days, subjecting one to TB is probably asking for trouble.

A.

On 14 January 2015 at 11:01, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 13/01/2015 3:12 AM, Robin Humble wrote:
(TL;DR - put your email on a SSD)
Not with TB on a client machine.... defragmentation doesn't last with TB files; it's next to useless trying to do that. What that means in a nutshell is that far too many files get re-written over and over again, and that /could/ be too much for an SSD. Although SSDs are far more durable these days, subjecting one to TB is probably asking for trouble.
Oh come on, that's FUD. SSDs are way more durable than that! http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking... Those guys have been hammering a bunch of SSDs *continuously* - like a burn-in test - FOR A YEAR AND A BIT SO FAR - and still haven't killed all the drives. As of the last count, they were up to 2 petabytes of writes on the Samsung 840 Pro and it's still going strong. The earliest drive to drop out was an Intel SSD, at 750 TB of writes, (but that's apparently because they're programmed to go into read-only mode at exactly that amount, rather than risk sudden failure or data loss later). My point here being -- even though it's more about number of block writes rather than total data written, you've still got a massive number of writes you can do to an SSD before it "wears out". Working a handful of mail files during the day is NOT going to be a realistic problem. Your drive will last a thousand years. Toby

On 14/01/2015 2:02 PM, Toby Corkindale wrote:
On 14 January 2015 at 11:01, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 13/01/2015 3:12 AM, Robin Humble wrote:
(TL;DR - put your email on a SSD)
Not with TB on a client machine.... file fragmentation doesn't last with TB files; it's next to useless trying to do that. What that means in a nutshell is that far too many files get re-written over and over again and that /could/ be too much for SSD, although SSD is far more durable these days -- subjecting it to TB though is probably asking for trouble.
Oh come on, that's FUD. SSDs are way more durable than that!
It's not FUD.
http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking...
Not news to me. Not much doubt about the benefits of SSD; that's why I italicised the /could/ word..... In any case, I have seen TB be absolutely awful with files on at least NTFS .... but I still use it. I won't use it with an SSD at this stage, but I might one day.

What I really want is for TB to store mail in Maildir folders, no matter what the client is -- I am positive that would be perfect no matter what the medium. There has been talk about TB having a Maildir storage option, but last I checked it wasn't progressing.

Cheers
A.

On Wed, 14 Jan 2015 03:49:01 PM Andrew McGlashan wrote:
In any case, I have seen TB be absolutely awful with files on at least NTFS .... but I still use it, I won't use it with an SSD though at this stage, but I might one day.
I've been running TB on 2 work laptops with SSDs for the last 5 years without any issues. All the best, Chris -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Maybe the question is "which distro, and what does that distro do about updates?" From my experience some are a pain with FF and TB updates. It would not be such a pain if the instructions (often generic) made sense and/or worked -- you can get several levels deep in satisfying requirements etc.

Mike

On 14/01/15 20:40, Chris Samuel wrote:
On Wed, 14 Jan 2015 03:49:01 PM Andrew McGlashan wrote:
In any case, I have seen TB be absolutely awful with files on at least NTFS .... but I still use it, I won't use it with an SSD though at this stage, but I might one day. I've been running TB on 2 work laptops with SSDs for the last 5 years without any issues.
All the best, Chris

On Wed, 14 Jan 2015 09:57:30 PM Mike wrote:
maybe the question is "which distro and what does that distro do about updates?"
I'm using Kubuntu 14.04.x which I believe tracks Firefox and Thunderbird updates pretty well. -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 14/01/2015 8:40 PM, Chris Samuel wrote:
On Wed, 14 Jan 2015 03:49:01 PM Andrew McGlashan wrote:
In any case, I have seen TB be absolutely awful with files on at least NTFS .... but I still use it, I won't use it with an SSD though at this stage, but I might one day.
I've been running TB on 2 work laptops with SSDs for the last 5 years without any issues.
I don't doubt that my use of TB is extreme and likely nothing like that of 99.99% of users. My own store of emails has many tens of thousands of messages in lots of folders -- some folders have well over 100,000 messages. Debian lists alone, in one aggregate folder, have nearly 250,000 messages, collected from all the groups I've subscribed to over the years. Each group then has its own folder as well (so lots of duplication). My Debian-user folder (its own folder) has 145,470 messages right now, dating back to 19 January 2009. There is no doubt that a lot of that bulk is now worthless, but TB is working; I do, understandably, have unique issues due to my extreme use of storage. If anyone has a greater store in their TB, I would be surprised. Cheers A.

On 15/01/2015 8:32 AM, Andrew McGlashan wrote:
If anyone has a greater store in their TB, I would be surprised.
On the server with ext4 fs, this is from my Maildir folder:

# time find . -type f -name '1*' | wc -l
1835027

real    1m26.659s
user    0m9.117s
sys     0m28.942s

So, heading towards 2 million emails. Cheers A.

Andrew McGlashan writes:
On 15/01/2015 8:32 AM, Andrew McGlashan wrote:
If anyone has a greater store in their TB, I would be surprised.
On the server with ext4 fs, this is from my Maildir folder:
# time find . -type f -name '1*' |wc -l 1835027
real 1m26.659s user 0m9.117s sys 0m28.942s
So, heading towards 2 million emails.
OK, gloves off ;-P

# /usr/bin/time --portability nice ionice -c3 find -O3 /var/mail/mailsec/shared-r[ow] -type f -name 1\* -printf x | wc -c
real 73.32
user 2.98
sys 10.03
2068625

That's an email corpus from Jan 2009 to present, stored in /<year>/<week-of-year> folders, accessed via prayer via dovecot 1.2.9. We used tbird in the past, but had problems bolting some unusual workflows onto it. Also at the time dovecot exported ~/Mail over IMAP, and tbird read IMAP and saved copies back into THE SAME ~/Mail over NFS... with hilarious consequences.

On 15/01/2015 11:25 AM, Trent W. Buck wrote:
Andrew McGlashan writes:
On 15/01/2015 8:32 AM, Andrew McGlashan wrote:
If anyone has a greater store in their TB, I would be surprised.
On the server with ext4 fs, this is my from my Maildir folder:
# time find . -type f -name '1*' |wc -l 1835027
real 1m26.659s user 0m9.117s sys 0m28.942s
So, heading towards 2 million emails.
OK, gloves off ;-P
Haha.
# /usr/bin/time --portability nice ionice -c3 find -O3 /var/mail/mailsec/shared-r[ow] -type f -name 1\* -printf x | wc -c
real 73.32
user 2.98
sys 10.03
Okay.... is that one person's archive, or more people? You win if it is one person; I win if it is more ;-) And I do delete log [email] files after 90 days by cron jobs, and enough spam rubbish gets deleted too. TB gave me serious trouble about a year ago; it was then that I stopped downloading every email to the client machine. Cheers A.
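The 90-day cron cleanup mentioned above can be a one-liner. A minimal sketch, assuming a Maildir-style layout and a hypothetical ~/Maildir/.Logs folder (both the path and the age are illustrative -- adjust to taste):

```shell
#!/bin/sh
# Delete messages (regular files) last modified more than 90 full days ago.
# -type f skips the cur/new/tmp directories themselves; -delete is GNU find.
find "$HOME/Maildir/.Logs" -type f -mtime +90 -delete
```

Dropped into /etc/cron.daily/ (or a crontab entry), this keeps the folder from growing without bound.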

Andrew McGlashan writes:
Debian lists alone in one aggregate folder has nearly 250,000 messages, those are collected from all groups I've subscribed over the years. Each group then has it's own folder as well (so lots of duplication).
I use & recommend gmane for public MLs. WFM, YMMV. (This is when I learn that tbird can't do NNTP...)

On 14 January 2015 at 15:49, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 14/01/2015 2:02 PM, Toby Corkindale wrote:
On 14 January 2015 at 11:01, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 13/01/2015 3:12 AM, Robin Humble wrote:
(TL;DR - put your email on a SSD)
Not with TB on a client machine.... file fragmentation doesn't last with TB files; it's next to useless trying to do that. What that means in a nutshell is that far too many files get re-written over and over again and that /could/ be too much for SSD, although SSD is far more durable these days -- subjecting it to TB though is probably asking for trouble.
Oh come on, that's FUD. SSDs are way more durable than that!
It's not FUD.
http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking...
Not news to me.
Not much doubt about the benefits of SSD, that's why I italicized the /could/ word.....
In any case, I have seen TB be absolutely awful with files on at least NTFS .... but I still use it, I won't use it with an SSD though at this stage, but I might one day.
You know, what you said doesn't really make sense. You acknowledge that SSDs offer significant benefits to your workload, and you seem to acknowledge that SSDs are actually completely fine, durability-wise, for your workload. So why are you still saying you wouldn't use one? Why are you not worried about your spinning rust drives wearing out? After all, those bearings and motors won't last forever, and all those rapid head seeks to deal with random I/O must put increased wear and tear on them. -T

On 16/01/2015 1:51 PM, Toby Corkindale wrote:
You know, what you said doesn't really make sense. You acknowledge that SSDs offer significant benefits to your workload, and you seem to acknowledge that SSDs are actually completely fine for durability of your workload.. So why are you still saying you wouldn't use one?
Well..... yes, SSDs are very good, but given what I've seen with TB, I believe it would still mean a premature end of life -- still, it might be good for quite a while.
Why are you not worried about your spinning rust drives wearing out? After all, those bearings and motors won't last forever, and all those rapid head seeks to deal with random i/o must put increased wear and tear on them.
The spinning rust, so to speak, doesn't have the write limitations that today's SSD units do. A /better/ class of SSDs will (if they come to market) self-heal the oxide layer that is damaged by regular writing. That technology super-heats the oxide layer to restore it to near-new condition [1] -- at least that's my rough understanding of the tech involved. If we get those drives, then it might be a case of replacing them only for greater capacity and/or even faster speeds, as it sounds like they'll last forever. Looks like "racetrack" [2] memory isn't going to happen, but this Crossbar RRAM tech [3] is not so far away. Besides, spinning disks still have far greater capacity today, with 8TB drives on the market now and 20TB ones coming later.... [1] http://www.extremetech.com/computing/142096-self-healing-self-heating-flash-... [2] http://en.wikipedia.org/wiki/Racetrack_memory [3] http://www.pcmag.com/article2/0,2817,2422734,00.asp Cheers A.

On 16 January 2015 at 14:25, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 16/01/2015 1:51 PM, Toby Corkindale wrote:
You know, what you said doesn't really make sense. You acknowledge that SSDs offer significant benefits to your workload, and you seem to acknowledge that SSDs are actually completely fine for durability of your workload.. So why are you still saying you wouldn't use one?
Well ..... yes SSD are very good, but given what I've seen with TB, I believe it would still be a premature end of life -- still it might be good for quite a while.
If by "quite a while" you mean "If I received a million emails every day, my drive would still possibly last more than the rest of my life" :) Anyway, never mind. It's your choice. I'm just trying to point out that once you check the numbers, this technology doesn't need to be scary to you. And once it's going, it's more reliable than your hard drives that can suffer stepper motor or bearing failures due to all their moving parts. -Toby
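For a sense of scale, here is a back-of-envelope endurance estimate. All the figures are assumptions chosen for illustration, not measurements: a mid-range drive rated around 150 TB written, 10,000 emails a day at roughly 75 KB each, and a pessimistic 10x write amplification from mbox folder rewrites.

```shell
# Rough SSD lifetime under an extreme mail load (all numbers are assumptions):
awk 'BEGIN {
  tbw   = 150e12            # assumed rated endurance: 150 TB written
  daily = 1e4 * 75e3 * 10   # 10k emails/day * 75 KB * 10x write amplification
  printf "%.1f years\n", tbw / daily / 365
}'
# prints "54.8 years" under these assumptions
```

Even with the deliberately pessimistic amplification factor, the write budget outlasts the drive's other components by a wide margin.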

My two-penneth on this: the reference to bearing failures might be OK -- however, I think most HDDs use voice coils for head movement these days. Mike (are we going to see you guys at the LUV BBQ?)

On 16/01/15 14:55, Toby Corkindale wrote:
On 16 January 2015 at 14:25, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 16/01/2015 1:51 PM, Toby Corkindale wrote:
You know, what you said doesn't really make sense. You acknowledge that SSDs offer significant benefits to your workload, and you seem to acknowledge that SSDs are actually completely fine for durability of your workload.. So why are you still saying you wouldn't use one?
Well ..... yes SSD are very good, but given what I've seen with TB, I believe it would still be a premature end of life -- still it might be good for quite a while.
If by "quite a while" you mean "If I received a million emails every day, my drive would still possibly last more than the rest of my life" :)
Anyway, never mind. It's your choice. I'm just trying to point out that once you check the numbers, this technology doesn't need to be scary to you. And once it's going, it's more reliable than your hard drives that can suffer stepper motor or bearing failures due to all their moving parts.
-Toby _______________________________________________ luv-main mailing list luv-main@luv.asn.au http://lists.luv.asn.au/listinfo/luv-main

On 06/01/15 18:02, Russell Coker wrote:
If you want decent performance with IMAP then just don't use NFS. The write pattern of mail stores is a poor match for the way NFS works and the large number of files doesn't work too well for read caching.
Wasn't playing nice with NFS the main motivation for Maildir's development? I think the issue was mostly with locking rather than performance, but still...
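Right -- Maildir sidesteps locking by construction: a message is written into tmp/ under a unique name and then renamed into new/, and rename is atomic within a filesystem, so readers never see a half-written message. A minimal delivery sketch, with a deliberately simplified unique name (real MDAs build it from time, PID, hostname, and extra entropy, per the Maildir spec):

```shell
#!/bin/sh
# Simplified Maildir delivery: write to tmp/, then atomically rename into new/.
maildir="$HOME/Maildir"
mkdir -p "$maildir/tmp" "$maildir/new" "$maildir/cur"
unique="$(date +%s).$$.localhost"    # simplified; see the Maildir spec for the real scheme
cat > "$maildir/tmp/$unique"         # message body read from stdin
mv "$maildir/tmp/$unique" "$maildir/new/$unique"
```

The unique-name convention, rather than any lock file, is what makes concurrent delivery safe -- which is exactly why it behaves better over NFS than mbox-style locking.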
The issue isn't so much that dovecot can be configured to do x with y resources; it's that things are generally in production, with webmail, AV, mailman, SMTP, procmail, SA, etc. all working together in real time on a single node. Fixing/messing with one function of the server can impact the others.
If you have a running mail server it's not too difficult to change the process that serves POP/IMAP without changing the rest.
You can even have 2 programs serving POP/IMAP on different ports or IP addresses until you are happy that the new one does everything correctly.
Hmm. You wouldn't expect other mail daemons to know about Dovecot's indexing systems. It might work correctly, but would at least have a performance impact.
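One common mitigation when the Maildir itself must live on NFS is to keep Dovecot's indexes on local disk and enable its NFS safeguards. A hedged sketch of dovecot.conf settings (the option names are from Dovecot's NFS documentation; the local index path is an assumption -- verify both against your Dovecot version):

```conf
# Maildir on NFS, indexes on fast local storage (path is illustrative):
mail_location = maildir:~/Maildir:INDEX=/var/lib/dovecot/indexes/%u

# NFS safeguards from Dovecot's documentation:
mmap_disable     = yes   # don't mmap index files over NFS
mail_nfs_index   = yes   # flush NFS attribute caches around index access
mail_nfs_storage = yes   # flush NFS attribute caches around mail access
```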

Maildir was designed to work over NFS, but I don't think anyone cared much about performance. Mail servers tend to be either small enough that performance is never a problem or big enough that NFS just isn't viable. Dovecot and other IMAP servers should work together; Dovecot is designed to regenerate indexes when necessary. I don't think performance would suffer much, given that a common use case is having a program other than Dovecot doing delivery. On January 12, 2015 10:35:19 PM GMT+13:00, Andrew McN <andrew@mcnaughty.com> wrote:
On 06/01/15 18:02, Russell Coker wrote:
If you want decent performance with IMAP then just don't use NFS. The write pattern of mail stores is a poor match for the way NFS works and the large number of files doesn't work too well for read caching.
Wasn't playing nice with NFS the main motivation for Maildir's development? I think the issue was mostly with locking rather than performance, but still...
The issue isn't so much that dovecot can be configured to do x with y resources; it's that things are generally in production, with webmail, AV, mailman, SMTP, procmail, SA, etc. all working together in real time on a single node. Fixing/messing with one function of the server can impact the others.
If you have a running mail server it's not too difficult to change the process that serves POP/IMAP without changing the rest.
You can even have 2 programs serving POP/IMAP on different ports or IP addresses until you are happy that the new one does everything correctly.
Hmm. You wouldn't expect other mail daemons to know about Dovecot's indexing systems. It might work correctly, but would at least have a performance impact.
-- Sent from my Samsung Galaxy Note 3 with K-9 Mail.
participants (17): Anders Holmstrom, Andrew McGlashan, Andrew McN, Andrew Pam, Carl Turney, Chris Samuel, Craig Sanders, Dan062, David Zuccaro, Mike, Mike Hewitt, Piers Rowan, Robin Humble, Russell Coker, Tim Connors, Toby Corkindale, trentbuck@gmail.com