
On 09.12.14 20:38, Peter Ross wrote:
> Well, there may be a way to fetch all your e-mail and store it in the
> database. But it seems to be so "disconnected" from the reading of
> e-mails (which can be everywhere, of course) that it feels rather clumsy.
>
> Any ideas to solve this in an elegant way?
It may be that elegance lies in the eye of the beer holder, but I handle
untold thousands of emails worth keeping in a very simple database. First
procmail distributes them to a mailbox per list, plus one for family, and
one default (a sample recipe is sketched at the end of this post). Those
which are not deleted are manually saved in e.g. one of 88 mailboxes for
categories of Vim stuff, and one of 419 categories of LinuxCNC stuff -
1167 mailboxes in total, at time of posting.

Reading a post is 99% of the sorting process. The cost of then dropping it
into the right category mailbox is minimal. Mutt can throw up a list of
filename-completion alternatives, or I use a simple shell function to list
matches to filename (category) fragments I offer it - also sketched below.
(Scrolling through 419 alternatives is a bit too GUI for my taste.)
E.g. if I want to save a post relating to cnc offsets:

$ mls 'cnc*offset'
/home/erik/mail/cnc_linux_axis_angle_offset
/home/erik/mail/cnc_linux_coordinate_offsets
/home/erik/mail/cnc_linux_quick_temporary_offset
/home/erik/mail/cnc_linux_tool_offsets

Now filename completion offers quicker destination selection than does
scrolling the full list - unless the match list has already narrowed the
choice to just a few.

And, later, when needing to find information on a subject, I can point
mutt at the preferred tiny subset of umpty thousand posts, and only need
to either visually scan a few hundred thread subjects, or perform a header
or body search in mutt. If there's no related sorting category, then a
grep of e.g. cnc_linux* will throw up mailboxes worth a look in mutt (an
example is below). But I dislike seeking a needle in a haystack - sorted
information is much more accessible.

There are no new tools to learn - mutt, grep, procmail, and ls do it all.
The filesystem is the database. Sorting the posts on receipt is the key to
adding value to the information.

And _really_ useful gems go into a (nearly) 400 page text file which folds
to a brief TOC (the folding trick is sketched below):

  UNIX USER ENVIRONMENT & TOOLS    59 P
  TEXT TOOLS & PRINTING            51 P
  LINUX SYSTEM ADMINISTRATION     141 P
  PROGRAMMING & EMBEDDED TOOLS     114 P
  LinuxCNC: EMC2: CNC:               6 P
  ATTIC: ~/misc/unix/Obsolete_Help

OK, it'll take a few evenings to clear the backlog of 1867 posts which
greeted me the night before last, after a week and a half away, but >90%
will be deleted, some threads unread.

Granted, a more database-ish approach allows attaching multiple keys to a
post, so that it is retrievable in multiple ways. I handle that by saving
a multifaceted post in two or three mailboxes, if there is enough content
value to warrant it.

In this instance there are only 58261 posts in the 1167 categories, even
after quite a few years of collecting. The method is well suited to
managing a distillation of a flood of posts, for ease of retrieving useful
guff. As with hashing, a large number of buckets increases the return on
the categorisation process.

It is perhaps no great recommendation to say that I have found nothing
better, but I am so happy with it that I'm not looking.

Erik

--
I have long felt that most computers today do not use electricity. They
instead seem to be powered by the "pumping" motion of the mouse!
- William Shotts, Jr. on http://linuxcommand.org/learning_the_shell.ph
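
For anyone wanting to copy the scheme, a minimal ~/.procmailrc doing the
per-list distribution described above might look like this sketch. The
list identifiers, addresses and mailbox names are only placeholders, not
the actual setup:

  # deliver into one mbox file per category under ~/mail
  MAILDIR=$HOME/mail
  DEFAULT=$MAILDIR/inbox          # anything unmatched lands here

  # one recipe per mailing list, matched on the List-Id header
  :0:
  * ^List-Id:.*emc-users
  cnc_list

  :0:
  * ^List-Id:.*vim_use
  vim_list

  # family mail, matched on the sender (address is illustrative)
  :0:
  * ^From:.*@family\.example\.org
  family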
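
The mls helper need be nothing more than a glob wrapped in a shell
function; its real body isn't shown in the post, so this is only a guess,
assuming one mbox file per category under ~/mail:

  # list category mailboxes matching a name fragment, e.g.: mls 'cnc*offset'
  mls () {
      # $1 stays unquoted so any * in the fragment joins the glob
      ls -1d ~/mail/*$1* 2>/dev/null
  }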
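
The "no related sorting category" fallback is just grep with -l, so that
only the names of mailboxes containing a match are printed; the search
string here is an arbitrary example:

  $ grep -l -i 'needle' ~/mail/cnc_linux*

Each name it prints is a mailbox worth opening in mutt.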
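
The "folds to a brief TOC" effect is presumably plain Vim marker folding
(the post doesn't say how it is done); one way to get it, reusing the
section headings shown above:

  UNIX USER ENVIRONMENT & TOOLS    59 P                       {{{1
    ... notes ...
  TEXT TOOLS & PRINTING            51 P                       {{{1
    ... notes ...

  vim: fdm=marker fdl=0

With all folds closed, only the heading lines remain visible - the brief
TOC.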