
I'm after a simple CLI MUA for sending GPG signed and/or encrypted messages.

I use kmail for almost all my email, as everyone who reads mail headers knows. ;) Kmail is generally quite good in everything that isn't broken. Some time ago I started a discussion about a replacement for Kmail due to some problems, the most serious of which is that versions newer than Wheezy don't work reliably for me. As I haven't found anything which is suitable for my needs (thanks for the advice, it was good but just didn't suit me) I'm still using Kmail.

Kmail's GPG support seems flakey to me, to the stage where I've given up trying.

So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail, for which it works quite well.

What do you recommend?

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

Quoting Russell Coker (russell@coker.com.au):
I'm after a simple CLI MUA for sending GPG signed and/or encrypted messages. I use kmail for almost all my email as everyone who reads mail headers knows. ;)
Kmail is generally quite good in everything that isn't broken. Some time ago I started a discussion about a replacement for Kmail due to some problems, the most serious of which is that versions newer than Wheezy don't work reliably for me. As I haven't found anything which is suitable for my needs (thanks for the advice, it was good but just didn't suit me) I'm still using Kmail.
Kmail's GPG support seems flakey to me to the stage where I've given up trying.
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
What do you recommend?
FWIW, my linuxmafia.com knowledgebase includes a bestiary of all known MUAs available for Linux. Sadly, the page's emphasis has always been breadth of coverage rather than depth, and I can't say I've put much if any emphasis on sufficiency of crypto support, but nonetheless it might be useful as a point of departure for your explorations: 'MUAs' on http://linuxmafia.com/kb/Mail/

Generally speaking, you are going to encounter the annoyance of GPG support requiring add-ons / extensions / modifications, presumably for many reasons including MUA authors not wanting the base MUA package to be banned (or prosecuted) in militantly backwards countries. That's my guess, anyway.

If you'd said graphical, I'd have suggested starting with Claws Mail. (Initially, I assumed based on the citation of Kmail you'd be looking for something similar. Then I belatedly paid attention to your Subject header and opening sentence.)

CLI, not so sure. mutt's GPG integration works well enough for me, but I've not compared with alternatives because I've been a stick-in-the-mud (stick in the mutt?) mutt user. I guess you could try Paranoy, if not mutt-enamoured.

--
Cheers,                     <blazemore> omg i love this song
Rick Moen                   <blazemore> Now playing: Unknown Artist - Track 2 @ 128 Kbps. (0:47/3:24)
rick@linuxmafia.com         <Javi> blazemore: Yeah, that's a bad-ass song.
McQ! (4x80)

Hi, On 7/04/2016 2:22 AM, Russell Coker via luv-main wrote:
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
Simple? Is this simple enough?

    $ gpg -e -s -a --default-key B7EFE2FB \
        -r andrew.mcglashan@affinityvision.com.au | \
        mailx -s 'test gpg cli' fred@example.net \
        -b fred@example.com

Type your passphrase, then type the message and end the message with <ctrl-d>.

That does encryption and signing [you may want one or both] with ascii armouring of STDIN to STDOUT. If your desired signing key is in your gpg.conf file, then you don't need the --default-key option. Oh and the -b is optional, for bcc to yourself...

Cheers
AndrewM
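For illustration, two variations on the same pipeline (not from the post above; addresses and filenames are placeholders, and the recipient-side command assumes the sender's public key is in the local keyring):

    # feed the message body from a file instead of typing it interactively:
    $ gpg -e -s -a -r fred@example.net < message.txt | \
        mailx -s 'test gpg cli' fred@example.net

    # recipient side: decrypt the saved body and report the signature status:
    $ gpg --decrypt saved-body.asc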

Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
On 7/04/2016 2:22 AM, Russell Coker via luv-main wrote:
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
Simple? Is this simple enough?
$ gpg -e -s -a --default-key B7EFE2FB \
    -r andrew.mcglashan@affinityvision.com.au | \
    mailx -s 'test gpg cli' fred@example.net \
    -b fred@example.com
Why does everyone *still* use gnupg 1.x ?

On Fri, 8 Apr 2016, Trent W. Buck wrote:
Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
On 7/04/2016 2:22 AM, Russell Coker via luv-main wrote:
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
Simple? Is this simple enough?
$ gpg -e -s -a --default-key B7EFE2FB \
    -r andrew.mcglashan@affinityvision.com.au | \
    mailx -s 'test gpg cli' fred@example.net \
    -b fred@example.com
Why does everyone *still* use gnupg 1.x ?
'cause that's what's in Debian? -- Tim Connors

Tim Connors via luv-main <luv-main@luv.asn.au> writes:
On Fri, 8 Apr 2016, Trent W. Buck wrote:
Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
On 7/04/2016 2:22 AM, Russell Coker via luv-main wrote:
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
Simple? Is this simple enough?
$ gpg -e -s -a --default-key B7EFE2FB \
    -r andrew.mcglashan@affinityvision.com.au | \
    mailx -s 'test gpg cli' fred@example.net \
    -b fred@example.com
Why does everyone *still* use gnupg 1.x ?
'cause that's what's in Debian?
Both are, since forever.

    bash4$ rmadison -aamd64 gnupg gnupg2
    debian:
     gnupg  | 1.4.10-4+squeeze4 | squeeze-security | amd64
     gnupg  | 1.4.10-4+squeeze4 | squeeze          | amd64
     gnupg  | 1.4.12-7+deb7u7   | wheezy-security  | amd64
     gnupg  | 1.4.12-7+deb7u7   | wheezy           | amd64
     gnupg  | 1.4.18-7+deb8u1   | jessie           | amd64
     gnupg  | 1.4.20-5          | stretch          | amd64
     gnupg  | 1.4.20-5          | sid              | amd64
     gnupg2 | 2.0.14-2+squeeze2 | squeeze-security | amd64
     gnupg2 | 2.0.14-2+squeeze2 | squeeze          | amd64
     gnupg2 | 2.0.19-2+deb7u2   | wheezy-security  | amd64
     gnupg2 | 2.0.19-2+deb7u2   | wheezy           | amd64
     gnupg2 | 2.0.25-1~bpo70+1  | wheezy-backports | amd64
     gnupg2 | 2.0.26-6          | jessie           | amd64
     gnupg2 | 2.1.11-6          | stretch          | amd64
     gnupg2 | 2.1.11-6          | sid              | amd64

On Fri, 8 Apr 2016 01:41:40 PM Trent W. Buck via luv-main wrote:
Why does everyone still use gnupg 1.x ?
'cause that's what's in Debian?
Both are, since forever.
dput depends on gnupg. torbrowser-launcher depends on gnupg. python-gnupginterface depends on gnupg (>= 1.2.1).

If you have gnupg and gnupg2 installed then the gpg command defaults to version 1.x. You can't uninstall gnupg if you are a DD, if you use Tor, or if that python library is something you need.

You can run gpg2 when using it on the command line, but reprogramming 10+ years of habits is not easy.

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
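One possible workaround for the habit problem, sketched here rather than taken from the post above (it assumes gnupg2 is installed and, for the symlink form, that ~/bin precedes /usr/bin in $PATH):

    # interactive shells only:
    alias gpg=gpg2                    # e.g. in ~/.bashrc

    # or shadow the packaged binary without touching /usr/bin/gpg:
    mkdir -p ~/bin
    ln -s /usr/bin/gpg2 ~/bin/gpg     # scripts using an absolute path still get 1.x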

Russell Coker via luv-main <luv-main@luv.asn.au> writes:
On Fri, 8 Apr 2016 01:41:40 PM Trent W. Buck via luv-main wrote:
Why does everyone still use gnupg 1.x ?
'cause that's what's in Debian?
Both are, since forever.
dput depends on gnupg. torbrowser-launcher depends on gnupg. python-gnupginterface depends on gnupg (>= 1.2.1).
If you have gnupg and gnupg2 installed then the gpg command defaults to version 1.x. You can't uninstall gnupg if you are a DD, if you use Tor, or if that python library is something you need.
These were in my mind as "everyone" when I asked.
[...] reprogramming 10+ years of habits is not easy.
Probably the most accurate answer, though annoying. :/

On Mon, 11 Apr 2016, Trent W. Buck via luv-main wrote:
Russell Coker via luv-main <luv-main@luv.asn.au> writes:
On Fri, 8 Apr 2016 01:41:40 PM Trent W. Buck via luv-main wrote:
Why does everyone still use gnupg 1.x ?
'cause that's what's in Debian?
Both are, since forever.
dput depends on gnupg. torbrowser-launcher depends on gnupg. python-gnupginterface depends on gnupg (>= 1.2.1).
If you have gnupg and gnupg2 installed then the gpg command defaults to version 1.x. You can't uninstall gnupg if you are a DD, if you use Tor, or if that python library is something you need.
These were in my mind as "everyone" when I asked.
Want to file an RC (security?) bug to them? -- Tim Connors

Okay, I've got to ask.

What exactly does gpg2 offer that makes it more suitable than gpg for most usage?

There should be a /feature/ comparison; where is it?

gpg 1.x is still maintained, it works, and the vast majority of users /seem/ to use it and only it.

I compiled gpg2 to make a 16384 length key... but that turned out to be completely painful, if not almost impossible, to use. Besides, RSA keys are the past... but maybe not, if so many people are only using gpg (v1.x).

A.

Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
What exactly does gpg2 offer that makes it more suitable than gpg for most usage?
I believe they rewrote gnupg-agent to actually make it secure - like ssh-agent - instead of just storing your passphrase it stores the private key and denies any processes getting direct access to the unencrypted private key. This also has other advantages apart from just security.

However, I haven't used gpg2 enough yet to know how to use this just yet.

--
Brian May <brian@linuxpenguins.xyz>
https://linuxpenguins.xyz/brian/

On 11/04/2016 4:24 PM, Brian May via luv-main wrote:
Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
What exactly does gpg2 offer that makes it more suitable than gpg for most usage?
I believe they rewrote gnupg-agent to actually make it secure - like ssh-agent - instead of just storing your passphrase it stores the private key and denies any processes getting direct access to the unencrypted private key. This also has other advantages apart from just security.
I use 0 minutes to cache my passphrase, so that should make me safe? Every single time I want to decrypt something or sign something, then I must enter my passphrase; this is quite deliberate and exactly how I want it.
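For reference, the zero-minute cache is usually expressed along these lines in the agent's config; the option names come from gpg-agent(1), and this only matters once an agent is in use at all:

    # ~/.gnupg/gpg-agent.conf
    default-cache-ttl 0    # drop a cached passphrase immediately after use
    max-cache-ttl 0        # hard upper bound on any caching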
However, I haven't used gpg2 enough yet to know how to use this just yet.
Okay. Thanks AndrewM

On Tue, 12 Apr 2016 03:56:17 AM Andrew McGlashan via luv-main wrote:
On 11/04/2016 4:24 PM, Brian May via luv-main wrote:
Andrew McGlashan via luv-main <luv-main@luv.asn.au> writes:
What exactly does gpg2 offer that makes it more suitable than gpg for most usage?
I believe they rewrote gnupg-agent to actually make it secure - like ssh-agent - instead of just storing your passphrase it stores the private key and denies any processes getting direct access to the unencrypted private key. This also has other advantages apart from just security.
I use 0 minutes to cache my passphrase, so that should make me safe?
It depends on what types of attack you are vulnerable to.

If there is a possibility of someone observing your keyboard (or monitoring the sound of key presses if you are more paranoid) then reducing the frequency of passphrase use is good for security - IE longer cache times.

If you have a device that doesn't permit root access (IE the logged in account doesn't have sudo permission) and the cache is secure (locked memory from a SUID/SGID process) then it might be better to have the cache remain for a long time.

An attack on cache memory (or process address space for temporary storage of the passphrase and/or decrypted private key) is something that could hang around. As gpg is no longer SGID (when did that change happen?) it's possible for any other process under the same UID to ptrace it.

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

On Mon, Apr 11, 2016 at 04:09:05PM +1000, Andrew McGlashan via luv-main wrote:
Okay, I've got to ask.
What exactly does gpg2 offer that makes it more suitable than gpg for most usage?
It depends on whether you mean GPG 2.0.x or 2.1.x. I skipped 2.0 because it was too annoying, but 2.1 is what I'm using these days. I still keep 1.4 around for those ancient relics who insist on using either old style keys (the kind Kleopatra made by default until last year when certain people got smacked upside the head for not making encryption subkeys by default) or making 16K keys (they're a waste of time and effort, if you must be that paranoid then 8K is still fine and 4K for comms ... well, it was good enough for Ed Snowden).
There should be a /feature/ comparison, where is it?
In the Changelogs.

Anyway, short version is that 2.1 has finally dropped support for the old PGP 2.x era keys (those 1K RSA ones from the mid-90s that no one should be using anyway). OTOH it has greater support for EC keys including, well, this:

    gpg (GnuPG) 2.1.11
    libgcrypt 1.6.5
    Supported algorithms:
    Pubkey: RSA, ELG, DSA, ECDH, ECDSA, EDDSA
    Cipher: IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH,
            CAMELLIA128, CAMELLIA192, CAMELLIA256
    Hash: SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
    Compression: Uncompressed, ZIP, ZLIB, BZIP2

Compression will probably be dropped altogether in a later version; people are far better off just tying the thing to available common algorithms which provide more efficient compression, like xz, anyway. Though there's a fair argument for moving the compression out of GPG itself, but still integrating it with GPGME so that it can remain functionally present without tying up the essential functions of GPG.

The whole suite, however, is more than just an OpenPGP implementation. For example, by default GPGME includes support for GnuTLS (i.e. how to connect to all those keyservers securely), GPG, an S/MIME implementation, and dirmngr (which now supports proxying through tor directly instead of having to pipe everything through proxychains or something like it).

There's also been an update to the pubring format; it now uses the keybox format (.kbx) which facilitates faster searches for keys, whereas the original format was basically just a flat file and thus suffered when selecting keys from larger keyrings.

The work to add encryption curves is what Werner is working on right now, but there will be greater additions later this year (after the revision of RFC 4880 is complete in around July).
gpg 1.x is still maintained, it works and the vast majority of users /seem/ to use it and only it.
That's changing, mainly due to curves and the fact that the version of gpg-agent with 2.0 was awful. I consider 2.0 a stepping stone to better things that are being implemented in 2.1. That said, I do still keep 1.4 around in case I need backwards compatibility with something.
I compiled gpg2 to make a 16384 length key... but that turned out to be completely painful, if not, almost impossible to use. Besides RSA keys are the past, but maybe not so if so many people are only using gpg (v1.x)
Large key support from 2.1 will basically stop at 8K; if you really want to make a 16K key then the easiest way is to modify the source for 1.4. You'll need to raise the key size maximums and increase the secmem. I'll leave the rest as an exercise to those who should know better, but otherwise think they know what they're doing.

Regards,
Ben

Ben McGinnes via luv-main writes:
[...] I still keep 1.4 around for [...] or making 16K keys (they're a waste of time and effort, if you must be that paranoid then 8K is still fine and 4K for comms ... well, it was good enough for Ed Snowden). [...] Large key support from 2.1 will basically stop at 8K, if you really want to make a 16K key then the easiest way is to modify the source for 1.4. You'll need to raise the key size maximums and increase the secmem. I'll leave the rest as an exercise to those who should know better, but otherwise think they know what they're doing.
When someone says "I need 16K RSA keys", don't they really mean "I want EC keys"? Because, like, RSA needs to be a lot longer than EC to provide the same security level. Obviously there are problems with that in practice (for GPG) because you need to interact with people still running gpg1 --- cf. EC in OpenSSH.

Quoting Trent W. Buck (trentbuck@gmail.com):
When someone says "I need 16K RSA keys", don't they really mean "I want EC keys"?
Because, like, RSA needs to be a lot longer than EC to provide the same security level.
I absolutely take you seriously on such things, Trent, but wonder if you can refer me to background materials about cryptographic strength. (Certainly, I am behind my times on readings concerning ciphers.)

A point Schneier often makes about cipher algorithms and crypto implementations is that, other variables being roughly equal, newer is bad and should be distrusted -- in the sense that we trust ciphers and implementations more if they've withstood many years of determined, expert attack. To illustrate his point, he said he _thought_ (and hoped) that his Twofish symmetric cipher was extremely good, but that Blowfish was a safer bet by pragmatic crypto standards, because Twofish was (then) brand-new, while Blowfish had proven robust over many years of wide usage and testing.

On 12/04/16 17:37, Rick Moen via luv-main wrote:
Quoting Trent W. Buck (trentbuck@gmail.com):
Because, like, RSA needs to be a lot longer than EC to provide the same security level.
but wonder if you can refer me to background materials about cryptographic strength.
Ecrypt have published a couple of reports on keysizes. A 512-bit EC keysize is roughly equivalent to a 15424-bit RSA keysize.

http://www.ecrypt.eu.org/ecrypt2/documents/D.SPA.20.pdf

These are really just a statement of the mathematical difficulty of brute forcing the keys using the best current algorithms, eg a general number field sieve for prime factoring vs a naive meet-in-the-middle attack to find a discrete logarithm. There are no mathematical proofs of the hardness of any of these problems.

As you point out, security also involves other factors - how well an algorithm has been examined by third parties, the soundness of the protocols, endpoint security, and so on.

Glenn
--
sks-keyservers.net 0x6d656d65

Quoting Glenn McIntosh (neonsignal@meme.net.au):
Ecrypt have published a couple of reports on keysizes. A 512bit EC keysize is roughly equivalent to a 15424 bit RSA keysize. http://www.ecrypt.eu.org/ecrypt2/documents/D.SPA.20.pdf
These are really just a statement of the mathematical difficulty of brute forcing the keys using the best current algorithms, eg a general number field sieve for prime factoring vs a naive meet-in-the-middle attack to find a discrete logarithm. There are no mathematical proofs of the hardness of any of these problems.
As you point out, security also involves other factors - how well an algorithm has been examined by third parties, the soundness of the protocols, endpoint security, and so on.
Thank you.

I note, without special objection to the elliptic curve cryptography recommendation but merely for completeness, that at least one ECC-based standard, a random number generator based on elliptic curve mathematics, has proven upon examination to have been compromised:

http://www.wired.com/2013/09/nsa-backdoor/

  Early this month the New York Times drew a connection between their talk
  and memos leaked by Edward Snowden, classified Top Secret, that apparently
  confirms that the weakness in the standard and so-called Dual_EC_DRBG
  algorithm was indeed a backdoor. The Times story implies that the backdoor
  was intentionally put there by the NSA as part of a $250-million,
  decade-long covert operation by the agency to weaken and undermine the
  integrity of a number of encryption systems used by millions of people
  around the world.

  The Times story has kindled a firestorm over the integrity of the
  byzantine process that produces security standards. The National Institute
  of Standards and Technology, which approved Dual_EC_DRBG and the standard,
  is now facing a crisis of confidence [...]

Yeah, thank you _so_ much, Never Say Anything people. Now, I have to worry that I can't trust anything from NIST. Bastards.

IETF and CFRG drew the same conclusions last year, and started moving towards non-NIST elliptic curves for Internet standards:
https://tools.ietf.org/html/draft-irtf-cfrg-curves-02

I also note this curio from half a year ago:
https://www.schneier.com/blog/archives/2015/10/why_is_the_nsa_.html

  Why Is the NSA Moving Away from Elliptic Curve Cryptography?

  In August, I wrote [link] about the NSA's plans to move to
  quantum-resistant algorithms for its own cryptographic needs.

  Cryptographers Neal Koblitz and Alfred Menezes just published a long paper
  [link] speculating as to the government's real motives for doing this.
  They range from some new cryptanalysis of ECC to a political need after
  the DUAL_EC_PRNG disaster -- to the stated reason of quantum computing
  fears.

  Read the whole paper. (Feel free to skip over the math if it gets too
  hard, but keep going until the end.)

  EDITED TO ADD (11/15): A commentary and critique [link] of the paper by
  Matthew Green.

I found the Green paper particularly interesting.

Some days, it seems like Charles Stross's _Halting State_ is becoming non-fiction.

The NIST problem is specific to /their/ earlier recommendations; and no, I don't think you can trust NIST. But if you stay clear of the particular NIST EC option, then other EC options are okay. A.

Quoting Andrew McGlashan (andrew.mcglashan@affinityvision.com.au):
The NIST problem is specific to /their/ earlier recommendations; and no, I don't think you can trust NIST.
For me specifically as opposed to most people here, the subversion of NIST was particularly irritating because it's funded by _my_ tax dollars. ('Their recommendations' were seemingly fed to them by No Such Agency -- and NIST had the abysmal judgement to accept same uncritically.)
But if you stay clear of the particular NIST EC option, then other EC options are okay.
Well, that's the interesting question, isn't it? It's not at all clear that such are OK. (Please see links.) Much has necessarily been cast into doubt.

--
Cheers,                 Controversy is dreaded only by the advocates of error.
Rick Moen                                                    -- Benjamin Rush
rick@linuxmafia.com

Rick Moen via luv-main writes:
But if you stay clear of the particular NIST EC option, then other EC options are okay.
Well, that's the interesting question, isn't it? It's not at all clear that such are OK. (Please see links.) Much has necessarily been cast into doubt.
My (armchair, inexpert) impression is that this isn't a reasonable inference. It'd be like saying "the wheel fell off my bicycle, therefore all wheeled vehicles are suspect".
For me specifically as opposed to most people here, the subversion of NIST was particularly irritating because it's funded by _my_ tax dollars. ('Their recommendtions' were seemingly fed to them by No Such Agency -- and NIST had the abysmal judgement to accept same uncritically.)
You may also wish to be angry, more broadly, about
https://en.wikipedia.org/wiki/FIPS_140-2#Reception
http://opensslrampage.org/post/83555615721/the-future-or-lack-thereof-of-lib...

You may also wish to be angry about the slow takeup of IPSec. ISTR rumours of the NSA filibustering the design committee, with the goal of making it so painful to use that most people wouldn't bother. Mission accomplished.

On 13/04/2016 6:09 PM, Trent W. Buck via luv-main wrote:
You may also wish to be angry about the slow takeup of IPSec. ISTR rumours of the NSA filibustering the design committee, with the goal of making it so painful to use that most people wouldn't bother. Mission accomplished.
I read that they (NSA) wanted IPSEC rolled into IPv6; that makes it suspect to start with. :(

A.

Quoting Trent W. Buck (trentbuck@gmail.com):
My (armchair, inexpert) impression is that this isn't a reasonable inference.
It'd be like saying "the wheel feel off my bicycle, therefore all wheeled vehicles are suspect".
Oh, I certainly wasn't saying 'doubt everything', as unfocussed paranoia is pointless and non-functional. Rather, doubt _more_ (and examine carefully), is all I was saying.

In case you weren't following links, Schneier noted six months ago the curiosity of the Never Say Anything people moving away from elliptic curve cryptography citing some alleged future threat from quantum computing, and linked to both a long academic paper by two cryptographers speculating as to the government's real motives for doing this, and a much shorter commentary and critique of that paper by Matthew Green (http://blog.cryptographyengineering.com/2015/10/a-riddle-wrapped-in-curve.ht...).

  If you're looking for a nice dose of crypto conspiracy theorizing and want
  to read a paper by some very knowledgeable cryptographers, I have just the
  paper for you. Titled "A Riddle Wrapped in an Enigma" by Neal Koblitz and
  Alfred J. Menezes, it tackles one of the great mysteries of the year 2015.
  Namely: why did the NSA just freak out and throw its Suite B program down
  the toilet?

Interesting reading -- and again I think of Schneier's dictum that in cryptography newer is worse, all other things being equal.

In a nutshell, what Green finds to be the most plausible and compelling hypothesis in Koblitz and Menezes's paper is that NSA isn't worried about quantum computers at all, but rather that they've made a major advance in _classical_ cryptanalysis of the elliptic curve discrete logarithm problem, rendering ECC as a class of ciphers generically weak and making its advantage in key length no longer worth the drawback.
You may also wish to be angry about more broadly, about https://en.wikipedia.org/wiki/FIPS_140-2#Reception http://opensslrampage.org/post/83555615721/the-future-or-lack-thereof-of-lib...
I frequently do admire the attitude of Theo de Raadt and company.

On Tue, Apr 12, 2016 at 03:56:45PM -0700, Rick Moen via luv-main wrote:
Quoting Andrew McGlashan (andrew.mcglashan@affinityvision.com.au):
The NIST problem is specific to /their/ earlier recommendations; and no, I don't think you can trust NIST.
For me specifically as opposed to most people here, the subversion of NIST was particularly irritating because it's funded by _my_ tax dollars. ('Their recommendtions' were seemingly fed to them by No Such Agency -- and NIST had the abysmal judgement to accept same uncritically.)
Don't worry, our tax dollars haven't been used much better:

http://www.adversary.org/wp/2013/09/10/australias-dsd-recommends-weak-encryp...

And before anyone pipes up with "they're ASD now" like certain pedants on Twitter, they weren't when that correspondence took place in 2012 (and they were in the process of changing names in 2013 when I went public).
But if you stay clear of the particular NIST EC option, then other EC options are okay.
Well, that's the interesting question, isn't it? It's not at all clear that such are OK. (Please see links.) Much has necessarily been cast into doubt.
There's been a *lot* of discussion of that on gnupg-users, so some selective Googling of the archives ought to answer a lot of questions. Curve25519 is already available in GPG 2.1 (and I think 2.0) for signing subkeys, but work is continuing on an equivalent encryption component.

Regards,
Ben
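For anyone wanting to experiment, a rough sketch of generating such a key with 2.1; the ECC choices are hidden unless expert mode is requested, and, per the note above, an encryption-capable curve may not yet be usable with correspondents on older versions:

    $ gpg2 --expert --full-gen-key
    # choose "ECC and ECC" or "ECC (sign only)", then Curve 25519 where offered;
    # keeping an RSA encryption subkey alongside it preserves interoperability.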

On 12.04.16 11:01, Rick Moen via luv-main wrote:
In August, I wrote [link] about the NSA's plans to move to quantum-resistant algorithms for its own cryptographic needs.
Rick,

Reading your post in mutt, I see 3 x "[link]", but no urls anywhere. Opening the raw post in vim, and piping the body to "base64 -d", the result is still the same. Were there urls in it when posted?

Erik

Quoting luv-main@luv.asn.au (luv-main@luv.asn.au):
On 12.04.16 11:01, Rick Moen via luv-main wrote:
In August, I wrote [link] about the NSA's plans to move to quantum-resistant algorithms for its own cryptographic needs.
Rick,
Reading your post in mutt, I see 3 x "[link]", but no urls anywhere. Opening the raw post in vim, and piping the body to "base64 -d", the result is still the same. Were there urls in it when posted?
As I post in ASCII (by preference), I merely wrote '[link]' where Schneier had a hyperlink in the cited HTML article, so interested people would know that links existed there. Article is, as I said, at https://www.schneier.com/blog/archives/2015/10/why_is_the_nsa_.html, where you will find the links and can follow them. Sorry that the intent was unclear.

Rick Moen via luv-main <luv-main@luv.asn.au> writes:
Quoting Trent W. Buck (trentbuck@gmail.com):
When someone says "I need 16K RSA keys", don't they really mean "I want EC keys"?
Because, like, RSA needs to be a lot longer than EC to provide the same security level.
I absolutely take you seriously on such things, Trent, but wonder if you can refer me to background materials about cryptographic strength. (Certainly, I am behind my times on readings concerning ciphers.)
I don't have cites handy; I was just repeating what I heard somewhere. The two things I remember (from when OpenSSH got EC support) are that:

 1. The closed community (NSA/military types) have used EC for about as long as the open community have been using prime factorization (RSA).

 2. A 2K RSA key is as strong as a <much smaller> ECDSA key. That's why ssh-keygen has 256/384/521 ECDSA & can't do 4K ECDSA.

On Tue, Apr 12, 2016 at 11:41:02AM +1000, Trent W. Buck via luv-main wrote:
Ben McGinnes via luv-main writes:
[...] I still keep 1.4 around for [...] or making 16K keys (they're a waste of time and effort, if you must be that paranoid then 8K is still fine and 4K for comms ... well, it was good enough for Ed Snowden). [...] Large key support from 2.1 will basically stop at 8K, if you really want to make a 16K key then the easiest way is to modify the source for 1.4. You'll need to raise the key size maximums and increase the secmem. I'll leave the rest as an exercise to those who should know better, but otherwise think they know what they're doing.
When someone says "I need 16K RSA keys", don't they really mean "I want EC keys"?
Pretty much, certainly for any kind of effective use.
Because, like, RSA needs to be a lot longer than EC to provide the same security level.
It's a bit rough, but 256-bit symmetric encryption from something like Twofish, Camellia or Serpent is about equivalent to an EC key twice the size (maybe a little less) and an RSA or El-Gamal key of about 15K (but everyone rounds up to 16K).

Rijndael (aka AES) is slightly weaker than the other three but a lot faster (which is why it won the AES competition and it still hasn't been broken so seems "good enough" for most people ... at least when they're not using something that shares primes [insert pointed look at SSL here]). It's also been subjected to more efforts to try to break it.

The definitions for SECRET level in Australia and the USA stipulate 192-bit symmetric encryption or its equivalent at 3072-bit asymmetric. So that's usually set to 192-bit AES and 3072-bit RSA. The TOP SECRET specifications are 256-bit symmetric and an unspecified asymmetric equivalent.
Obviously there's problems with that in practice (for GPG) because you need to interact with people still running gpg1 --- cf. EC in OpenSSH.
True, but then there are people who go to the extra effort of creating 8K or even 16K keys, but still default back to Triple-DES or even CAST5 for the symmetric component of their messages. These same people then point to the NSA hoovering up everything they can to crack it one day. FFS! How many times do we have to say it? Triple-DES was *designed* by the NSA and its original theoretical security level of 168-bit has already been *publicly* knocked down to 112-bit or less.

As far as I'm concerned, if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match, then you have no business trying to make keys larger than the default maximums.

That said, I still encourage everyone to make 4K keys by default for at least the cert key and the encryption subkey; signing subkeys are fine at 2K (mine is 3K with 4K for the other two).

I know I haven't posted to luv-main in a while and I know everyone else posting to this thread probably already knows my GPG bona fides are, erm, pretty good these days, but I'm also fairly sure there's more than a few lurkers right now asking "who the fsck is this guy and why should I care?"

Er ... short version: if you attended any of the original CryptoParties in Melbourne in 2012/2013 I was the one teaching GPG; I ported PyME (GPGME Python bindings) from Python 2 to Python 3 and they're currently in my branch(es) on git.gnupg.org (yeah, that means I'm the only Australian on the GPG dev team ... actually the only member of the team in the southern hemisphere).

Regards,
Ben

P.S. Don't worry about DECO and ITAR, I've already made sure it's all cleared and there are no problems. I even added my ITAR questionnaire results to the git server in a branch for a newer part of GPGME and Python dev, so it's on the record. ;)
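For readers who want to act on that gpg.conf advice, a minimal sketch; the option names come from gpg(1), and the ordering shown is only an example rather than a recommendation from the post above:

    # ~/.gnupg/gpg.conf
    # what you will prefer when a correspondent's key allows it:
    personal-cipher-preferences AES256 TWOFISH CAMELLIA256 AES192 AES
    personal-digest-preferences SHA512 SHA384 SHA256

    # what newly generated keys will advertise to others:
    default-preference-list SHA512 SHA384 SHA256 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed

    # an existing key is updated interactively, roughly:
    #   gpg --edit-key <your-key-id>
    #   gpg> setpref SHA512 SHA384 SHA256 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed
    #   gpg> save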

Ben McGinnes wrote:
How many times do we have to say it? Triple-DES was *designed* by the NSA and its original theoretical security level of 168-bit has already been *publicly* knocked down to 112-bit or less.
Isn't 3DES even a thing only because the banking industry had BAKED-IN a key length of 56 bits throughout their ENTIRE infrastructure (including merchant POS terminals and ATMs &c), and they couldn't afford to replace them all at once?

On GPG4WIN .. only GPG 2.0 btw ...

gpg --version ...

    gpg (GnuPG) 2.0.30 (Gpg4win 2.3.1)
    libgcrypt 1.6.5
    Copyright (C) 2015 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    Home: C:/Users/andrewm/AppData/Roaming/gnupg
    Supported algorithms:
    Pubkey: RSA, RSA, RSA, ELG, DSA
    Cipher: IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH,
            CAMELLIA128, CAMELLIA192, CAMELLIA256
    Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
    Compression: Uncompressed, ZIP, ZLIB, BZIP2

Is this good? ... gpg.conf ...

    ###+++--- GPGConf ---+++###
    utf8-strings
    keyserver-options http-proxy=http://192.168.0.201:8118
    keyserver hkp://keys.gnupg.net
    auto-key-locate local
    debug-level 0
    ###+++--- GPGConf ---+++### 06/13/15 16:52:09 AUS Eastern Standard Time
    # GPGConf edited this configuration file.
    # It will disable options before this marked block, but it will
    # never change anything below these lines.

    default-key B7EFE2FB

    # Default cipher is CAST5-128, AES256 is much better
    cipher-algo AES256

    # This forces "the use of encryption with a modification detection code".
    force-mdc

    # /extra/ torproject suggestions
    #utf8-strings -- already have this by default
    no-emit-version
    no-comments
    throw-keyids

I know the main key I'm using on mailing lists is just 2048 RSA bits; not real concerned about that. Besides, it sort of gives an opportunity to use the argument that it is weak so it is deniable; not that I think I have any reason to have to use that argument.

Cheers
AndrewM

On Wed, 13 Apr 2016 05:26:49 PM Ben McGinnes via luv-main wrote:
How many times do we have to say it? Triple-DES was designed by the NSA and its original theoretical security level of 168-bit has already been publicly knocked down to 112-bit or less.
It was designed when the idea was to simply ban export of strong crypto. While the people in power believed that such a ban was useful there wasn't a call to weaken security. I'm sure that the people in power then believed that they could develop strong crypto for communicating with our peaceful allies like Osama bin Laden while the "Godless Communists" who were trying to persecute Osama et al for their religious beliefs would never be able to access it.

Meanwhile around the world police are legitimately arresting wanted criminals when they make unusual orders of pizza or tacos. When food for many people is delivered to the home of a known criminal then police don't need to crack any crypto to know that there might be someone worth arresting in residence.
As far as I'm concerned if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match then you have no business trying to make keys larger than the default maximums.
Actually I think it's the responsibility of the DDs in question (and other OS developers) to ensure that GPG defaults to the correct algorithm preference. Also it would be handy if there was a tool to check your GPG configuration and key setup for obvious mistakes.
That said, I still encourage everyone to make 4K keys by default for at least the cert key and the encryption subkey, signing subkeys are fine at 2K (mine is 3K with 4K for the other two).
-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Wed, Apr 13, 2016 at 10:06:11PM +1000, Russell Coker wrote:
On Wed, 13 Apr 2016 05:26:49 PM Ben McGinnes via luv-main wrote:
As far as I'm concerned if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match then you have no business trying to make keys larger than the default maximums.
Actually I think it's the responsibility of DDs in question (and other OS developers) to ensure that GPG defaults to the correct algorithm preference.
Currently the default in most Linux distros (or OSes for that matter) is to create ~/.gnupg/ if it's not there when the program is invoked, but not to generate a default gpg.conf.

Distributions could set more sensible defaults by setting a basic system-wide gpg.conf to be copied to a user's directory if it didn't exist, but the problem is that the first command for a lot of new users is --gen-key, and if the gpg.conf is not already in place when the command is run then it won't affect the results.
Also it would be handy if there was a tool to check your GPG configuration and key setup for obvious mistakes.
That's a very good idea. The biggest hurdle I can see at the moment is that the info is normally only visible interactively by editing a key and using the showpref command. OTOH I haven't had nearly enough caffeine yet to be firing on all cylinders, so let it simmer in the back of my brain for a while and we'll see. ;)

My main GPGME Python work is dependent on an overhaul of GPGME itself (someone needs to rip all that GTK2 crap out of the C API for a start). So this might give me something useful to do in the mean time.

Regards,
Ben

[Warning: super ranty email follows.] Ben McGinnes wrote:
On Wed, Apr 13, 2016 at 10:06:11PM +1000, Russell Coker wrote:
On Wed, 13 Apr 2016 05:26:49 PM Ben McGinnes via luv-main wrote:
As far as I'm concerned if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match then you have no business trying to make keys larger than the default maximums.
Actually I think it's the responsibility of DDs in question (and other OS developers) to ensure that GPG defaults to the correct algorithm preference.
Currently the default in most Linux distros (or OSes for that matter) is to create ~/.gnupg/ if its not there when the program is invoked, but not to generate a default gpg.conf.
Why is it the DD's responsibility, rather than the upstream GnuPG project's responsibility? Surely people *writing* crypto software know better than people *packaging* crypto software what the Best Current Practice is.

Upstream & distro might choose different defaults because of a balance between usability & security (especially if it's a specialist distro), but in general I'd expect upstream to lean heavily towards security.

Is gnupg upstream defaulting to weak settings? If so, why? Just to avoid bikeshedding on the ML, or something?
Distributions could set more sensible defaults by setting a basic system wide gpg.conf to be copied to a user's directory if it didn't exist, but the problem is that the first command for a lot of new users is --gen-key and if the gpg.conf is not already in place when the command is run then it won't affect the results.
If there's a host-wide default, it shouldn't need to be copied into ~ to take effect. If it has to be copied in, what happens when both /etc and ~ versions are edited? Do the upstream improvements just get ignored for existing users?

Compare with how OpenSSH built-in defaults are overridden by /etc/ssh_config, which is in turn overridden by ~/.ssh/config.

[BEGIN TANGENTIAL RANT]

One problem I had configuring OpenSSH is that you can say

    My preferred cipher order is: <strong ciphers as at 2016>

But you can't have JUST a blacklist

    Never use: <weak ciphers as at 2016>

That means if I set up my .ssh/config sensibly TODAY, then forget to go back and maintain it, I'll end up with security that's *worse* than the upstream default, instead of better than the upstream default. Because upstream will say

    KexAlgorithms <strong as at 2020>, <medium as at 2020>

And my .ssh/config will override it to say

    KexAlgorithms <strong as at 2016>

Where <strong as at 2016> will equate to <medium as at 2020>, and my config never sees <strong as at 2020>.

[END TANGENTIAL RANT]

I dunno if that applies to GnuPG because, frankly, I've been obscenely lazy.
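For concreteness, the whitelist form described in the rant looks roughly like this; the algorithm names are only illustrative 2016-era choices, and the set your build actually supports is listed by `ssh -Q kex` and `ssh -Q cipher`:

    # ~/.ssh/config (sketch)
    Host *
        KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
        Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
        MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com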
Also it would be handy if there was a tool to check your GPG configuration and key setup for obvious mistakes.
An actively maintained "linter" that checked both config and the keys themselves, to make sure they followed current best practice, sounds like a great idea. I am thinking something like lintian or perlcritic, which not only says "this is wrong", but also explains why it's wrong, & suggests how to fix it.

On Thu, Apr 14, 2016 at 05:41:34PM +1000, Trent W. Buck via luv-main wrote:
[Warning: super ranty email follows.]
Ben McGinnes wrote:
On Wed, Apr 13, 2016 at 10:06:11PM +1000, Russell Coker wrote:
On Wed, 13 Apr 2016 05:26:49 PM Ben McGinnes via luv-main wrote:
As far as I'm concerned if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match then you have no business trying to make keys larger than the default maximums.
Actually I think it's the responsibility of DDs in question (and other OS developers) to ensure that GPG defaults to the correct algorithm preference.
Currently the default in most Linux distros (or OSes for that matter) is to create ~/.gnupg/ if its not there when the program is invoked, but not to generate a default gpg.conf.
Why is it the DD's responsibility, rather than upstream GnuPG project's responsibility?
Because most distros will decide which options their default installation will compile with. You're assuming that the most common defaults will be so everywhere *and* that every OS or distro compiles in all the available algorithms; neither of these is true. For instance, everyone's "favourite" distro and its derivatives (Debian) does not include Camellia in a default installation, and that won't be quite so popular in Japan.
Surely people *writing* crypto software know better than people *packaging* crypto software, what the Best Current Practice is.
Yes, but they're also concerned about backwards compatibility to an extent, not to mention not taking decisions away from people regarding their own threat models.
Upstream & distro might choose different defaults because of a balance between usability & security (especially if it's a specialist distro), but in general I'd expect upstream to lean heavily towards security.
Upstream also takes into account multiple device types and security models, so for a long time the default key sizes also took into account the physical limitations of key sizes on cards and card readers. They couldn't handle 4K keys for a *long* time, but the additional security advantages of keeping secret keys on the device outweighed that for a lot of people.
Is gnupg upstream defaulting to weak settings?
No, but it isn't defaulting to the strongest possible settings either.
If so, why?
A combination of factors including the processing power limitations of some devices (e.g. phones), the key size limitations of other devices (e.g. cards/card readers), compatibility with legacy configurations, defaulting to AES for symmetric encryption (it's not broken yet, but some implementations leave a lot to be desired) and so on.
Just to avoid bikeshedding on the ML, or something?
Distributions could set more sensible defaults by setting a basic system wide gpg.conf to be copied to a user's directory if it didn't exist, but the problem is that the first command for a lot of new users is --gen-key and if the gpg.conf is not already in place when the command is run then it won't affect the results.
If there's a host-wide default, it shouldn't need to be copied into ~ to take effect.
No, there are the builtin defaults (effectively the same thing), but there are a couple of samples in testing/openpgp/ in the source directory. Pretty much most, if not all, of the command line flags are the options anyway. At least their long form versions (i.e. not the short -k, -r, -u, -a and so on, the others invoked with "--" normally).
If it has to be copied in, what happens when both /etc and ~ versions are edited? Do the upstream improvements just get ignored for existing users?
A user's own gpg.conf takes precedence if the local installation supports all the settings, but settings saved into a key itself will override that. So if I change the order of my preferred algorithms to place AES256 (back) to the preferred cipher in my gpg.conf that won't affect my current key unless I also edit it, but it will affect the preferences for a new key I generate. So my key is set with cipher preferences placing TWOFISH and CAMELLIA256 before AES256, but GPG will usually still select AES256 when others send me encrypted email because their systems usually don't have either of the first two compiled in at all, let alone have a preference list including them.
Compare with how OpenSSH built-in defaults are overridden by /etc/ssh_config, which is in turn overriden by ~/.ssh/config.
Yeah, it's kind of similar in a number of ways, but gets a little more complicated by the cipher selection process used by GPG. Especially when encrypting a message to multiple keys as it tries to select the most preferred symmetric cipher common to all the keys the message is encrypted to.
[BEGIN TANGENTIAL RANT] [SNIP] [END TANGENTIAL RANT]
Yeah ... I don't think anyone has a solution for that ... except maybe to set yourself a reminder (cronjob) to check it every N months.
I dunno if that applies to GnuPG because, frankly, I've been obscenely lazy.
All coders are lazy; that's what produces most of the scripts and smaller projects we produce: trying to make things quicker and easier. Hell, I've even got one that checks coronial findings for certain names and then tells me if I need to pour myself a double-shot of bourbon before I proceed.
Also it would be handy if there was a tool to check your GPG configuration and key setup for obvious mistakes.
An actively maintained "linter" that checked both config and the keys themselves, to make sure they followed current best practice, sounds like a great idea.
I am thinking something like lintian or perlcritic, which not only says "this is wrong", but also explains why it's wrong, & suggests how to fix it.
Well, Werner got back to me on that and there are already plans for some new interfaces to be included with the upcoming GPG 2.2 (which follows straight on from 2.1), and you can look forward to an announcement of an EOL for 2.0 at around the same time (about 2 years from the announcement, which should be a bit later this year). Don't worry, we're keeping Classic for all the usual reasons.

Anyway, the new interfaces will at the very least check that keys conform to the current (at the time) RFC requirements (e.g. that they must be N-bits in length, include an encryption subkey, don't include encryption with the certification key, etc.). Once that's done it can be extended to include recommendations (i.e. the SHOULD parts of the RFC) or whatever.

In the mean time, though, I think I might have to play with this other little unadvertised gem I found on the git server. It seems that Werner wrote a Stripe payment processor to take GPG donations and then didn't tell anyone (well, no big announcement).

Regards,
Ben

Ben McGinnes via luv-main <luv-main@luv.asn.au> writes:
On Thu, Apr 14, 2016 at 05:41:34PM +1000, Trent W. Buck via luv-main wrote:
Ben McGinnes wrote:
On Wed, Apr 13, 2016 at 10:06:11PM +1000, Russell Coker wrote:
On Wed, 13 Apr 2016 05:26:49 PM Ben McGinnes via luv-main wrote:
As far as I'm concerned if you can't be bothered editing your algorithm preference order in gpg.conf and editing your keys and subkeys (actually they're set according to each UID) to match then you have no business trying to make keys larger than the default maximums.
Actually I think it's the responsibility of DDs in question (and other OS developers) to ensure that GPG defaults to the correct algorithm preference.
Currently the default in most Linux distros (or OSes for that matter) is to create ~/.gnupg/ if its not there when the program is invoked, but not to generate a default gpg.conf.
Why is it the DD's responsibility, rather than upstream GnuPG project's responsibility?
Because most distros will decide which options their default installation will compile with. You're assuming that the most common defaults will be so everywhere *and* that every OS or distro compiles in all the available algorithms, neither of these are true.
My actual thinking was:

 1. upstream decides what's best for everyone (in general);
 2. distro amends that for distro-specific requirements;
 3. sysadmin amends that for site-specific requirements; &
 4. end user amends that for user-specific requirements.

...so that if any one entity in that chain doesn't bother, hopefully it will fall back to the sensible defaults set above them.

I was grumpy because I keep running into people assuming that one of those steps isn't needed. For example, as a (3) I cannot put folders in users' default Thunar sidebar, except via /etc/skel which- oh yeah, I only just finished ranting about that.

I read your initial assertion as very... dogmatically pushing all responsibility onto (4), which felt unrealistic when the end users aren't crypto geeks.

You're right that I didn't consider a desktop distro shipping very strong defaults & creating a problem when the desktop user corresponds with someone on an embedded platform with fewer resources. I assumed most distros would either

 * ./configure --enable-*, but leave the default cipher order as-is; or
 * ./configure && make, relying entirely on (1).
Surely people *writing* crypto software know better than people *packaging* crypto software, what the Best Current Practice is.
Yes, but they're also concerned about backwards compatibility to an extent, not to mention not taking decisions away from people regarding their own threat models.
[Lots more details about upstream's rationale, especially devices with fewer resources than a desktop.]
OK, fair enough.

On Wed, Apr 20, 2016 at 08:20:13PM +1000, Trent W. Buck via luv-main wrote:
My actual thinking was:
1. upstream decides what's best for everyone (in general); 2. distro amends that for distro-specific requirements;
And a lot of them just inherit from no. 1, without considering the full algorithm availability. Which is why we see things like hash digest preferences that go SHA256, SHA1 and then digests larger than 256.
3. sysadmin amends that for site-specific requirements; & 4. end user amends that for user-specific requirements.
...so that if any one entity in that chain doesn't bother, hopefully it will fall back to the sensible defaults set above them.
I was grumpy because I keep running into people assuming that one of those steps isn't needed. For example, as a (3) I cannot put folders in users' default Thunar sidebar, except via /etc/skel which- oh yeah, I only just finished ranting about that.
Heh.
I read your your initial assertion very... dogmatically pushing all responsibility onto (4), which felt unrealistic when the end users aren't crypto geeks.
My only real quibbling here is with people who want to fill the world with 16K keys without considering any other security precaution as if having a 16K key is all that they need to protect themselves from, say, the NSA. Nevermind the fact that if they came to that level of attention they'd experience the wonderful world of trojans and keystroke loggers.
You're right that I didn't consider a desktop distro shipping very strong defaults & creating a problem when the desktop user corresponds with someone on an embedded platform with fewer resources.
I assumed most distros would either
* ./configure --enable-*, but leave the default cipher order as-is; or * ./configure && make, relying entirely on (1).
The other fun one is corresponding with and encrypting to multiple recipients ... to the tune of around 30+ recipients. Even when most of them have 2K keys you'll still end up with most messages being around 40K (and the actual messages being fairly short).

Regards,
Ben

On Wed, Apr 13, 2016 at 10:06:11PM +1000, Russell Coker wrote:
Also it would be handy if there was a tool to check your GPG configuration and key setup for obvious mistakes.
And in proof that I had had neither enough caffeine nor blood moving through my brain; I have now remembered and confirmed that all that info is available from --list-packets and more human readable with pgpdump. So scripting something which uses export of the public key and then list-packets to check it should be fairly straight forward.

I figure some little command line thing where you enter a key ID, fingerprint or UID and it'll do the rest. For UIDs then checking the secret keyring first is probably best, but multiple matches can either be dealt with a "pick one from the list" option or just running the check against every match (maybe, some public keyrings are getting a bit large now).

Anyway, I'll check with the others and make sure there isn't already something like that tucked away somewhere and if not then it seems like a good side project to add somewhere. I just need to decide precisely where.

Regards,
Ben

--
| Ben McGinnes | Adversarial Press | Twitter: benmcginnes |
| Writer, Publisher, Systems Administrator, Trainer, ICT Consultant |
| http://www.adversary.org/ http://publishing.adversary.org/ |
| GPG Made Easy (GPGME) Python 3 API Maintainer, GNU Privacy Guard |
| https://www.gnupg.org/ https://ssd.eff.org/ |
| GPG key: 0x321E4E2373590E5D http://www.adversary.org/ben-key.asc |
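As a rough sketch of that export-and-inspect idea (the key ID is a placeholder, and these commands only dump the packets rather than lint them):

    $ gpg --export 0xDEADBEEF | gpg --list-packets | less    # raw packet dump
    $ gpg --export 0xDEADBEEF | pgpdump | less                # same thing, more readable

    # the interactive equivalent mentioned earlier:
    #   gpg --edit-key 0xDEADBEEF
    #   gpg> showpref
    #   gpg> quit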

Tim Connors via luv-main <luv-main@luv.asn.au> writes:
On Mon, 11 Apr 2016, Trent W. Buck via luv-main wrote:
Russell Coker via luv-main <luv-main@luv.asn.au> writes:
On Fri, 8 Apr 2016 01:41:40 PM Trent W. Buck via luv-main wrote:
Why does everyone still use gnupg 1.x ?
'cause that's what's in Debian?
Both are, since forever.
dput depends on gnupg. torbrowser-launcher depends on gnupg. python-gnupginterface depends on gnupg (>= 1.2.1).
If you have gnupg and gnupg2 installed then the gpg command defaults to version 1.x. You can't uninstall gnupg if you are a DD, if you use Tor, or if that python library is something you need.
These were in my mind as "everyone" when I asked.
Want to file an RC (security?) bug to them?
AFAIK gnupg1 is still maintained by the gnupg people. I'm just going on the assumptions that:

 * "stable" sounds a lot better than "classic"; and
 * EC is cool now.

Oh, also I guess that split-out libgcrypt in 2.x is used in other stuff, like xwayland and ntfs-3g and wireshark...

IIRC the main argument *AGAINST* 2.x for apt is that you can't install gnupg2 without also installing gnupg-agent. And nobody wants that on all their routers and phones.

I hoped https://www.gnupg.org/faq/gnupg-faq.html would have a section like "Why Should I Use Stable (not Classic)?", but I can't find it.

On Tue, Apr 12, 2016 at 11:33:17AM +1000, Trent W. Buck via luv-main wrote:
Tim Connors via luv-main <luv-main@luv.asn.au> writes:
Want to file an RC (security?) bug to them?
AFAIK gnupg1 is still maintained by the gnupg people.
It is, but most of the dev work is handled by either DKG or David Shaw these days, while Werner concentrates on 2.0 and 2.1.
I'm just going on the assumptions that:
* "stable" sounds a lot better than "classic"; and
But "modern" is more fun! ;)
* EC is cool now.
Also not entirely proven, but hopefully that will come with time.
Oh, also I guess that split-out libgcrypt in 2.x is used in other stuff, like xwayland and ntfs-3g and wireshark...
IIRC the main argument *AGAINST* 2.x for apt, is that you can't install gnupg2 without also installing gnupg-agent. And nobody wants that on all their routers and phones.
Most of the arguments against gpg-agent aren't actually against gpg-agent, they're against pinentry, which is what people see. So there has been work on improving the ncurses and tty interfaces (and an Emacs specific one) to address that. The design of both for 2.0 was a bit crap and the major reason I skipped "Stable" entirely. The design and use for 2.1 is *much* better.

For someone like you, Russell or Rick, I *highly* recommend making an /opt/gnupg or something and compiling 2.1 to have a play with it where it won't screw with anything essential, just in case, to see what I mean.
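That experiment looks roughly like the following; the version number is a placeholder, 2.1 needs its support libraries (libgpg-error, libgcrypt, libassuan, libksba, npth) installed or built first, and the installed binary may be named gpg or gpg2 depending on configure options:

    $ tar xf gnupg-2.1.x.tar.bz2 && cd gnupg-2.1.x
    $ ./configure --prefix=/opt/gnupg
    $ make && sudo make install
    $ /opt/gnupg/bin/gpg2 --version    # try it without touching the system gpg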
I hoped https://www.gnupg.org/faq/gnupg-faq.html would have a section like "Why Should I Use Stable (not Classic)?", but I can't find it.
Classic is still good for servers and things where you want something entirely self-contained with no dependencies beyond your compiler. Regards, Ben

Ben McGinnes wrote:
IIRC the main argument *AGAINST* 2.x for apt, is that you can't install gnupg2 without also installing gnupg-agent. And nobody wants that on all their routers and phones.
Most of the arguments against gpg-agent aren't actually against gpg-agent, they're against pinentry, [...]
I hoped https://www.gnupg.org/faq/gnupg-faq.html would have a section like "Why Should I Use Stable (not Classic)?", but I can't find it.
Classic is still good for servers and things where you want something entirely self-contained with no dependencies beyond your compiler.
That was what I was thinking when I said "routers and phones". I didn't say "servers" because those have, like, more than 2MB of nonvolatile storage & RAM, so whinging that you need to waste 128kiB on some stupid agent is less defensible.

Hm, mental note: check what level of trust verification opkg in OpenWRT actually performs.

"Trent W. Buck via luv-main" <luv-main@luv.asn.au> writes:
Why does everyone *still* use gnupg 1.x ?
This article talks about gnupg in Debian. Apparently there is an experimental transition being planned.

https://www.preining.info/blog/2016/04/gnupg-subkeys-yubikey/

--
Brian May <brian@linuxpenguins.xyz>
https://linuxpenguins.xyz/brian/

On Thu, Apr 07, 2016 at 02:22:16AM +1000, Russell Coker via luv-main wrote:
So my plan now is to use something simple for sending GPG encrypted mail (which is a small portion of my email) and use Kmail for the majority of mail for which it works quite well.
What do you recommend?
I've recently returned to using Mutt since Thunderbird shat itself. Depending on your mail volumes Thunderbird with Enigmail might suffice, but I can no longer recommend it, especially to anyone I suspect has mail volumes comparable to my own. ;)

Mutt still has one considerable advantage for GPG which all the competition has yet to meet: if you compile it with GPGME support and enable that option in your .muttrc then it will have complete access to *everything* in GPGME (even though Mutt itself doesn't use all of that, the potential is there). These are the perks of a mail client which Werner Koch used for a number of years, and so he wrote all the code to hook Mutt into GPGME directly. Though he's switched to Gnus now, so I guess that's an option too (if Emacs is your thing and if I recall correctly it isn't).

Regards,
Ben
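A minimal .muttrc sketch of the GPGME option mentioned above, with a couple of common companions; the settings are illustrative and assume a mutt built with GPGME support:

    # ~/.muttrc
    set crypt_use_gpgme    = yes    # hand OpenPGP/S-MIME work to GPGME
    set crypt_autosign     = yes    # sign outgoing mail by default
    set crypt_replyencrypt = yes    # encrypt replies to encrypted mail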
participants (10)

 - Andrew McGlashan
 - Ben McGinnes
 - Brian May
 - Erik Christiansen
 - Glenn McIntosh
 - Rick Moen
 - Russell Coker
 - Tim Connors
 - Trent W. Buck
 - trentbuck@gmail.com