That's not really compatible with piped output. The encrypted message can't be authenticated until it has been completely processed, but the whole point of piping is to output bytes as soon as they're available.
Perhaps the moral of this story is to disable GPG's pipe feature? But it's a legitimate and significant performance improvement for authentic messages. You "just" have to remember to check the error code, and then it's safe.
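Concretely, "check the error code" means something like this (a Python sketch, not GPG's own code; it assumes gpg is on PATH and the error handling is illustrative):

    # Run gpg as a child process, buffer its output, and hand the
    # plaintext onward only if gpg exited cleanly.
    import subprocess
    import sys

    def decrypt_checked(path: str) -> bytes:
        proc = subprocess.run(
            ["gpg", "--decrypt", path],
            capture_output=True,   # buffer stdout instead of streaming it
        )
        if proc.returncode != 0:
            # An MDC failure (or any other error) lands here; discard the output.
            raise RuntimeError(proc.stderr.decode(errors="replace"))
        return proc.stdout

    if __name__ == "__main__":
        sys.stdout.buffer.write(decrypt_checked(sys.argv[1]))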
Perhaps that's just too much to ask. Maybe we just can't have fast streaming decryption because it's too hard for client developers to use safely. But that conclusion, at least, is not obvious.
(On the other hand, what were you planning to do with the piped output in the first place? Probably render it, right? If GPG clients stream unauthenticated bytes into a high-performance HTML renderer, the result will surely be efail.)
That is not the whole point of piping, and the default behavior of GPG should be to buffer, validate the MDC, and release plaintext only after it's been authenticated. Pipes are a standardized Unix interface between processes, not a solemn pledge to deliver bytes as quickly as possible.
If pipes had the connotation you claim they do, it would never be safe to pipe ciphertext, because the whole goal of modern AEAD cryptography is never to release unauthenticated plaintext to callers.
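In sketch form, with a stand-in AEAD (AES-GCM from the Python cryptography package, nothing PGP-specific), that contract looks like this:

    # decrypt() either returns fully authenticated plaintext or raises;
    # it never emits partial, unauthenticated output.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"the whole message", b"")

    tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])  # flip one bit
    try:
        AESGCM(key).decrypt(nonce, tampered, b"")
    except InvalidTag:
        print("the caller never sees a single unauthenticated byte")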
Clients encrypting whole ISO images should expect that decryption will require a non-default flag. Ideally, GPG would do two-pass decryption, first checking the MDC and then decrypting. Either way, the vast, commanding majority of all messages GPG ever processes --- in fact, that the PGP protocol processes --- should be buffered and checked.
If you have a complaint about how unwieldy this process is, your complaint is with the PGP protocol. The researchers, and cryptographers in general, agree with you.
Is it really so easy for GPG to buffer the cleartext while validating the MDC? The cleartext may not fit in RAM, so GPG could need to write it to a temporary file, right? But then, if decryption is aborted messily (e.g., the machine loses power), GPG would leave a file with part of the cleartext behind, which has obvious security implications.
You could also imagine a two-pass approach where you first verify and then decrypt, but then what about a time-of-check/time-of-use race where another process modifies the encrypted file between the two passes?
Again, the cleartext virtually always fits trivially in RAM, and when it doesn't, it can error out and require a flag to process. Yes, this is easy to fix.
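A minimal sketch of that policy (the size cap and the flag name are made up):

    import sys

    MAX_BUFFERED = 64 * 1024 * 1024                     # 64 MiB: far beyond any normal email
    allow_huge = "--dangerous-streaming" in sys.argv    # hypothetical opt-in flag

    data = sys.stdin.buffer.read(MAX_BUFFERED + 1)
    if len(data) > MAX_BUFFERED:
        if not allow_huge:
            sys.exit("input too large to buffer safely; rerun with --dangerous-streaming")
        sys.exit("(a real tool would fall back to streaming decryption here)")

    # ... authenticate `data` (MDC / AEAD tag) here, and only then ...
    sys.stdout.buffer.write(data)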
OpenPGP needs to change as well, but that doesn't make insecure behavior acceptable in the interim, no matter what Werner Koch thinks.
What is the point of piping, in your view? My understanding is that it's a stream of bytes with backpressure, designed specifically to minimize buffering (by pausing output when downstream receivers are full/busy).
> If pipes had the connotation you claim they do, it would never be safe to pipe ciphertext, because the whole goal of modern AEAD cryptography is never to release unauthenticated plaintext to callers.
You say that like it's a reductio ad absurdum, but I think that's essentially right; you can't do backpressure with unauthenticated ciphertext. You have to buffer the entire output to be sure that it's safe for further processing.
Thus, if you want to buffer the entire output, don't use a pipe; ask the tool to generate a file, and then read the file only when the process says that the file is done and correct.
(I'd say that's a lot more wieldy than two-pass decryption.)
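Something like this, say (a Python sketch; the file names are illustrative, and the only point is that nothing reads the output until the process has reported success):

    # Let gpg write its own output file and read it only after gpg says it
    # finished without error.
    import subprocess

    proc = subprocess.run(["gpg", "--output", "message.txt", "--decrypt", "message.gpg"])
    if proc.returncode == 0:
        with open("message.txt", "rb") as f:
            plaintext = f.read()      # safe to hand to further processing
    else:
        print("decryption failed; leave message.txt alone (or delete it)")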
Based on your other remarks about PGP ("nuke it from orbit"), I'm not sure you have any constructive suggestions for improving GPG, but I guess making it two-pass by default (with a --dangerous-single-pass flag) would be an improvement.
For normal size emails, users probably wouldn't notice the performance cost of the second pass, and clients who care about performance at that level can opt into single-pass decryption and just promise to check the error code.
Backpressure is a feature of Unix pipes. It isn't their raison d'être.
I don't care how you implement it, but any claim that you can't check the MDC in GPG because it's a piped interface is obviously false. GPG can, like any number of Unix utilities, some casually written and some carefully written, simply buffer the data, process it, and write it.
Nobody said "you can't check the MDC." Everybody said "you have to check GPG's error code."
And I think it's clear to everybody (in this thread) that GPG's approach is dangerous, blame-the-user API design, even granting that it offers optimum performance (especially relative to adding an entire second pass).
You are getting your concepts confused. Pipes are one thing, but the concept at hand is in fact filters, programs that primarily read data from their standard inputs and write other (processed) data to their standard outputs. What pipes involve is a red herring, because filters do not necessitate pipes. Filters are under no requirement to write output whilst there is still input to be read, and several well-known filter programs indeed do not.
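sort(1) is the canonical example: it cannot emit its first line of output until it has read its last line of input. A trivial filter of the same shape, as a sketch:

    # A filter that cannot write anything until it has consumed all of its
    # input -- the same shape GPG could use.
    import sys

    data = sys.stdin.buffer.read()        # consume the entire input first
    processed = data[::-1]                # stand-in "processing": reverse the bytes
    sys.stdout.buffer.write(processed)    # only now does any output appear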
This is why I initially hesitated to implement a streaming interface for my crypto library¹ (authenticated encryption and signatures). I eventually did it, but felt compelled to sprinkle the manual with warnings about how streaming interfaces encourage the processing of unauthenticated messages.
Now that we have a vulnerability with a name, I think I can make those warnings even scarier. I'll update that manual.
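For context, the general shape of such a streaming interface is roughly the following toy sketch, with a stand-in AEAD (AES-GCM from the Python cryptography package); it is not the library's actual API:

    # Each chunk is authenticated before it is released, but a consumer
    # acting on early chunks still acts before it knows the message
    # wasn't truncated -- hence the warnings.
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)

    def encrypt_chunks(chunks):
        for counter, chunk in enumerate(chunks):
            nonce = counter.to_bytes(12, "big")          # per-chunk counter nonce
            yield AESGCM(key).encrypt(nonce, chunk, b"")

    def decrypt_chunks(ciphertexts):
        for counter, ct in enumerate(ciphertexts):
            nonce = counter.to_bytes(12, "big")
            yield AESGCM(key).decrypt(nonce, ct, b"")    # raises InvalidTag on tampering

    for plaintext in decrypt_chunks(encrypt_chunks([b"chunk one", b"chunk two"])):
        print(plaintext)   # released chunk by chunk: convenient, and easy to misuse

Even with every chunk authenticated, nothing stops a consumer from acting on chunk one before discovering that chunk two never arrives; that is the kind of misuse the warnings are about.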