Hacker News

For something of this magnitude, I think the expected behavior would be to delay the release of the paper until the ecosystem had time to adapt. To prove the paper is valid (and to establish priority) you would offer to factor several "nothing up my sleeve" numbers -- like sha512("schnorr1") + sha512("schnorr2").
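A minimal sketch of how such "nothing up my sleeve" values could be derived (the labels "schnorr1"/"schnorr2" and the sum come from the comment above; treating the digests as integers is an assumption about how you'd turn them into factoring targets):

```python
import hashlib

def challenge_int(label: str) -> int:
    """Map a public label to a reproducible 'nothing up my sleeve' integer."""
    return int(hashlib.sha512(label.encode()).hexdigest(), 16)

# Anyone can re-derive these values, so an offer to factor numbers built
# from them would demonstrate the algorithm without revealing it early.
n1 = challenge_int("schnorr1")
n2 = challenge_int("schnorr2")
n = n1 + n2   # the sum proposed above
print(n.bit_length())
```

Since the derivation is public and deterministic, nobody can accuse the author of planting numbers with a secretly known factorization.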

As it is, if the algorithm presented is valid then this potentially compromises currently operating systems.



I do not agree that so-called “responsible disclosure” would or should be the expected behavior. I do understand how someone accustomed to corporate bug bounties and private security mailing lists may think so, though. Full disclosure is a perfectly reasonable strategy especially when we’re operating in the academic realm. Industry always takes years to catch up to academia anyway.


If this paper makes it possible to produce a working RSA cracker within a month, much of the world's high-value IT infrastructure is under imminent threat.

Yes, you can replace your SSH keys with elliptic-curve ones, and maybe adjust your TLS accepted algorithms. Even this is not always easy or cheap.

But other things that may rely on RSA (or triple RSA) may have trouble upgrading fast, and upgrading them at all is going to cost a lot.


Yeah, but the industry has also known for DECADES that RSA will likely be broken eventually. Not having reacted yet is not the researcher's fault.

We know that quantum computers can break it too, yet nobody really acts on it with any urgency. If suddenly there is a breakthrough and we reach that state within a year, then there will be no time to adapt either.

It all boils down to the general corporate attitude of not fixing catastrophic problems without precedent. We see the same with climate change, and once it hits hard, it will be too late to adapt.


I asked a few years ago (I don't remember the specifics) what the recommended post-quantum crypto was, and basically the answer was that there wasn't any vetted and proven post-quantum crypto. So I don't think you can blame the industry here.


There are some promising experiments. All of them are much worse than the algorithms we use today (including RSA) in practical terms, except that Shor's algorithm doesn't apply to them, and so they specifically fulfil the criterion of post-quantum public key crypto.

[For symmetric encryption quantum computers would only matter if they were pretty fast/cheap and we didn't have ready-to-go 256-bit symmetric crypto, but we do]

OpenSSH actually has an implementation of a reasonable contender for SSH. Google have experimented (in Chrome builds) with some of these contenders for TLS too. What you would likely want to do - since by definition these are relatively untried algorithms and so might be unsafe against adversaries with an existing not-at-all-quantum computer - is combine with an existing algorithm, most likely elliptic curve based, but RSA would be possible, under a "swiss cheese" model where you're only dead if an adversary penetrates all the layers.
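The "swiss cheese" combination described above can be sketched as a hybrid key-derivation step: the session key depends on both a classical and a post-quantum shared secret, so an adversary must break every layer. The placeholder byte strings and the HMAC-SHA256-based extract-and-expand combiner below are illustrative assumptions, not any particular deployed scheme:

```python
import hmac
import hashlib

def combine_secrets(classical: bytes, post_quantum: bytes) -> bytes:
    """Hybrid KDF: the session key depends on BOTH shared secrets, so
    breaking only one layer of the 'swiss cheese' gains the attacker nothing."""
    ikm = classical + post_quantum          # concatenate both shared secrets
    salt = b"hybrid-kex-demo"               # fixed context string (demo only)
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()            # extract
    return hmac.new(prk, b"session-key\x01", hashlib.sha256).digest()  # expand

# Placeholder secrets standing in for e.g. an elliptic-curve exchange
# output and a post-quantum KEM output.
key = combine_secrets(b"\x01" * 32, b"\x02" * 32)
print(key.hex())
```

The design point is that the combiner is only as weak as the strongest surviving input: even if the untried post-quantum algorithm falls to a classical attack, the classical layer still protects the key.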

But like I said, much worse. Given that there aren't any suitably large quantum computers (and it's always possible that we'll eventually figure out we just can't build usefully large quantum computers, just like we eventually found out that while you can travel faster than sound you can't travel faster than light) it would make no sense to deploy this today, even though it continues to make sense to do research Just In Case.


Wouldn't surprise me if there are classified quantum computers crunching numbers as we speak tbh.


I would be very surprised. That would involve keeping some huge leaps in basic research under wraps.


Could you expand on "much worse?" Are they slower, are they harder to implement?


Just to expand on the other answer you got.

Slower, and in a different complexity class from the classical ones (e.g. O(n^2) instead of O(n); I don't know the exact complexities of the top candidates right now, as a function of key length).

Larger messages, in that a signature requires tens to hundreds of MB instead of a few KB.

Larger public keys, again hundreds of MB instead of a few KB.

Larger private keys, as in a few to tens of MB instead of half a KB to a few KB.

But I guess the most relevant "much worse" is that people have much less confidence that those algorithms are secure, and they are complex enough to hide huge traps that make implementing RSA look like a walk in the park.


You seem to be way outdated. A lot of work has been put into reducing key sizes. Dunno about signatures, but for key exchange SIKE (https://sike.org/) uses keys of a few hundred bytes, comparable in size to RSA.


Sure, SIKE is not so much bigger than existing approaches, but it is much slower than they are.

Some of the other choices aren't so much slower but are far bigger, for example McEliece systems.

There are lots of opportunities to make different trade-offs - at least if all of them survive a bit more scrutiny by smart, motivated opponents - but they're all generally worse than what we have now, except that they resist Shor's algorithm.


That's true, a quick Google search tells me that an optimized SIKE implementation is ~30x slower than an optimized X25519 implementation. Still, according to this document, the running time of SIKEp751 is ~25 ms: https://sike.org/files/SIDH-spec.pdf

I don't think that's a problem for end users; you are not constantly generating keys. It will be a problem for servers handling thousands of connections per second, but I'm sure dedicated HSMs will appear if there is a need for them.

In any case, I'm not an expert in crypto, just a poor sysadmin-by-accident who likes reading about the latest security developments so the bad guys don't pwn my servers. And as you said, engineering is always full of trade-offs, let's see what the NIST PQC standardization process will decide.


slower

larger messages

larger public keys

larger private keys


Not only quantum computers. Every cryptographic method relies on the fact that there is no easy way to decide NP problems. And we don't really know whether P != NP (in fact, according to polls, about a quarter of researchers disagree), or, in case there actually is a polynomial algorithm, whether its exponent would make it intractable in practice.

So there might be a DOOMSDAY in the future, where all cryptography will cease to work, because somebody just figures out a way to decide NP problems quickly enough.


>Yeah, but the industry has also known for DECADES that RSA will likely be broken eventually.

Interesting. Reference?


It's generally accepted for almost all crypto protocols that they're going to be broken eventually (a few rare cases are provably secure: one-time pads and not much else).

If nothing else, quantum computers should break RSA in particular (the algorithm is already known and just waiting for hardware) and the writing has been on the wall there for a long time.
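The "already known" part refers to the number-theoretic reduction at the heart of Shor's algorithm, which can be demonstrated classically on a toy modulus; only the order-finding step needs a large quantum computer. A sketch (the brute-force loop below is exponential and stands in for the quantum subroutine):

```python
from math import gcd

def factor_via_order(n: int, a: int):
    """Classical run of the reduction Shor's algorithm exploits:
    find the multiplicative order r of a mod n, then try
    gcd(a^(r/2) +/- 1, n) to split n. Brute-force order-finding is
    exponential; the quantum speedup applies only to this step."""
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    if r % 2:                    # odd order: pick another base and retry
        return None
    x = pow(a, r // 2, n)
    p = gcd(x - 1, n)
    if 1 < p < n:
        return p, n // p
    return None

print(factor_via_order(15, 7))   # order of 7 mod 15 is 4, yielding (3, 5)
```

So the algorithmic recipe really is sitting on the shelf; what's missing is hardware that can run order-finding on 2048-bit moduli.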


> It's generally accepted for almost all crypto protocols that it's going to be broken eventually

Just like it was generally accepted that god exists. Those claims are of similar strength.

> If nothing else, quantum computers should break RSA in particular

Quantum computers with enough qubits do not exist and it's absolutely not obvious whether they will exist at all.


Cryptographic protocols have attacks designed against them all the time.


> triple RSA

Yeah, no. 3DES (Triple DES) is/was a thing, but Triple RSA is not.


That was the joke.


> If this paper allows to produce a working RSA cracker in a month

The blog post claims that people have been trying to reproduce his results for two years, though.


Note that (despite the ill-considered claim about RSA in the abstract) this isn't security research. It's a maths paper in a very important area of number theory. It doesn't disclose a patchable flaw in RSA; it claims that RSA is based on an unproven conjecture that turns out to be false. That RSA is vulnerable to fast factoring has been known since it was first implemented. That's not fixable, so delay serves little purpose, especially with a (if it turns out to be true) seminal result in number theory.


If you find something that can break all cryptography in the world, then I think your best option (even for your personal security) is to release everything publicly.


A one-time pad cannot be broken.


I always enjoy the reaction when I tell students that we've had provably unbreakable encryption since the 19th century.


And yet any time someone proposes to actually implement it, the response is always negative: "don't roll your own encryption" or "sharing and storing the pads is infeasible" or some such.

Seems like we should have figured out a way by now to use one-time pad encryption by default for critical paths, even if that requires new industries to distribute pads and guarantee their security.


OTP is perfectly secure, but (for many cases) perfectly useless. Transmitting the key safely is exactly as hard as transmitting the message safely.

They're useful in exactly one situation: when you have a temporary secure communication channel, and a long-term insecure channel. Then you can use the temporary channel to pre-share a lot of key material (say, a 1TB micro SD card carried covertly) and then use that for future messages. But that scenario is very rare.
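The pre-shared-pad scenario above can be sketched in a few lines: both endpoints hold an identical pool of random key material, and each pad byte is consumed exactly once. This is a toy with in-memory bytes standing in for the covertly carried SD card; the security argument requires the pad to be truly random, kept secret, and never reused:

```python
import secrets

class PadChannel:
    """One endpoint of a one-time-pad link over a pre-shared pool of
    key material. Each pad byte is consumed exactly once; any reuse
    destroys the perfect secrecy guarantee."""
    def __init__(self, pad: bytes):
        self.pad = bytes(pad)
        self.offset = 0

    def xor(self, data: bytes) -> bytes:
        if self.offset + len(data) > len(self.pad):
            raise RuntimeError("pad exhausted; no more secure messages")
        chunk = self.pad[self.offset:self.offset + len(data)]
        self.offset += len(data)   # never reuse these pad bytes
        return bytes(b ^ p for b, p in zip(data, chunk))

# Pre-share the same random pad over the temporary secure channel.
pad = secrets.token_bytes(1024)
alice, bob = PadChannel(pad), PadChannel(pad)
ct = alice.xor(b"attack at dawn")
print(bob.xor(ct))   # prints b'attack at dawn'
```

Note how the pad shrinks with every message: total traffic is capped by the key material carried on the temporary channel, which is exactly why the scenario is so rare.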


The rarity of that scenario is dictated by there rarely being a need for the security it offers. But that, in turn, is a function of our knowledge of cryptography, and may change over time. Who knows; perhaps someday we'll see something like what Vinge described in AFutD:

"Our main cargo is a one-time cryptographic pad. The source is Commercial Security at Sjandra Kei; the destination is the certificants' High colony. It was the usual arrangement: We're carrying a one-third xor of the pad. Independent shippers are carrying the others. At the destination, the three parts would be xor'd together. The result could supply a dozen worlds' crypto needs on the Net for ..."


Yup, that bit from Fire Upon the Deep is exactly what I was thinking of when I mentioned that bit about industries to safely transmit OTP data.

I don't really think the parent comment understands that there are creative ways around the difficulty of sharing secure pads. We don't need it for all data; but I think Vinge does hint at a totally viable means of sharing, and a scenario in which it's practical.


Quantum computing may eventually force us to move to OTPs. Of course it's going to be a pretty long time before we'll have quantum computers that can work across an 800-bit field.


It won't. Symmetric ciphers are holding up just fine against quantum computers.

Key exchanges would be annoying, but they are even more so for one-time pads.

But it will never come to even that: there are plenty of quantum-safe asymmetric cryptosystems around.


Yeah. I take up the key exchange problem (and Diffie-Hellman) as well as difficulty of achieving true randomness.


I've made MATLAB scripts that pull true randomness out of the least-significant bits of the output of an audio microphone listening to static... it's not impossible.

If we were serious about it, sharing flash drives and even using couriers could make it work pretty well.
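The microphone trick above can be sketched in Python (the original was MATLAB): take the least-significant bit of each sample, then apply von Neumann debiasing to strip out any bias. The simulated samples below stand in for an actual audio capture of static, which this sketch does not perform:

```python
import random

def lsb_bits(samples):
    """Least-significant bit of each (simulated) 16-bit audio sample."""
    return [s & 1 for s in samples]

def von_neumann(bits):
    """Debias a bit stream: map 01 -> 0, 10 -> 1, discard 00 and 11.
    The output is unbiased if input bits are independent, at the cost
    of throwing away at least half of them."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated noisy samples standing in for a microphone capture of static.
rng = random.Random(0)
samples = [rng.randrange(0, 65536) for _ in range(10_000)]
bits = von_neumann(lsb_bits(samples))
print(len(bits), sum(bits) / len(bits))
```

The hard part in practice isn't the extraction code, it's convincing yourself the noise source really is unpredictable to an adversary.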


I haven't read the paper or anything, but if the expected adaptation is to drop support for RSA, no reasonable amount of time will make it a seamless transition.

There are so many devices and pieces of software that are stuck on RSA that a heads-up of, say, 5 years would still result in a colossal mess; may as well have the mess now.


RSA has published the same set of "come and break me" keys for years, decades I think. Breaking those public (in both senses!) keys would be a good start.
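A nice property of those published challenge keys is that a claimed break is trivially verifiable: you just multiply the claimed factors back. The toy semiprime below is the textbook example, standing in for the real challenge moduli (which aren't reproduced here):

```python
def verify_factorization(n: int, p: int, q: int) -> bool:
    """A claimed break of a challenge modulus is trivially checkable:
    both factors must be nontrivial and multiply back to n."""
    return 1 < p < n and 1 < q < n and p * q == n

# Toy semiprime standing in for a published challenge modulus.
n = 3233                                  # = 61 * 53, the textbook RSA example
print(verify_factorization(n, 61, 53))    # True
print(verify_factorization(n, 7, 462))    # False: 7 * 462 = 3234 != n
```

That asymmetry (verification is instant, factoring is the hard part) is exactly why publishing challenge numbers is such a clean way to settle claims like this one.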



