Hacker News | Strilanc's comments

The newest transaction mechanism (Taproot; P2TR) exposes the public key of the receiver as part of the transaction. If it becomes more commonly used, the supply of bitcoins with exposed public keys will start going up again. See figure 5 of https://arxiv.org/pdf/2603.28846#page=14 .

Caution: that 10M estimate assumes gate error rates 10x lower than the ones assumed in the papers from TFA.

You are assuming that progress on factoring will be smooth, but this is unlikely to be true. The scaling challenges of quantum computers are very front-loaded. I know this sounds crazy, but there is a sense in which the step from 15 to 21 is larger than the step from 21 to 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 (the RSA100 challenge number).

Consider the neutral atom proposal from TFA. They say they need tens of thousands of qubits to attack 256 bit keys. Existing machines have demonstrated six thousand atom qubits [1]. Since the size is ~halfway there, why haven't the existing machines broken 128 bit keys yet? Basically: because they need to improve gate fidelity, do the system integration to combine various pieces that have so far only been demonstrated separately, and solve some other problems. These dense block codes have minimum sizes and minimum qubit qualities that must be satisfied for the code to function at all. In that kind of situation, gradual improvement can take you surprisingly suddenly from "the dense code isn't working yet so I can't factor 21" to "the dense code is working great now, so I can factor RSA100". Probably things won't play out quite like that... but if your job is to be prepared for quantum attacks then you really need to worry about those kinds of scenarios.

[1]: https://www.nature.com/articles/s41586-025-09641-4
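The sudden-transition intuition can be sketched with the standard error-suppression heuristic for quantum error-correcting codes: logical error rate ~ A * (p / p_th)^((d+1)/2). The threshold p_th and prefactor A below are illustrative constants I picked for the sketch, not values from TFA:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Heuristic logical error rate for a distance-d code at physical error rate p.
    Below threshold (p < p_th), raising d suppresses errors exponentially;
    above threshold, raising d makes things worse."""
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (0.02, 0.005, 0.002):  # physical gate error rates
    rates = [logical_error_rate(p, d) for d in (3, 11, 25)]
    print(p, [f"{r:.1e}" for r in rates])
```

At p = 0.02 the big code is worse than the small one; at p = 0.002 the same big code is astronomically better. A modest fidelity improvement flips the machine from "can't factor 21" to "the code works".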


The best proposal I have heard for rescuing P2SH wallets after cryptographically relevant quantum computers exist is to require vulnerable wallets to precommit to transactions a day ahead of time. The precommitment doesn't reveal the public key. When the public key must be exposed as part of the actual transaction, an attacker cannot redirect the transaction for at least one day because they don't have a valid precommitment to point to yet.

That’s kind of adorable. Would you need to pay to record a commitment? If so, how? If not, what stops someone from DoSing the whole scheme?

I don't think you're understanding how cryptography works. A commitment is basically a hash that is both binding and hiding; in this example it's probably easiest to think of it as a hash. So you hash your post-quantum public key (something like Falcon-512), sign that hash with your actual bitcoin private key (ECDSA, discrete-log, not quantum safe), and publish that message to the bitcoin network. Then quantum happens at some point and bitcoin needs to migrate, but where do funds go? You reveal the post-quantum public key, and then you can prove that funds from the ECDSA key should go there. From a technical perspective, this is a complete and foolproof system. DoSing isn't really a concern if you publish to the actual bitcoin network, and it's impossible for someone to use up the key space (2^108 combinations at least).

The reason this is a dumb idea is coordination and timing. When does the cutover happen? Who decides which transactions no longer count because they were "broken" by quantum computing? The idea is broken, but not from technical fundamentals.
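A minimal sketch of the commit-then-reveal idea described above, using SHA-256 over a random nonce plus the key as the commitment (the real proposal would also sign the commitment with the existing ECDSA key, which is omitted here):

```python
import hashlib
import os

def commit(pq_public_key: bytes) -> tuple[bytes, bytes]:
    """Commit to a post-quantum public key without revealing it.
    Returns (commitment, nonce); the random nonce makes the commitment hiding."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + pq_public_key).digest()
    return commitment, nonce

def reveal_ok(commitment: bytes, nonce: bytes, pq_public_key: bytes) -> bool:
    """Later, reveal the key and nonce; anyone can check the binding."""
    return hashlib.sha256(nonce + pq_public_key).digest() == commitment

pq_key = b"falcon-512 public key bytes (placeholder)"
c, n = commit(pq_key)
assert reveal_ok(c, n, pq_key)          # the honest reveal verifies
assert not reveal_ok(c, n, b"other")    # a substituted key does not
```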


The DoS attack in this scenario is someone just submitting reasonable-looking but ultimately bad precommitments as fast as possible. The intuition is that precommitments must be hard to validate because, if there was an easy validation mechanism, you would have just used that mechanism as the transaction mechanism. And so all these junk random precommitments look potentially legitimate and end up being stored for later verification. So all you have to do to take down the system is fill up the available storage with junk, which (given the size of bot networks and the cost of storing something for a day) seems very doable.

If the question is storage, bitcoin itself provides a perfectly good mechanism. idk the exact costs but it'd be in the range of ~$0.45 to store a commitment. That's cheap enough to enable good users with small numbers of keys but also expensive enough to prevent spam. It's kind of the whole point of blockchains.

As for verification being expensive, it sounds like you don't know the actual costs. It's basically a hash. Finding the pre-image of a hash is very expensive, to the point of being impossible. Verifying that a pre-image run through the hash function produces a given hash is extremely cheap. That's the whole point of one-way functions. Bitcoin itself is at ~1000 EH/s (exahashes per second).

Again, this isn't a technical problem. It's a coordination problem.


Yes, that would be a concern. You could require a proof of work to submit a precommitment, so that DoSing was at least expensive to do. You could have some sort of deposit mechanism, where a precommitment would lock down 0.1 bitcoins (from a quantum-secure wallet) until the precommitment was used. I admit I'm glad I don't have to figure out those details.
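The proof-of-work idea can be sketched in a few lines. The difficulty parameter here is an arbitrary placeholder; a real deployment would tune it against spam economics:

```python
import hashlib
from itertools import count

DIFFICULTY = 12  # required leading zero bits; illustrative, not a real parameter

def find_pow_nonce(commitment: bytes) -> int:
    """Grind nonces until sha256(commitment || nonce) has DIFFICULTY leading zero bits.
    Expensive for the submitter (~2^DIFFICULTY hashes on average)."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        h = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def pow_valid(commitment: bytes, nonce: int) -> bool:
    """Cheap for everyone else: one hash to verify."""
    h = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - DIFFICULTY))

nonce = find_pow_nonce(b"example precommitment")
assert pow_valid(b"example precommitment", nonce)
```

The asymmetry is the point: junk precommitments cost the attacker ~2^DIFFICULTY hashes each, while the network pays one hash to check them.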

24-hour latency to make a payment? What is this, the 20th century?

This is for rescue, not for payment. Once you've moved the coins to a quantum-secure wallet, the delay would no longer be needed.

...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.


> ...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.

I don't know why anyone f's around with crypto anymore. So many caveats, such a scammy ecosystem. It just doesn't seem worth the trouble to support a ransomware and money laundering tool.


> [0.1% gate error rate] is still wildly out of reach

This is false. When Fowler et al assumed 0.1% gate error rates for their estimates in 2012 [0], that was audacious. Now it's frankly a bit overly conservative. All the big architectures are approaching or surpassing 0.1% gate error rates.

From 2022 to 2024, the Google team improved its mean two-qubit gate error rate from 0.6% [1] to 0.4% [2]. Quantinuum's Helios has a two-qubit gate error rate of 0.08% [3]. IBM has Heron processors available on their cloud service with two-qubit gate error rates ranging from 0.2% to 0.7% [4]. Neutral atom machines have demonstrated 0.5% gate error rates [5].

[0]: https://arxiv.org/abs/1208.0928

[1]: fig 1c of https://arxiv.org/pdf/2207.06431

[2]: fig 1b of https://arxiv.org/pdf/2408.13687

[3]: https://arxiv.org/abs/2511.05465

[4]: https://quantum.cloud.ibm.com/computers?processorType=Heron (numbers may vary as the website is not static)

[5]: https://arxiv.org/abs/2304.05420


I can think of a case where it turned out that there was some aspect of the noise performance that made the technology unsuitable for running Shor's algorithm. So would one of the presented low noise approaches actually work for Shor's?

What do you mean? The original 2019 supremacy experiment was eventually simulated, as better classical methods were found, but the followups are still holding strong (for example [4] and [5]). There was recently a series of blog posts by Dominik Hangleiter summarizing the situation: [1][2][3].

[1]: https://quantumfrontiers.com/2026/01/06/has-quantum-advantag...

[2]: https://quantumfrontiers.com/2026/01/25/has-quantum-advantag...

[3]: https://quantumfrontiers.com/2026/02/28/what-is-next-in-quan...

[4]: https://arxiv.org/abs/2303.04792

[5]: https://arxiv.org/abs/2406.02501


Minor update: Dominik condensed the blog posts into a pre-print: https://arxiv.org/abs/2603.09901


Agree. Scott is exactly correct when he just straight calls it crap.

It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.


I believe the appropriate technical term is "bollocks" rather than "crap", see https://www.cs.auckland.ac.nz/~pgut001/pubs/bollocks.pdf.


That slide deck is arguing that even correct work on quantum attacks should be treated as a negligible priority or a distraction. TFA is complaining that JVG isn't even correct. They are pretty different concerns.

To be clear, I think that slide deck will be looked back upon as naive. In particular, it makes the classic mistake of assuming the size of number factored should be growing smoothly. That's naive because 15 is such a huge cost outlier and because quantum error correction has frontloaded costs. See [1] and [2] for details.

[1]: https://algassert.com/post/2500

[2]: https://algassert.com/post/2503


Well, the reviewers missed it too.


What reviewers? It's not a peer reviewed article.


Ok.


Honestly I think he was remarkably polite given the sort of crap we are talking about.


The very first demonstration of factoring 15 with a quantum computer, back in 2001, used a valid modular exponentiation circuit [1].

The trickiest part of the circuit is that they compile conditional multiplication by 4 (mod 15) into two controlled swaps. That's a very elegant way to do the multiplication, but most modular multiplication circuits are much more complex. 15 is a huge outlier in how easy the modular exponentiation actually is, which is why 15 is so far the only number factored by a quantum computer while meeting the bar of "yes, you have to actually do the modular exponentiation required by Shor's algorithm".

[1]: https://arxiv.org/pdf/quant-ph/0112176#page=15
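Why that compilation works can be checked directly: multiplying by 4 mod 15 is exactly a two-position rotation of the 4-bit register, and a rotation by two is just swapping bits (0,2) and (1,3), which a quantum circuit implements as two (controlled) swap gates:

```python
def rotl2_4bit(x: int) -> int:
    """Rotate a 4-bit value left by two positions, i.e. swap bit pairs (0,2) and (1,3).
    On a quantum register this costs just two (controlled) SWAP gates."""
    return ((x << 2) | (x >> 2)) & 0b1111

# Multiplication by 4 mod 15 coincides with this bit rotation on all residues:
for x in range(15):
    assert (4 * x) % 15 == rotl2_4bit(x)
```

No arithmetic circuit is needed at all, which is what makes 15 such a cost outlier.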


Would other Mersenne numbers admit the same trick? If so, factoring 2047 would be really interesting to see. It's still well within the toy range, but it's big enough that it would be a lot easier to believe the quantum computer was doing something (15 is so small that picking an odd number greater than 1 and less than sqrt(15), i.e. 3, is guaranteed to be a correct factor).


No, 15 is unique in that all multiplications by a known constant coprime to 15 correspond to bit rotations and/or bit flips. For 2047 that only occurs for a teeny tiny fraction of the selectable multipliers.

Shor's algorithm specifies that you should pick the base (which determines the multipliers) at random. Deliberately picking a rare base that happens to be cheap to implement really does start to overlap with already knowing the factors when you build the circuit. By far the biggest cheat you can do is to "somehow" pick a number g such that g^2 = 1 (mod n) but g isn't 1 or n-1, because that's exactly the number Shor's algorithm is looking for, and the whole thing collapses into triviality.
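To see why such a g is "exactly the number Shor's algorithm is looking for": given a nontrivial square root of 1 mod n, two gcd computations finish the factoring. This classical post-processing step is standard; the hard quantum part is finding g without cheating:

```python
from math import gcd

def factor_from_sqrt1(g: int, n: int) -> tuple[int, int]:
    """Given g with g^2 = 1 (mod n) and g not in {1, n-1},
    g^2 - 1 = (g-1)(g+1) = 0 (mod n) splits n across the two factors,
    so gcd(g-1, n) and gcd(g+1, n) are both nontrivial."""
    assert (g * g) % n == 1 and g % n not in (1, n - 1)
    return gcd(g - 1, n), gcd(g + 1, n)

# For n = 15, g = 4 is such a nontrivial square root: 4*4 = 16 = 1 (mod 15).
print(factor_from_sqrt1(4, 15))  # (3, 5)
```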


For each chick they do 24 trials divided into 4 blocks with retraining on the ambiguous shape and actual rewards after each block. During the actual tests they didn't give rewards. In figure 1 they show the data bucketed by trial index. It's a bit surprising it doesn't show any apparent effect vs trial number, e.g. the first trial after retraining being slightly different.

I have to admit I'm super skeptical there's not some stupid mistake here. Definitely thought provoking. But I wish they'd kept iteratively removing elements until the correlation stopped happening, so they could nail down causation more precisely.


I do agree, my skepticism level rises extremely high for any experimental psychology study. There are just so many ways to bias results, in addition to the "do enough experiments and one of them will get a statistically unlikely result" problem.

This group does a lot like this https://www.dpg.unipd.it/en/compcog/publications ... so it's tempting to think they keep trying things until something odd happens (kind of like physicists who look for fifth forces: eventually they find something odd, but often it's just an experimental issue they need to understand further).


Wasn't this study immediately debunked due to bad statistical methods? See https://zenodo.org/records/18002186

> Using simple simulations, we show that this pattern arises naturally from collider bias when selection into elite samples depends on both early and adult performance. Consequently, associations estimated within elite samples are descriptively accurate for the selected population, but causally misleading, and should not be used to infer developmental mechanisms.


Is that paper in print? I can't seem to find if it was peer reviewed.

If the paper is true, then, yeesh! That's a pretty big miss on the part of Güllich et al.

Reading through the very short paper there, it seems to not have gone through review yet (typos, misspellings, etc). Also, it's not clear whether the data in the tables or the figure are from Güllich's work or are simulations meant to illustrate their idea ("True and estimated covariate effects in the presence of simulated collider bias in the full and selected samples"). Being clearer about where the data comes from would help the argument, but I may have just missed a sentence.

I'll be interested to see where this goes. That Güllich managed to get the paper into Science in the first place lends credence to the idea that they considered something as simple as Berkson's paradox and accounted for it. It's not every day you get something as 'soft' as that paper into Science, after all. If not, then wow, standards for review really have slipped!

