An 11-qubit atom processor in silicon with all fidelities from 99.10% to 99.99% (nature.com)
86 points by giuliomagnifico 4 days ago | 58 comments




Silicon is not one of the leading modalities for quantum computers, but it has progressed a lot in the past ~2-3 years. Here are a few key recent advancements:

- Intel can now do 2D qubit arrays, which means a surface code can be run on these devices: https://arxiv.org/abs/2412.14918

- HRL can now do 2D arrays as well: https://arxiv.org/abs/2502.08861

- They are solving the wiring problem: https://www.nature.com/articles/s41565-023-01491-3

- Their interconnects are high fidelity: https://www.nature.com/articles/s41586-025-09827-w


The engineering at those scales is pretty magical, isn't it! Getting a whole bunch of individual atoms exactly where they want them. I wonder what the success rate is, i.e. how many they build to get one working.

Usually they shoot atoms at the substrate at random and then just search for a spot (among thousands) that happens to have the configuration they want. Still pretty amazing.

Can they do that here? They've got quite a few sets of 4-5 atoms which they've interconnected, so that's a lot to get by shotgunning it. I'd assumed they were using something like an STM to nudge the atoms around.

The “precision manufacturing” reference in the paper is to this 2012 paper about an STM placement technique. [0]

[0] https://www.nature.com/articles/nnano.2012.21


Hmm, I remember my electron microscopy prof being very excited about his ability to manipulate single atoms exactly where he wanted them ~10 years ago.

I'd have assumed holography has gotten more common and able to operate on bigger volumes.


This being a research paper, the rate is 1.0. They built one, then tinkered until it worked, then published.


This is a press release meant to accompany the scientific work shown in the actual source/link. I don't mean to be argumentative; I just would have liked back the time I spent reading it after reading the Nature version. It's just "go read Nature" + 3 bullet points + anodyne CXO quotes.

What's closer to practical application these days, photonic/optical computing or quantum computing (silicon or not)?

Photonic computing has a lot of practical applications for signal transfer already.

Basically, anytime we send a signal across a large fiber optic cable, we need to convert the signal from light back to electricity, and that requires some level of photonic computing. It's used at scale today. https://www.ebsco.com/research-starters/science/photonics

However, I suspect that you mean photonic computing where a computer-on-chip device uses photons instead of electrons to communicate. That, as far as I know, is still in the research phase.


Thank you, I didn't know that. And you were right, I meant photons instead of electrons being used at the processor logic-gate level.

Quantum computers can almost, but not quite, factor numbers bigger than 10.

Time for git to break all workflows by showing huge alerts if a server is using crypto not proven quantum-proof!


Can it run Shor's?

No, and Shor's is not a good benchmark for these early quantum computers: https://algassert.com/post/2500

That's a 404; here's a working link: https://algassert.com/post/2500

Oops, updated. Thanks!

I'm not sure you can really call it "early days" anymore. The first quantum computer was in 1998. That's 27 years ago.

"early days" means that the 1998 computer didn't have qubits that were below the error correction threshold. Now we have hundreds of qubits below threshold. We'll need millions of qubits like these for quantum computing to be useful. If that take decades, this is the "early days" relatively.

It's not only early days in hardware, it's early days in practical applications as well: https://arxiv.org/abs/2511.09124


I admit it's early days in practical application. But in hardware definitely not.

Depends on what we mean by "early days on hardware".

If we mean "we've have been working on this for almost 3 decades. That's a very long time to be working on something!". I agree.

If we mean "We just now only have a few logical qubits that outperform their physical counterparts and we'll need thousands of these logical qubits to run anything useful" then we are still in the early days.


Can you give a bit more information on 100s of qubits below threshold? I wasn't aware of 100s...

https://www.nature.com/articles/s41586-025-09848-5 performs CZ gates on up to 256 qubits with fidelities of 99.5%, which is good enough to run surface codes below threshold.
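For a sense of what "below threshold" buys you, here's a sketch using the standard approximation p_logical ≈ A * (p/p_th)^((d+1)/2) for a distance-d surface code. The prefactor A and the ~1% threshold are assumptions for illustration, not numbers from that paper.

    # Sketch: sub-threshold error suppression in a surface code.
    # Standard approximation: p_logical ~ A * (p / p_th) ** ((d + 1) / 2).
    def logical_error_rate(p, d, p_th=0.01, A=0.1):
        return A * (p / p_th) ** ((d + 1) // 2)  # d odd, so // is exact

    p = 0.005  # ~99.5% two-qubit gate fidelity
    for d in (3, 5, 7, 11):
        print(d, logical_error_rate(p, d))
    # Each +2 in code distance multiplies the error by p/p_th (0.5 here),
    # so the logical error rate falls exponentially with distance.

That exponential suppression is the whole point of being below threshold: once p < p_th, you can buy arbitrarily low logical error rates by spending more physical qubits.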

Maybe the real quantum computing was the friends we made along the way

It should be able to factor 15.
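For the curious, here's what factoring 15 actually involves. Shor's reduces factoring to period finding; the quantum computer only does the period-finding step, and everything else is classical. A toy sketch with the period brute-forced classically (a = 7 is just one valid choice of base):

    # Toy sketch of Shor's reduction for N = 15.
    # Find the period r of a^x mod N, then gcd(a^(r/2) +/- 1, N)
    # gives the factors. Only the period finding is quantum.
    from math import gcd

    N, a = 15, 7
    r = 1
    while pow(a, r, N) != 1:  # brute-force the period classically
        r += 1
    # Here r = 4: even, and a^(r/2) mod 15 = 4, which is not -1, so it works.
    print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # -> 3 5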

So can a 10-year-old. The breakthrough I'm waiting for is factoring something I can't do in my head.


Apart from being a fun read, I learned that I should be skeptical of papers claiming to have factored certain numbers. Thanks.

How much money or time do they owe you, though?

But it can't, because the error rate is still too high even for the most trivial examples.

Ahh yes another quantum processor that creates noise.

This processor is state-of-the-art for silicon quantum computing. It's where modalities like superconducting were 15 years ago, and superconducting does not create noise these days: https://www.nature.com/articles/s41586-024-08449-y

Gate fidelity significantly less than 100% is always noisy, regardless of the qubit itself.
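To make that concrete, a back-of-the-envelope model treating gate errors as independent (real noise isn't exactly, but it's close enough for intuition):

    # Crude model: a circuit of n gates at per-gate fidelity f
    # succeeds with probability roughly f ** n.
    for f in (0.991, 0.999, 0.9999):  # roughly the article's fidelity range
        print(f, f ** 1000)
    # 0.991 -> ~0.01%, 0.999 -> ~37%, 0.9999 -> ~90% after 1000 gates.
    # Even 99.99% fidelity decays fast at useful circuit depths, which
    # is why error correction is needed for anything nontrivial.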

Sure, I'm not disagreeing that this processor is noisy, just providing enough context to say that it's fine. Historically, these devices improve enough to be under threshold, at which point it doesn't matter that they're noisy, because error correction protocols can be run on top of them.

Quantum Computing is a scam.

I have not seen any significant progress or breakthroughs in the QC field at all.

If the only goal for QC is to try to run Shor's algorithm or to "try to break the bitcoin blockchain" then it is worse than useless.


QC progress happens super-exponentially: https://news.ycombinator.com/item?id=46383233

Graphs aren't telling me anything.

What are the real-world use cases now, today? The only thing I see in the QC space is QC stocks and funding paying for scientific experimentation, which isn't a real-world application.

Do I have to wait 15 to 30 years for a series of real-world-changing breakthroughs in things I can already do on an NVIDIA GPU?

That doesn't sound exponential at all; in fact, that sounds very, very bearish.


> The only thing I see in the QC space, are QC stocks and funding paying for the employment of scientific experimentation

Then invest accordingly, and later reinvest your winnings in a different direction.


The graphs aren't telling you that QC hardware is improving at a super-exponential pace?

There are no real world use cases today. The hardware is not advanced enough yet, but it's improving exponentially.


I think the point being made is that the graphs don't show progress toward real-world applications. Being 99.9999999% or 0.000001% of the way to a useful application could both be argued as no progress, given the stated metric. Is there a guarantee that these things can and will work given enough time?

> Is there a guarantee that these things can and will work given enough time?

Quantum theory predicts that they will work given enough time. If they don't work, there is something about physics that we are missing.


Quantum theory says that quantum computers are mathematically plausible. It doesn't say anything about whether it's possible to construct a quantum computer in the real world of a given configuration. It's entirely possible that there's a physical limit that makes useful quantum computers impossible to construct.

Quantum theory says that quantum computers are physically plausible. Quantum theory lies in the realm of physics, not mathematics. As a physical theory, it makes predictions about what is plausible in the real world. One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.


> One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

Quantum theory doesn't predict that it's possible to build a large scale quantum computer. It merely says that a large scale quantum computer is consistent with theory.

Dyson spheres and space elevators are also consistent with quantum theory, but that doesn't mean that it's possible to build one.

Physical theories are subtractive: something that is consistent with the lowest levels of theory can still be ruled out by higher levels.


Good point. I didn't sufficiently delineate what counts as a scientific problem and what counts as an engineering problem in QC.

Quantum theory, like all physical theories, makes predictions. In this case, quantum theory predicts that if the physical error rate of qubits is below a threshold, then error correction can be used to increase the quality of a logical qubit to arbitrarily high levels. This prediction could be false. We don't yet know all of the potential noise sources that could prevent us from building a quantum logic gate of similar quality to a classical logic gate.

Building thousands of these logical qubits is an engineering problem, similar to Dyson spheres and space elevators. You're right that validating the lower level of building one really good logical qubit doesn't mean we can build thousands of them.

In our case, even the lower levels haven't been validated. This is what I meant when I implied that the project of building a large-scale QC might teach us something new about physics.


> The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.

If "this experiment" is trying to build a machine, then failure doesn't give much evidence against the theory. Most machine-building failures are caused by insufficient hardware/engineering.


Quantum theory predicts this: https://en.wikipedia.org/wiki/Threshold_theorem. An experiment can show that this prediction is false. This is a scientific problem, not an engineering one. Physical theories have to be verified with experiments. If the results of the experiment don't match what the theory predicts, then you have to do things like re-examine the data, revise the theory, etc.

But that theorem being true doesn't mean "they will work given enough time". That's my objection. If a setup is physically possible but sufficiently thorny to actually build, there's a good chance it won't be built ever.

In the specific spot I commented, I guess you were just talking about the physics part? But the GP was talking about both physics and physical realization, so I thought you were talking about the combination too.

Yes we can probably test the quantum theory. But verifying the physics isn't what this comment chain is really about. It's about working machines. With enough reliable qubits to do useful work.


You're right. I didn't sufficiently separate experimental physics QC from engineering QC.

On the engineering end, the question of whether a large-scale quantum computer can be built is leaning toward "yes" so far. DARPA QBI https://www.darpa.mil/research/programs/quantum-benchmarking... was created to answer this question, and 11 teams have made it to Stage B. Of course, only people who believe DARPA will trust this evidence, but that's all I have to go on.

On the application front, the jury is still out for applications that are not related to simulation or cryptography: https://arxiv.org/abs/2511.09124


Sounds like a pursuit where we win either way

Publishing findings that amount to an admission that you and others spent a fortune studying a dead end is career suicide and guarantees your excommunication from the realm of study and polite society. If a popular theory is wrong, some unlucky martyr must first introduce incontrovertible proof and then humanity must wait for the entire generation of practitioners whose careers are built on it to die.

Quantum theory is so unlikely to be wrong that if large-scale fault-tolerant quantum computers could not be built, the effort to build them would not be a dead end, but instead a revolution in physics.

Unless the overall cost is too high, but yes it's definitely worth pursuing as far as we currently know.

It’s not, but I can understand how it might look that way to a tech industry professional used to dealing with scams (indeed, there are lots of scam-adjacent startups with quantum-flavored branding). Real science and engineering are just very difficult and take a long time. You can go to the arXiv, read the papers, and see the progress and breakthroughs that are made every year. But scientists are relatively honest, so even their breakthroughs are incremental.

This does not explain something like the Manhattan Project.

It's not necessarily time that real science and engineering take, but resources.

There's lots of fast progress happening in areas that get a lot of resources invested in them, and much slower progress in areas that don't have financial champions. Moving fast doesn't necessitate that something is a scam.


Sorry, I'm not sure I follow what the disagreement is? I don't claim that moving fast necessitates that something is a scam.

In any case (and I don't think this bears on your point, it's just something I'd like to add), building a quantum computer is very unlike building a nuclear fission device. Echoing my other comments here, it's almost misleading to call it "building a quantum computer," as that puts people in mind of 'unlocking' some single discrete technology in a strategy game tech tree. It's not that at all; it's a huge umbrella of (in many cases) extremely sophisticated technologies. The Manhattan project, as complex and astonishing a feat as it was, was a little closer to the strategy-game vision of research in that way. There's a reason it was possible in 3-4 years in the 1940s!


Maybe I should clarify that this isn't meant in a combative way, although it is in defense of scientists, who shouldn't be liable for other people's marketing.

Here's what's going on here: there's a way that people talk past each other, because they mean different things by the same words, because they ultimately have different cultures and values.

There's one kind of person (let's call them "technologists," but I'm sure there's a better word) who feels deeply and intuitively that the point of a technology is to Create Shareholder Value. There's another kind (let's call them "scientists") who feels deeply and intuitively that the point of a technology is to Evince That We Have Known The Mind Of God. I think that these two kinds of people have a hard time understanding one another. Sometimes they don't realize, as strange as it sounds, that the other exists.

There are many scientists who have been working on problems falling loosely under the umbrella of "quantum computing" for a few decades now. Most of them are not literally Building A Quantum Computer, or even trying to. Not exactly. For this reason it might be better to call the field "things you can do with coherent control of isolated quantum systems" than "quantum computing." There are many strange and wonderful things that you can see when you have good coherent control of isolated quantum systems. The scientists are largely interested in seeing those things, in order to Evince That We Have Known The Mind Of God. One sort of strange and wonderful thing, way down the line, is maybe factoring big numbers? The scientists honestly call that a "goal," because it would be strange and wonderful indeed. But it's not really the goal. The scientists don't really care about it for its own sake, and certainly not for the sake of Creating Shareholder Value. It's just one thing that would Evince That We Have Known The Mind Of God.

Incidentally, over those last couple of decades, we've gotten way better at coherent control of isolated quantum systems, and have, in many ways, succeeded at Evincing That We Have Known The Mind Of God again and again. We have made, and continue to make, amazing progress. One day we probably will factor large numbers. But that's not really the goal for the scientists.

On the other hand, there are "technologists" who hear about the goal of factoring large numbers, take this to be, in some sense, "the point" (that is, a proxy for Creating Shareholder Value), and expect it to happen in short order. They raise lots of money and promise a payout. They might act in very "commercial" ways, telling people what things are going to happen when, using an idiosyncratic, personal definition of truth. This is understood and expected in commercial situations. They and their creditors may be disappointed.

The trouble is that it's hard for people on the outside to tell the difference between the scientists and the technologists! This makes things confusing. On some level, this is a failure of science communication: laypeople hear about breakthroughs (from scientists), then don't see the promises of technologists immediately fulfilled; they get confused and start to think the scientists are lying. But they're not! They're different people.

Another thing that laypeople don't really know is that there are commercially-useful and near-commercially-useful technologies using coherent control of isolated quantum systems. They've come out of the same research program, but aren't strictly "quantum computing." I don't know why it's not more widely known that quantum sensors made out of qubits (usually a different kind of qubit than the kind used for computing applications!) are on the market today, and beat other sensors along a variety of axes.

This might sound like goalpost-moving, but I promise you it's not. If it sounds like goalpost-moving, it's because there are two different relevant groups of people you hadn't previously resolved!


Here's an analogous situation that might clarify the dynamic somewhat:

1. Sam Altman: [tells a tall tale to raise 100 quintillion dollars]

2. Outside observer: "hey, these so-called AI researchers have been pulling the wool over our eyes! They've promised AGI for decades. Where's my robot maid?"

3. Researcher who's been making steady progress in a niche subfield of optimization algorithms at Nebraska State University for the last 20 years: "huh?"



