Feasibility of Shor's Algorithm in serious doubt

author: Robert R. Tucci
date: Dec 21, 2001


This article refers to factoring numbers that can't be factored quickly by a contemporary classical computer, numbers with, say, 100 to 200 digits. Recently, IBM built a "revolutionary" (in the romper room) 7-qubit NMR computer that can factor 15 into 3 times 5. Unfortunately, the probability of error for a quantum computer (QC) grows exponentially with the number of qubits. Thus, the probability of error for a 7-qubit NMR computer is very small compared with the probability of error for a QC with the many hundreds of qubits that would be required to thwart present-day RSA cryptography. Pretending that, because we can factor 15 with an NMR computer, we are all but guaranteed to factor a 100-digit (or even a 10-digit) number with a QC is like pretending that, because we can predict the weather one day in advance, we can predict it accurately one month in advance.
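
To put a rough number on this scaling argument, here is a back-of-the-envelope sketch (my own illustration; the independent-error toy model and the 1% per-qubit error rate are assumptions chosen only to show the trend, not measured figures). If each qubit independently comes through the computation intact with probability 1 - p, the chance that the whole register is uncorrupted falls off exponentially with the number of qubits:

# Toy model (illustration only): each qubit independently survives the whole
# computation with probability (1 - p_error), so the chance that NO qubit is
# corrupted decays exponentially with the number of qubits.
def success_probability(n_qubits, p_error=0.01):
    """Return (1 - p_error)**n_qubits, the probability of an error-free register."""
    return (1.0 - p_error) ** n_qubits

for n in (7, 100, 1000):
    print(f"{n:4d} qubits -> uncorrupted-register probability ~ {success_probability(n):.3g}")

With a 1% per-qubit error rate the 7-qubit machine still succeeds about 93% of the time, but a 1000-qubit register comes through intact only a few times in 100,000 runs; whether error correction can beat this naive scaling in practice is exactly what the thread quoted below debates.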

Note also that, as discussed in the scientific literature, NMR computers do not scale beyond about 10 qubits, and they do not produce quantum entanglement. Entanglement is widely believed to be one of the main reasons why quantum computers can outperform classical ones.

An interesting thread called "quantum computing=bad science?" ran in Oct/Nov 2001 in the sci.physics.research newsgroup. You can read the whole thread here (part 1, Oct messages) and here (part 2, Nov messages). Peter Shor posted the following very instructive message in that thread:


From: shor@research.att.com (Peter Shor)
Newsgroups: sci.physics.research, sci.physics
Subject: Re: quantum computing=bad science?
Date: 10 Nov 2001 22:50:06 GMT

In article <9samb3$768$1@glue.ucr.edu>, baez@galaxy.ucr.edu (John Baez) writes:
In article <3BE824F8.D5A05B8E@aic.nrl.navy.mil>, Ralph Hartley <hartley@aic.nrl.navy.mil> wrote: I believe there are actually "threshold" results that show that if the "error rate" (which includes unwanted entanglements) is lower than a certain level, then there are codes that reduce it as close to 0 as needed.

When I last checked, these results only applied to errors of a certain special sort, which does not include all the errors to which quantum computers will actually be subject - which is why I don't believe quantum computers will ever be practical. There was a discussion of this a while back in which Peter Shor seemed to agree with me on this point. Of course, there could have been more progress on quantum error correction since then. Anyway, anyone interested should check out the s.p.r. archives: http://www.lns.cornell.edu/spr/ To find this discussion it probably makes sense to search for posts by Peter Shor, since there aren't very many.

I've been following this discussion for a little while, and now that John has mentioned my name, I think I'll say something.

It currently appears that these threshold results for fault-tolerant computing should hold if the "errors," including unwanted entanglements, are either independent or mildly dependent in that the probability of errors that affect k qubits drops off exponentially in k. (Although I think the only rigorous proof written down so far is Dorit Aharonov and Michael Ben-Or's paper, quant-ph/9906129, which assumes independence.) Robert Alicki and the Horodeckis have a paper (quant-ph/0105115) which shows that qubits coupled to a common quantum reservoir do not satisfy this assumption. However, my belief is that this assumption is stronger than what is really required for fault-tolerant quantum computing.

This, of course, leaves open the question of whether fault-tolerant quantum computing is possible for real-world physical error models. In fact, when you think about it, you realize that even classical computers don't strictly satisfy the assumption about independence of errors. For example, a power outage (or meteor strike, EM pulse, etc.) can affect all the transistors on a P.C. So the real question is: can we get quantum computers with low enough error rates so that in practice they solve problems that we want solved?

I don't have any answers, but I think that to properly address this question you need to take a lot of quantum mechanics and real-world physics into account. Nobody has yet found a fundamental physical principle that proves quantum computers can't work (as the second law of thermodynamics proves that perpetual motion machines can't work), and it's not because smart people haven't been looking for one.

Peter
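
To make the independence assumption in Shor's message concrete, here is a small toy comparison (my own illustration, not part of the thread; the values of p and q are arbitrary). Under independent errors with per-qubit probability p, the chance that any particular set of k qubits all err is p**k, which drops off exponentially in k; in a fully correlated toy model, where coupling to a common reservoir makes all qubits fail together with probability q, there is no such drop-off:

# Toy comparison (illustration only): independent vs. fully correlated errors.
p, q = 1e-3, 1e-3   # arbitrary per-qubit and common-mode error probabilities

def independent_k_qubit_error(k, p=p):
    """P(a given set of k qubits all err) = p**k: exponentially small in k."""
    return p ** k

def correlated_k_qubit_error(k, q=q):
    """Fully correlated toy model: any k qubits err together with probability q."""
    return q

for k in (1, 2, 5):
    print(f"k={k}: independent {independent_k_qubit_error(k):.0e}, correlated {correlated_k_qubit_error(k):.0e}")

Roughly speaking, the known fault-tolerance proofs lean on the exponential drop-off in the first column, and the Alicki/Horodecki common-reservoir model is a case where that drop-off is not guaranteed.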


From: tucci@ar-tiste.com (Robert Tucci)
Newsgroups: sci.physics.research, sci.physics
Subject: Re: quantum computing=bad science?
Date: Thu, 15 Nov 2001 23:50:24 +0000 (UTC)

SHOR> It currently appears that these threshold results for fault-tolerant computing should hold if the "errors," including unwanted entanglements, are either independent or mildly dependent in that the probability of errors that affect k qubits drops off exponentially in k. (Although I think the only rigorous proof written down so far is Dorit Aharonov and Michael Ben-Or's paper, quant-ph/9906129, which assumes independence.) Robert Alicki and the Horodeckis have a paper (quant-ph/0105115) which shows that qubits coupled to a common quantum reservoir do not satisfy this assumption. However, my belief is that this assumption is stronger than what is really required for fault-tolerant quantum computing.

RT> I'm no expert, so I may be very wrong, but my impression is that the Aharonov/Ben-Or (AB) paper refers to a different type of noise than the Alicki/Horodecki (AH) paper. Both kinds of noise will always be present in any QC, to some extent. There is always going to be some coupling, no matter how tenuous, between the QC and its external environment (a sea of harmonic oscillators), so AH noise has to be accounted for, somehow, by any complete theory of QC errors. It seems (??) that fault-tolerant error correction can fight AB noise but not AH noise. AH noise will therefore accumulate during a calculation, despite error correction. And if there is enough AH noise (where enough might not be very much), the output of a QC performing Shor factorization will be ruined. The often-quoted 10^-5 error threshold for Shor factorization, as difficult as it will be to achieve, only refers to the more forgiving AB type of noise.

Despite this pessimistic assessment, I still think that QCs can perform useful algorithms, but of a different sort than those currently envisioned. The currently envisioned algorithms (Deutsch-Jozsa, Simon-Shor, Grover) are very few and require extremely high precision (a consequence of the fact that they demand nearly deterministic outputs). I conjecture that there are other very useful QC algorithms (such as those related to weak AI and decision theory) with non-deterministic outputs. These algorithms are more numerous and require much less precision.
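
To put rough numbers on the accumulation worry raised in the reply above (the AH-type noise that, in this picture, error correction does not remove), here is a sketch; the per-gate residual error of 10^-10 and the gate counts are assumptions chosen only for illustration, not estimates from the literature. An uncorrected residual error of eps per gate compounds over G gates roughly as 1 - (1 - eps)**G, so even a tiny eps can dominate once G is large:

import math

# Sketch (illustrative numbers only): an uncorrected residual error of
# eps_per_gate compounds over n_gates gates as 1 - (1 - eps_per_gate)**n_gates.
def accumulated_error(eps_per_gate, n_gates):
    """Probability of at least one uncorrected error over n_gates gates."""
    # log1p/expm1 keep the computation accurate when eps_per_gate is tiny.
    return -math.expm1(n_gates * math.log1p(-eps_per_gate))

for n_gates in (10**3, 10**6, 10**9):
    print(f"eps = 1e-10, gates = {n_gates:>10d} -> accumulated error ~ {accumulated_error(1e-10, n_gates):.1e}")

At a billion gates, a residual error of one part in 10^10 per gate already corrupts roughly one run in ten, which is why a noise channel that error correction cannot touch, however small, matters for a long Shor factorization.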


More References:

(1) Efforts to Transform Computers Reach Milestone, by George Johnson, New York Times, Dec. 20, 2001. Note that this inflated propaganda piece by the NYT was written before the scientific paper alluded to was publicly available. The Isaac Chuang et al. article can be found in the Dec. 20/Dec. 27 double issue of Nature magazine (Warning about Nature magazine).

