This is an open letter to Dr. Scott Aaronson, the well-known quantum computing theorist who offered $100,000 to anyone who could convince him that scalable quantum computing is impossible. I'm not writing this because I think there's any universe in which I'm awarded the prize, but because I think the concept of scalable quantum computing subtly relies on the existence of exponentially more universes in the multiverse (if not an infinite number of them) than the entire experimental history of quantum physics, precise and voluminous though it is, has actually demonstrated, and that gap deserves some attention. Still, I hope I'm wrong.
To begin with, let's imagine that despite what your quantum physics professor taught you, there are actually only 2 universes in the multiverse. You don't believe this, so you set up an experiment:
Take 8 qubits and entangle them. Then measure them, collapsing the state into 8 classical bits. On the first run, you get C5 (hexadecimal, 11000101 in binary). On the second, you get E4. But on the third, you get 3A. There, you conclude, is proof that there are more than 2 universes in the multiverse. After all, how could you get 3 distinct output states if there were only 2 universes in the multiverse?
Unfortunately, this experiment is uninformative, and it serves as an archetype for the series of historical experiments which has led us to the rather pompous assumption that we can extrapolate to an infinite number of universes from the mere existence of a plurality of universes. The reason the experiment is uninformative is that perhaps the 3 multiverse states (3 sets of 2-verse states, if you like) were: {C5, 09}, {21, E4}, {34, 3A}. In other words, all 2^8 bytes were possible output states, but only 2 such possibilities actually existed per experimental iteration. Taken to the extreme of a single universe in the multiverse, this reduces to the hidden variable theory: the quantum state is fixed from the outset, but isn't evident to the observer until a measurement is made, creating the illusion that all states with nonzero probability are in fact physically possible within the same experimental iteration -- in other words, that the multiverse is an illusory interpretation facilitated by observer ignorance in a single universe. Hidden variable theories of this (local) kind were ostensibly disproven by John Clauser et al.
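To make this concrete, here's a minimal classical sketch (nothing quantum about it, and the model names are mine) of why a handful of runs can't distinguish the standard all-outcomes-possible picture from a toy 2-universes-per-run model:

import random

def run_full_quantum():
    # Standard picture: every one of the 2^8 bytes is a possible outcome of each run.
    return random.randrange(256)

def run_two_universe():
    # Toy "2 universes per run" model: on each run, only two outcomes physically
    # exist (e.g. {C5, 09}), and measurement picks one of them.
    pair = random.sample(range(256), 2)
    return random.choice(pair)

for model in (run_full_quantum, run_two_universe):
    outcomes = [model() for _ in range(3)]
    print(model.__name__, [f"{b:02X}" for b in outcomes],
          "distinct:", len(set(outcomes)))
# Both models almost always print 3 distinct bytes, so 3 runs can't tell them apart.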
I say "ostensibly" because what Clauser's group actually did prove was that there is a plurality of universes in the multiverse -- not that the number thereof is infinite. Bearing that in mind, consider why it is that quantum computers hold the promise of massive computational acceleration: probability amplification.
First of all, it's possible, given a set of inputs and outputs of an integer function, to model that function on a sufficiently large quantum computer. For example, you might map one 32-bit integer to another in some very scattered way. So long as the mapping of inputs to outputs is one-to-one, this is at least theoretically possible. (If not, we could duplicate some of the input bits in the output string so as to guarantee reversibility, but that gets into a discussion of junk bits which isn't relevant here.) Probability amplification then works as follows: if X is a possible output, and X would be an optimal answer to the question at hand, then we can make X occur in the output register with probability P. There are limits on P; we can't simply make X occur 100% of the time without breaking entanglement. Some algorithms, for instance, can skew the probability of obtaining X as an output by a square-root factor relative to guessing X at random, so the cost of searching a 32-bit space by brute force reduces to a 16-bit brute-force search, assuming a sufficiently low error rate. (Other acceleration factors are possible, depending on the physics of the situation; the square root is just an example.) After a measurement is made (i.e. after the wave function collapses), if we find the output to be X, then we can read off the inputs which led to X (because all the qubits collapse simultaneously), thereby solving some sort of inverse function which would otherwise be "hard".

Amplification typically exploits the relative probability of the system settling into the ground state as opposed to a (less probable) excited state. By XORing the actual output with the desired output (X) in each universe, the desired answer can be made to consist of all or mostly all 0s, which means it can be made energetically close to the ground state, and thus more probable. Duplicating this XOR several times in parallel increases the energy difference between the ground state and the lowest excited states by a linearly increasing margin, thereby potentially increasing the probability skew even further, up to some asymptotic limit.
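For what it's worth, the square-root skew I'm describing is what Grover-style amplitude amplification provides for unstructured search. Here's a minimal sketch of the textbook success-probability formula, assuming an ideal, noiseless machine and setting aside my energy-based picture above:

import math

# After k amplification rounds over a space of N = 2^n states with one marked
# answer X, the probability of measuring X is sin^2((2k+1)*theta), where
# theta = arcsin(1/sqrt(N)).
def grover_success_probability(n_bits, rounds):
    N = 2 ** n_bits
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * rounds + 1) * theta) ** 2

n_bits = 32
optimal_rounds = math.floor(math.pi / 4 * math.sqrt(2 ** n_bits))
print(optimal_rounds, grover_success_probability(n_bits, optimal_rounds))
# About 51471 rounds -- on the order of 2^16 -- find X with probability ~1, which
# is the sense in which a 32-bit brute-force search reduces to 16-bit effort.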
If there are an infinite number of universes in the multiverse, then every 32-bit integer is a possible state of 32 entangled qubits. As in the above example, we can make X the output with probability of about 1/2^16, accomplishing quantum acceleration. But now let's assume that there are only 2^10 universes in the multiverse. Then the probability that X is even a physical (as opposed to mathematical) possibility to begin with is just 1-((1-1/2^32)^(2^10)), or about 1/2^22. After the fact, the acceleration factor still applies, which means that of the 2^10 physically possible outputs, one of them can be made to have probability 1/2^5. So the overall probability of finding X in the output register is 1/2^(22+5), or 1/2^27 -- much larger than the brute-force probability, but much smaller than what quantum physics theoretically allows.
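Here's a quick check of that arithmetic, taking the 2^10-universe count as the purely illustrative assumption that it is:

import math

universes = 2 ** 10
space = 2 ** 32

# Probability that X exists at all among the 2^10 physically realized outcomes.
p_physical = 1 - (1 - 1 / space) ** universes      # ~ 2^-22

# Square-root amplification over the 2^10 realized outcomes boosts one of them
# to probability ~ 1/sqrt(2^10) = 2^-5.
p_amplified = 1 / math.sqrt(universes)

# Overall chance of reading X out of the register.
p_overall = p_physical * p_amplified               # ~ 2^-27, vs 2^-32 brute force
print(f"2^{math.log2(p_physical):.1f}", f"2^{math.log2(p_overall):.1f}")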
The problem is that we hope to use quantum computers to reduce classical complexity by some unbounded exponential factor (though not, in general, to turn exponentially hard problems into merely polynomially hard ones). But this is all predicated on the assumption that there are enough universes to contain the correct answer with usefully high probability.
So how many universes are in the multiverse? To answer that question even approximately would, ironically, require building a quantum computer. At some particular qubit count, we might observe the acceleration factor asymptotically hitting a ceiling -- call it the "acceleration horizon". For instance, we might never be able to accelerate classical problems by more than a factor of 2^50, regardless of the number of qubits involved, and regardless of how well we isolate the machine from environmental noise.
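As a toy illustration of that conjecture (the 2^50 horizon and the square-root speedup are numbers I've made up purely for the example):

import math

HORIZON = 2 ** 50  # hypothetical cap set by the (unknown) number of universes

def effective_acceleration(n_bits):
    # Ideal square-root speedup over a 2^n search space, capped at the horizon.
    return min(math.sqrt(2 ** n_bits), HORIZON)

for n in (32, 64, 100, 128, 200):
    print(n, f"2^{math.log2(effective_acceleration(n)):.0f}")
# Up to 100 qubits the speedup grows as 2^(n/2); past that it flatlines at 2^50,
# no matter how many qubits are added or how well the machine is isolated.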
But what if the acceleration horizon is 2^1000? Surely we could do plenty of useful work even with such a "small" quantum computer. The problem is that we have no evidence that a "usefully high" acceleration horizon exists, so belief in one merely represents hope that usefully large quantum computers can exist. This parallels the debate about error correction, which appears to be the zeitgeist among quantum computing skeptics for good thermodynamic reasons. Nevertheless, even if we assume that cryogenics and quantum error correction will solve those problems, the acceleration horizon remains a distinct threat to scalability.
Unfortunately, the only way to discover the acceleration horizon is to crash into it suddenly; it can't be calculated from other parameters, because the number of universes in the multiverse would appear to be a brute empirical fact about reality, rather like the number of whales in a particular cubic kilometer of ocean water. While none of this proves that quantum computers beyond a certain size are impossible, it does suggest that belief in the possibility of their existence is arbitrary.
I think it's worth addressing one of the strongest arguments in favor of scalable quantum computing: biological proteins. Proponents argue that, even with thousands of atoms, proteins almost always fold into their correct native conformation on timescales of microseconds to milliseconds, even though our best classical modeling systems fall many orders of magnitude short of reproducing that behavior. I don't think this proves that quantum computing is scalable in general; rather, it suggests that quantum computation scales when the desired solution is already a high-probability solution -- one that might consist of thousands of bits, but is nevertheless so likely that it could be found with high probability even given a "small" number of universes.
Furthermore, is there such a thing as "multiverse weather"? In other words, is the universe density constant across all of space, or can there be more universes in this cubic millimeter than in that one, depending on the weather, just as clouds cause wide variation in the concentration of water in the atmosphere? If so, is the weather merely a function of temperature, or do other variables matter? It would be a troublesome problem indeed if the multiverse weather varied across a quantum computer, particularly if only some parts of the system were left above the acceleration horizon.
Everyone is invited to comment.