Scott Aaronson is an accomplished computer scientist whose research focuses mainly on computational complexity. Computational complexity deals with how difficult it is for a computer to solve a particular class of problems. It may seem that you can ask your PC anything you want as long as you're a competent enough programmer, but there are at least three ingredients that characterize the sorts of questions suitable for a computer: (1) you have some input information available at the start, (2) there is a known procedure or algorithm for attacking the problem (the simplest being some type of brute-force approach), and (3) you desire a particular kind of output.

The theory of computational complexity sorts all kinds of questions into complexity classes according to how long it takes a computer to answer them. The classes are defined by the number of steps needed to reach a solution, expressed as a function of the size of the input, which can always be encoded as a binary string of 0s and 1s.

The class of manageable problems for a computer is called P, which stands for "polynomial time". Class P contains all problems whose solutions can be computed in polynomial time; that is, an ordinary PC can solve them efficiently. For example, solving any set of linear equations is a class-P problem, since it can always be done by Gaussian elimination.
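To make this concrete, here is a minimal sketch of Gaussian elimination. On an n-by-n system it takes on the order of n³ arithmetic steps, a polynomial in the input size, which is exactly why linear equations sit comfortably inside class P. (The function name and the toy system are just illustrations, not anything from Aaronson's work.)

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b] (work on copies).
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the resulting triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3.
print(solve_linear([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```

The three nested loops are the whole story: the running time grows like a polynomial in n, never exponentially.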

The other class of interest is usually NP, for "non-deterministic polynomial time". One way to think of class NP is this: if you allow your computer to do two or more things at once in each step (by some sort of unbounded parallelism), then an NP problem can be answered by that computer in polynomial time. Many of the hardest computational tasks you can imagine are in NP. This class includes famous questions like the travelling-salesman problem (if you have to visit a number of cities, what is the shortest route that visits each city exactly once?) and finding the prime factors of a number.
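To see why the travelling-salesman problem is so hard, consider the only obvious attack: brute force over all orderings of the cities. For n cities there are (n-1)! tours to check, a number that explodes far faster than any polynomial. A hedged sketch (the distance table is an invented toy instance):

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force the shortest round trip over all city orderings.

    dist[i][j] is the distance between city i and city j.
    """
    n = len(dist)
    best_len, best_tour = float("inf"), None
    # Fix city 0 as the start so rotations of the same tour aren't recounted.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# A toy 4-city instance with symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_tour(dist))  # the best tour here has length 18
```

Four cities means only 6 tours, but 30 cities already means about 10³⁰ of them, which is why no PC can brute-force its way through.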

What do these complexity classes have to do with physics? It turns out that because physics is written in the language of mathematics, the dynamics of some physical situations can be thought of as a computational problem, with the initial and final states of the physical system corresponding to the inputs and outputs of the equivalent problem.

The physics version has to do with the configuration of a large number of identical, non-interacting particles. Imagine you have N particles and M boxes (with more boxes than particles, M > N) and you distribute the particles among the boxes; the number of particles in each box then describes an initial state. There are two rules you can impose: either any number of particles may share a box, or at most one particle is allowed per box. Suppose also that there is a dynamical process that transforms the initial state by redistributing the particles, maintaining whichever occupancy rule applies. If the process is deterministic and known to us, then given the starting point it is possible to work out what the final configuration will be.
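A small sketch of this setup (the function names and the choice of dynamics are mine, purely for illustration): a configuration is a list of occupation numbers for the M boxes, and a known deterministic process, here simply a fixed shuffling of the boxes, maps an initial configuration to a final one.

```python
import random

def random_configuration(n_particles, m_boxes, single_occupancy):
    """Distribute n particles into m boxes, respecting the occupancy rule."""
    counts = [0] * m_boxes
    if single_occupancy:
        # Fermion-like rule: at most one particle per box (needs m >= n).
        for box in random.sample(range(m_boxes), n_particles):
            counts[box] = 1
    else:
        # Boson-like rule: any number of particles per box.
        for _ in range(n_particles):
            counts[random.randrange(m_boxes)] += 1
    return counts

def evolve(counts, permutation):
    """Deterministic dynamics: box i's occupants move to box permutation[i]."""
    final = [0] * len(counts)
    for i, c in enumerate(counts):
        final[permutation[i]] += c
    return final

initial = random_configuration(3, 5, single_occupancy=True)
final = evolve(initial, [1, 2, 3, 4, 0])  # cyclic shift of the boxes
print(initial, "->", final)
```

Because the dynamics here is a simple relabelling of boxes, both occupancy rules survive the evolution, and predicting the final state from the initial one is trivial. The quantum version described below is what makes the problem interesting.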

For the actual experiment, what we have is known as a linear optics setup. The setup consists of a source of particles, usually particles of light (photons), which are passed through a series of linear optical devices like beam splitters, phase-shifting crystals, and mirrors, and are detected at an output terminal. The action of these optical elements on the incoming beam of particles transforms the initial state into a different final configuration.

The equivalent computational problem to this physics experiment turns out to be the calculation of determinants and permanents of a particular matrix. If you've done elementary linear algebra, you may recall that the determinant of a matrix can be calculated using what's called the Laplace expansion. The permanent of a matrix is a much lesser-known quantity, even to scientists and engineers, and it's almost the same as the determinant except for a small (but not insignificant) difference: take the Laplace formula and drop the alternating signs; the quantity you get is called the permanent.
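The two definitions can be put side by side. In this sketch both quantities are computed by the same Laplace recursion; the only difference between the two functions is the alternating sign.

```python
def _minor(matrix, col):
    """The submatrix obtained by deleting row 0 and the given column."""
    return [row[:col] + row[col + 1:] for row in matrix[1:]]

def det(matrix):
    """Determinant via Laplace expansion along the first row."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum((-1) ** j * matrix[0][j] * det(_minor(matrix, j))
               for j in range(len(matrix)))

def perm(matrix):
    """Permanent: the identical recursion with the alternating sign dropped."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum(matrix[0][j] * perm(_minor(matrix, j))
               for j in range(len(matrix)))

A = [[1, 2],
     [3, 4]]
print(det(A))   # 1*4 - 2*3 = -2
print(perm(A))  # 1*4 + 2*3 = 10
```

Note that the Laplace expansion itself takes exponentially many steps for both quantities. The asymmetry discussed next is that the determinant has polynomial-time shortcuts (Gaussian elimination again), while no such shortcut is believed to exist for the permanent.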

The determinant is very useful for practical calculations; the permanent, in contrast, is almost utterly useless. However, what's interesting about the permanent is that despite its seemingly subtle difference from the determinant, it's actually a much harder quantity to compute. Leslie Valiant proved around 30 years ago that calculating permanents is a #P-complete problem (#P is pronounced "sharp-P" or "number-P"). A #P problem asks how many solutions an NP problem has (hence the symbol #): any single solution can be checked in polynomial time, but counting them all is in general extremely difficult. Class #P problems are at least as hard as NP ones and probably even harder. Completeness means that any problem in the class can be reduced to permanent calculation with only a negligible increase in difficulty. We can therefore think of the determinant and the permanent as representative of easy (P) and hard (#P) computations, respectively.

So where's the connection with physics? If we perform the optics experiment, the output configurations will differ from run to run even if we start with the same input state. This is because the experiment we are doing is a quantum mechanical one: the outcome of each run is completely random. However, quantum theory does tell us that if we run the device many, many times with the same input, the relative frequencies of all the possible outputs of particles in boxes (which can be read off a histogram) will match very closely the probabilities of obtaining those final states as predicted by the theory.

Computing the probabilities of the various final configurations when the experiment uses electrons involves calculating determinants (the well-known Slater determinant). On the other hand, the same kind of probabilities for the output of an experiment with photons requires calculating permanents. What this suggests is that we can actually answer a very hard computer-type question (#P) simply by doing a physics experiment with bosons! This is the fascinating result of Scott Aaronson (with Alex Arkhipov), and he uses it to argue that so-called quantum computers (computers that can do massive parallel processing by exploiting quantum laws to let bits be 0 and 1 simultaneously) will be able to do some computational tasks that today's regular PCs can't handle efficiently.
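A hedged sketch of the photon case (the function names are mine; the physics is standard linear optics): when each mode holds at most one photon, the probability of a given output configuration is the squared magnitude of the permanent of the submatrix of the network's unitary matrix U picked out by the occupied input and output modes.

```python
import math

def perm(matrix):
    """Permanent via Laplace expansion (exponential time, as expected)."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum(matrix[0][j] * perm([row[:j] + row[j + 1:] for row in matrix[1:]])
               for j in range(len(matrix)))

def output_probability(U, in_modes, out_modes):
    """P(out_modes | in_modes) = |Perm(U restricted to those modes)|^2,
    valid when every mode holds at most one photon."""
    sub = [[U[i][j] for j in out_modes] for i in in_modes]
    return abs(perm(sub)) ** 2

# Toy example: a 50/50 beam splitter acting on two modes.
s = 1 / math.sqrt(2)
U = [[s, s],
     [s, -s]]
# One photon in each input mode, asking for one photon in each output mode:
# Perm = s*(-s) + s*s = 0, so the probability vanishes. This is the famous
# Hong-Ou-Mandel effect: the two photons always exit through the same port.
print(output_probability(U, [0, 1], [0, 1]))
```

Had the particles been electrons, the same submatrix would enter through its determinant instead, and the probability of separate exits would be 1, a tidy illustration of why fermion probabilities are easy and boson probabilities are hard.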

Of course, if you're a rather sharp thinker, you'll notice a slight problem. Because the experiment generates probabilistic outcomes, you have no control over which matrix's permanent your experiment computes. The quantum setup essentially hands you the permanent of some random matrix, one which, fortunately, can be determined directly from the initial and final configurations. So a skeptic of quantum computing may remain unconvinced of any special powers quantum computers might bring. But this misses the crucial point: since permanent calculation (even for matrices containing only zeros and ones) is #P-complete, and yet an optics setup can perform it efficiently, the experiment does demonstrate extra capabilities possessed by quantum systems, even though at the moment the experiments we can imagine have little practical applicability.
