Quantum Computers’ Role in Shor’s Algorithm
Antonio J. Di Scala (Politecnico di Torino)
17/10/2023
Today's widely used asymmetric cryptographic schemes are based on ideas that appear in two fundamental papers from the 1970s, namely Diffie-Hellman [DH76] and RSA [RSA78]. The (difficult to solve) mathematical problems on which these schemes are based are:
- the computation of discrete logarithms;
- the factorization of large numbers.
These problems are difficult because all the known ways to solve them are computationally infeasible; namely, their cost, measured by either the amount of memory used or the runtime, is finite but impossibly large.
In 1994, Peter Shor published a paper [Sho94] in which he explained how to solve both problems in a computationally feasible way by using a quantum computer. Thus, with just one blow, Shor showed how to break the asymmetric cryptography schemes developed in the 1970s and also their elliptic curve analogs developed in the 1980s.
What the above math problems have in common is that you can solve them if you can efficiently compute the period $r$ of periodic functions $f$, i.e., of functions satisfying $f(x + r) = f(x)$ for all $x$.
The role of the quantum computer in Shor's algorithm is precisely to compute the period of periodic functions. More precisely, you pass a periodic function $f$ as input to a quantum computer, and you get as output the period $r$ of $f$.
The mathematical link between factorization and periods was already understood by Gauss and developed in his famous “Disquisitiones Arithmeticae”. Here are the main observations:
- (a) fast computation of periods of periodic functions implies fast computation of non-trivial square roots of $1$ modulo $N$;
- (b) fast computation of non-trivial square roots of $1$ modulo $N$ implies fast computation of non-trivial divisors of $N$.
To understand claim (b), recall that a non-trivial square root of $1$ modulo $N$ is a natural number $x$, with $x \not\equiv \pm 1 \pmod{N}$, such that
$$x^2 \equiv 1 \pmod{N}.$$
Take for example $N = 15$. Then $4$ and $11$ are non-trivial square roots of $1$.
So assume you have such an $x$. Then, moving $1$ to the left-hand side of the equation, you have
$$x^2 - 1 = (x - 1)(x + 1) \equiv 0 \pmod{N}.$$
So by computing $\gcd(x - 1, N)$ and $\gcd(x + 1, N)$, you get a non-trivial divisor of $N$. Computing greatest common divisors has been known to be computationally feasible since Euclid's time. Notice that for $N = 15$ the two non-trivial square roots gave the divisors $3$ and $5$.
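To make claim (b) concrete, here is a minimal Python sketch (my own illustration, not from the cited papers) that checks the $N = 15$ example: given a non-trivial square root $x$ of $1$ modulo $N$, the two gcds expose the divisors $3$ and $5$.

```python
from math import gcd

N = 15
for x in (4, 11):              # the two non-trivial square roots of 1 modulo 15
    assert (x * x) % N == 1    # x^2 = 1 (mod N)
    print(x, gcd(x - 1, N), gcd(x + 1, N))
# prints: 4 3 5
#         11 5 3
```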
Now, to understand claim (a), pick some natural number $a$ and consider the function $f$ defined as:
$$f(x) = a^x \bmod N,$$
where $a^x \bmod N$ is the remainder of the division of $a^x$ by $N$. It is quite easy to see that, for most choices of $a$, the function $f$ is a periodic function. So if you have a fast way to compute the period $r$ of $f$, then you have
$$a^r \equiv 1 \pmod{N}.$$
Now if $r$ is even, then $a^{r/2}$ is a square root of $1$ modulo $N$; provided $a^{r/2} \not\equiv -1 \pmod{N}$ (which is the case for most choices of $a$), it is a non-trivial one, and you can use it to factorize $N$ as explained above. If the period $r$ is odd, then you try with another $a$, i.e., you change the periodic function $f$; this is not an issue, since odd and even periods occur in roughly equal numbers.
To give an example, consider $N = 15$ and $a = 2$. Then the period of $f(x) = 2^x \bmod 15$ is $r = 4$. You can check that $2^4 = 16 \equiv 1 \pmod{15}$. Then $2^{r/2} = 4$ is a non-trivial square root of $1$ modulo $15$.
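The whole classical logic around the period can be condensed into a short Python sketch. The brute-force period search below is of course infeasible for a cryptographic-size $N$; it only stands in for the step that the quantum computer performs, and the names `period`, `a`, `r` are mine, chosen for illustration.

```python
from math import gcd

def period(a, N):
    """Brute-force the period r of f(x) = a**x mod N (only feasible for tiny N).
    In Shor's algorithm this is the step delegated to the quantum computer."""
    r, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        r += 1
    return r

N, a = 15, 2
r = period(a, N)                 # r = 4
assert pow(a, r, N) == 1         # a^r = 1 (mod N)
if r % 2 == 0:
    x = pow(a, r // 2, N)        # x = 4; if x were N - 1 we would retry with another a
    print(gcd(x - 1, N), gcd(x + 1, N))   # -> 3 5, the factors of 15
```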
To get some insight into what is going on inside a quantum computer, we need to recall some facts about Fourier analysis.
It is well known that periodic functions are strongly related to Fourier analysis. But in almost all lectures on Fourier analysis, the period of the functions is known in advance and fixed, usually to $2\pi$, the length of the unit circle; indeed, the functions whose Fourier transforms or Fourier coefficients you learn to compute are functions from the unit circle to the complex numbers. So it is not at all obvious how to use Fourier analysis to compute periods of periodic functions. Things become clearer if you know the abstract theory called commutative harmonic analysis. In that setting, it is straightforward to notice that the support of the Fourier transform is determined by the period $r$. The setting of commutative harmonic analysis involves a commutative group $G$, an invariant measure $dg$, the dual group $\hat{G}$ of characters $\chi$, and the Fourier transform $\hat{f}$ given by the integral:
$$\hat{f}(\chi) = \int_G f(g)\,\overline{\chi(g)}\,dg.$$
So $\hat{f}$ has as domain the dual group $\hat{G}$ of characters, and basic facts of harmonic analysis imply that the support of $\hat{f}$ consists of the characters $\chi$ with $\chi(r) = 1$, where $r$ is the period of $f$; in frequency terms, the support lies at the integer multiples of $1/r$.
To understand the above in the case of the periodic function $f(x) = a^x \bmod N$, take as group $G = \mathbb{Z}_Q$, the integers modulo a large $Q$ divisible by the period; its characters are $\chi_\omega(x) = \omega^x$ with $\omega$ a $Q$-th root of unity. The above integral becomes a sum, but I still use the integral symbol:
$$\hat{f}(\omega) = \int f(x)\,\omega^{-x}\,dx := \sum_{x=0}^{Q-1} f(x)\,\omega^{-x},$$
and if $r$ is the period of $f$, a straightforward computation (grouping the terms into blocks of length $r$) yields
$$\hat{f}(\omega) = \Big(\sum_{x=0}^{r-1} f(x)\,\omega^{-x}\Big)\Big(\sum_{j=0}^{Q/r-1} \omega^{-jr}\Big),$$
and the geometric sum on the right vanishes unless $\omega^r = 1$. Hence the support of $\hat{f}$ consists of those characters corresponding to the complex numbers $\omega = e^{2\pi i k/r}$ for an integer $k$.
Summing up, if you have an efficient black box to compute the support of the Fourier coefficients of a periodic function $f$, then you can compute its period $r$.
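As a sanity check of this support statement, here is a small numerical sketch (my own, with an assumed register size $Q = 16$): the discrete Fourier transform of the $r$-periodic sequence $f(x) = 2^x \bmod 15$, sampled on $x = 0, \ldots, Q-1$, is supported exactly on the frequencies that are multiples of $Q/r$, i.e., on the characters $x \mapsto e^{2\pi i k x / r}$.

```python
import numpy as np

N, a, Q = 15, 2, 16          # Q chosen so that the period r = 4 divides Q
f = np.array([pow(a, x, N) for x in range(Q)], dtype=complex)
F = np.fft.fft(f)            # F[k] = sum_x f(x) * exp(-2*pi*1j*k*x/Q)
support = [k for k in range(Q) if abs(F[k]) > 1e-9]
print(support)               # -> [0, 4, 8, 12], the multiples of Q/r = 16/4
```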
In his paper, Shor explained how to use a quantum computer to compute the Fourier transform, hence the period. Such a quantum computer consists of a register and the operators that change the state of the register.
The register is made up of a certain number of so-called qubits, which are the quantum analog of flip-flops. The mathematical model of one qubit is the 2-dimensional complex vector space $\mathbb{C}^2$ generated by two perpendicular unit vectors $|0\rangle$ and $|1\rangle$. The states of the qubit are the vectors of length one. A register of $n$ qubits is modeled as the tensor product $\mathbb{C}^2 \otimes \cdots \otimes \mathbb{C}^2$ ($n$ factors). Notice that the dimension is $2^n$. The states of the register are the unit vectors of $(\mathbb{C}^2)^{\otimes n}$. The operators that can be used to change the state of the register are the unitary transformations of $(\mathbb{C}^2)^{\otimes n}$, namely, complex linear maps of $(\mathbb{C}^2)^{\otimes n}$ that preserve the length of vectors. At some point, to get the solution to your problem, you should read (or better yet, measure) the register. The output of such a measurement is going to be one of the $2^n$ unit vectors of the form $|b_1\rangle \otimes \cdots \otimes |b_n\rangle$ with $b_i \in \{0, 1\}$, i.e., the basis of tensor products. These unit vectors can be regarded as the possible classical states of a register with $n$ flip-flops.
To illustrate the situation, imagine a register with 2 qubits. The state of such a register is a linear combination of the tensor products:
$$|0\rangle \otimes |0\rangle,\quad |0\rangle \otimes |1\rangle,\quad |1\rangle \otimes |0\rangle,\quad |1\rangle \otimes |1\rangle.$$
Assume that before the measurement, the state of the register is the linear combination:
$$\alpha_{00}\,|0\rangle \otimes |0\rangle + \alpha_{01}\,|0\rangle \otimes |1\rangle + \alpha_{10}\,|1\rangle \otimes |0\rangle + \alpha_{11}\,|1\rangle \otimes |1\rangle,$$
where $\alpha_{00}$, $\alpha_{01}$, $\alpha_{10}$, and $\alpha_{11}$ are complex numbers such that $|\alpha_{00}|^2 + |\alpha_{01}|^2 + |\alpha_{10}|^2 + |\alpha_{11}|^2 = 1$. If you measure the register, you can obtain $|0\rangle \otimes |0\rangle$ with probability $|\alpha_{00}|^2$, $|0\rangle \otimes |1\rangle$ with probability $|\alpha_{01}|^2$, and so on. Namely, the coefficients of the linear combination give you, in advance, the probabilities of the result of the measurement. Finally, after the measurement, the state of the register becomes the output of the measurement. For example, if you measure $|1\rangle \otimes |0\rangle$, then the state of the register becomes $|1\rangle \otimes |0\rangle$. Somehow, after the measurement, the register loses information and is updated, by the laws of quantum physics, to the result of the measurement. So it is important to keep in mind that you are not going to read the register until you are sure, with high probability, that you are going to extract the result you need from the output of the measurement.
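Here is a minimal numpy sketch of the 2-qubit model just described (a simulation on a classical machine, of course, not a real quantum register); the coefficient values are an arbitrary choice whose squared moduli sum to 1.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
# the four basis states |0>|0>, |0>|1>, |1>|0>, |1>|1> of C^2 (x) C^2 = C^4
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

alpha = np.array([0.5, 0.5j, -0.5, 0.5j])   # example coefficients, squared moduli sum to 1
state = sum(c * v for c, v in zip(alpha, basis))
probs = np.abs(alpha) ** 2                  # probabilities of the four outcomes
print(probs, probs.sum())                   # -> [0.25 0.25 0.25 0.25] 1.0

outcome = np.random.choice(4, p=probs)      # simulate one measurement
state = basis[outcome]                      # the register collapses to the outcome
```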
You can think of a quantum program as a sequence of unitary operators that take the register from an initial state to a final state, ending with a measurement. The interesting thing is that you can put the register in a state in which all outputs are equiprobable. Then you can apply an operator (or a sequence of operators) to such an equiprobable state instead of applying it to just one state, as happens in a classical register of flip-flops. This gives a kind of parallelism. For example, take $N$ to be a huge composite number. To compute the Fourier transform of $f(x) = a^x \bmod N$ you need to compute the powers $a^x \bmod N$ for too many values of $x$, which is a computationally infeasible task. The fast square-and-multiply algorithm can be used to compute a single power $a^x \bmod N$ for a given $x$, even if $x$ is huge. In his paper, Shor explains how the square-and-multiply algorithm can be quantized (i.e., adapted to a quantum computer) and hence applied to the equiprobable state. So you can think that the quantum computer has managed, somehow, to compute all the powers at once. Remember that you cannot read the register along the way, so you can only hope to quantize algorithms that are somehow uniform, i.e., whose flow of instructions does not depend on reading the data.
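For reference, here is a plain classical version of square-and-multiply in Python (essentially what `pow(a, x, N)` does). Shor's quantized version is built from reversible, controlled operations, which this sketch does not attempt to reproduce; the point is just that the work is organized as a fixed sequence of squarings, with a multiplication controlled by each bit of the exponent.

```python
def square_multiply(a, x, N):
    """Compute a**x mod N with about log2(x) squarings and multiplications."""
    result = 1
    a %= N
    while x:
        if x & 1:                   # multiplication controlled by the current exponent bit
            result = (result * a) % N
        a = (a * a) % N             # squaring, performed at every step
        x >>= 1
    return result

assert square_multiply(2, 4, 15) == pow(2, 4, 15) == 1
```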
Here are two examples.
First example: You have two bases of a finite-dimensional vector space. To perform a change of coordinates between them, you use a matrix $M$: you take the coordinates $v$ of some vector in the first basis, multiply to get $Mv$, and $Mv$ are the coordinates in the second basis. Notice that the matrix $M$ does not depend on the values of $v$! So the algorithm to change coordinates is uniform.
Second example: You have a matrix $M$, and you want to compute its reduced echelon form $\operatorname{rref}(M)$. You apply elementary row operations, but from the very beginning, to know which operation you should apply, you need to read data from $M$. Thus the algorithm to reduce a matrix is not uniform.
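The contrast between the two examples can be seen in a few lines of Python (toy matrices of my own choosing): the change of coordinates is one fixed matrix multiplication whatever the input is, while row reduction must inspect the entries to decide what to do next.

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])                  # fixed change-of-basis matrix
v = np.array([3.0, 2.0])
print(M @ v)                                # uniform: the same multiply for every v

def first_pivot_row(A):
    """The very first decision of row reduction already depends on the data."""
    return int(np.argmax(np.abs(A[:, 0])))  # data-dependent branch

print(first_pivot_row(np.array([[0.0, 1.0],
                                [2.0, 3.0]])))   # -> 1
```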
The above perhaps gives you some intuition of why the Fourier transform can be expected to be quantizable: just recall that the Fourier transform is a change of basis of a vector space. Indeed, the function $f$ is a vector of some vector space, and its Fourier coefficients are its coordinates w.r.t. the basis of exponential functions. Thus the Fourier transform is indeed a change of coordinates.
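Concretely, on $\mathbb{C}^Q$ the discrete Fourier transform is just multiplication by one fixed unitary matrix, so it is a uniform algorithm in the sense above. A quick numpy check (with an arbitrary $Q = 8$):

```python
import numpy as np

Q = 8
k, x = np.meshgrid(np.arange(Q), np.arange(Q), indexing="ij")
F = np.exp(-2j * np.pi * k * x / Q) / np.sqrt(Q)       # the unitary DFT matrix
assert np.allclose(F @ F.conj().T, np.eye(Q))          # F is unitary
v = np.random.rand(Q)
assert np.allclose(F @ v, np.fft.fft(v) / np.sqrt(Q))  # the DFT is a change of coordinates
```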
So, roughly speaking, Shor's algorithm implements the Fourier transform of $f$ in the setting of the quantum computer and then reads the register to obtain a point in the support of the Fourier transform $\hat{f}$, hence it computes the period of $f$. Actually, to extract the period $r$ from the measurement of the register, it is necessary to do some extra work on a classical computer.
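One standard choice for that classical step is a continued-fraction expansion (this is the approach in [Sho94]). The sketch below assumes the measurement returned an integer $y$ with $y/Q$ close to $j/r$ for some unknown integer $j$; the register size $Q = 2^{11}$ and the outcome are illustrative values for $N = 15$, $a = 2$.

```python
from fractions import Fraction

Q = 2**11                    # size of the Fourier register (illustrative)
r_true, j = 4, 3             # the period we hope to recover, and an unknown multiple
y = round(j * Q / r_true)    # a typical measurement outcome: y/Q is close to j/r
approx = Fraction(y, Q).limit_denominator(15)   # best approximation with denominator <= N
print(approx.denominator)    # -> 4; when gcd(j, r) > 1 this only gives a divisor of r
```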
Summing up, you can regard the quantum computer as an accelerator that performs a specific computation very efficiently (e.g., computing periods of periodic functions). You can also regard the quantum computer as sitting on top of a classical computer, as explained in Richard Borcherds' YouTube video [Bor21], "The teapot test for quantum computers".
References
[Bor21] Richard E. Borcherds. The teapot test for quantum computers. [Online]. Available: https://www.youtube.com/watch?v=sFhhQRxWTIM, 2021.
[DH76] W. Diffie and M. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644–654, 1976.
[RSA78] R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM, 21(2):120–126, 1978.
[Sho94] P.W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings 35th Annual Symposium on Foundations of Computer Science, pages 124–134, 1994.