In this section we study the problem of factoring polynomials with rational coefficients. The input of the factorisation problem is a polynomial $f \in \mathbb{Q}[x]$. Our goal is to compute a factorisation
$$ f = f_1^{e_1} f_2^{e_2} \cdots f_s^{e_s}, \tag{5.18} $$
where the polynomials $f_1, \ldots, f_s \in \mathbb{Q}[x]$ are pairwise relatively prime and irreducible over $\mathbb{Q}$, and the numbers $e_1, \ldots, e_s$ are positive integers. By Theorem 5.4, $f$ determines, essentially uniquely, the polynomials $f_i$ and the exponents $e_i$.
First we reduce the problem (5.18) to another problem that can be handled more easily.
Namely, writing $f_0 \in \mathbb{Z}[x]$ for a suitable integer multiple of $f$ with leading coefficient $a$ and degree $n$, we may pass to the polynomial $g(x) = a^{\,n-1} f_0(x/a)$, which has integer coefficients and its leading coefficient is 1. Using a factorisation of $g$, a factorisation of $f$ can be obtained efficiently.
A polynomial in $\mathbb{Z}[x]$ is called primitive if the greatest common divisor of its coefficients is 1. A non-zero polynomial $f \in \mathbb{Z}[x]$ can be written in a unique way (up to sign) as the product of an integer and a primitive polynomial in $\mathbb{Z}[x]$. Indeed, if $c$ is the greatest common divisor of the coefficients, then $f = c f_0$ where $f_0 = f/c$. Clearly, $f_0$ is a primitive polynomial with integer coefficients.
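The content-primitive part decomposition is straightforward to compute. The following sketch (function names are ours) illustrates it for integer coefficient lists:

```python
from functools import reduce
from math import gcd

def content(coeffs):
    """Greatest common divisor of the coefficients of a nonzero f in Z[x]."""
    return reduce(gcd, (abs(c) for c in coeffs))

def primitive_part(coeffs):
    """Write f = c * f0 with c = content(f) and f0 primitive."""
    c = content(coeffs)
    return c, [a // c for a in coeffs]

# 6x^2 + 4x + 2 = 2 * (3x^2 + 2x + 1); coefficients listed from degree 0 up
c, f0 = primitive_part([2, 4, 6])
```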
Proof. We only have to verify that the product of two primitive polynomials is itself primitive. We argue by contradiction and assume that $p$ is a prime number that divides all the coefficients of the product $fg$ of the primitive polynomials $f$ and $g$. Set $f = \sum_i a_i x^i$ and $g = \sum_j b_j x^j$, and let $i_0$ and $j_0$ be the smallest indices such that $p \nmid a_{i_0}$ and $p \nmid b_{j_0}$ (such indices exist because $f$ and $g$ are primitive). Let $k = i_0 + j_0$ and consider the coefficient of $x^k$ in the product $fg$. This coefficient is
$$ a_{i_0} b_{j_0} + \sum_{\substack{i + j = k \\ i < i_0}} a_i b_j + \sum_{\substack{i + j = k \\ j < j_0}} a_i b_j . $$
Both of the sums on the right-hand side of this equation are divisible by $p$, while $a_{i_0} b_{j_0}$ is not, and hence the coefficient of $x^{k}$ in $fg$ cannot be divisible by $p$ after all. This, however, is a contradiction.
Proof. Let us multiply $g$ and $h$ by the least common multiples $c$ and $d$, respectively, of the denominators of their coefficients. Then the polynomials $cg$ and $dh$ are primitive polynomials with integer coefficients. Hence, by Gauss' Lemma, so is the product $(cg)(dh) = cd \cdot f$. As the coefficients of $f$ are integers, each of the coefficients of $cd \cdot f$ is divisible by the integer $cd$. Hence $cd = 1$, and so $c = d = 1$. Therefore $g$ and $h$ are indeed polynomials with integer coefficients.
One can show similarly, for a polynomial $f \in \mathbb{Z}[x]$, that factoring $f$ in $\mathbb{Q}[x]$ is equivalent to factoring the primitive part of $f$ in $\mathbb{Z}[x]$ and factoring an integer, namely the greatest common divisor of the coefficients of $f$.
As we work over an infinite field, we have to pay attention to the size of the results in our computations.
The inequality $\|f\|_{\infty} \le \|f\|$ implies that a polynomial $f$ with integer coefficients and degree $n$ can be described using $O(n(\log \|f\| + 1))$ bits.
Lemma 5.73 states that, for an arbitrary polynomial $g \in \mathbb{C}[x]$ and an arbitrary $\alpha \in \mathbb{C}$,
$$ \|(x - \alpha)\, g\| = \|(\bar{\alpha} x - 1)\, g\| , $$
where $\bar{\alpha}$ is the usual conjugate of the complex number $\alpha$. To verify this, write $g = \sum_j b_j x^j$ and expand $\|(x - \alpha)g\|^2 = \sum_j |b_{j-1} - \alpha b_j|^2$ term by term. Performing similar computations with the right-hand side of the equation in the lemma, we obtain the same sum. The proof of the lemma is now complete.
Write $f = \prod_{i=1}^{n}(x - \alpha_i)$ with $\alpha_1, \ldots, \alpha_n \in \mathbb{C}$. If no index $i$ satisfies $|\alpha_i| > 1$, then inequality (5.19), that is, $\prod_{|\alpha_i| > 1} |\alpha_i| \le \|f\|$, will trivially hold. Let us hence assume that, after reordering the roots, $|\alpha_i| > 1$ for every $i \le k$, where $k \ge 1$, while $|\alpha_i| \le 1$ for $i > k$. Set
$$ h = \prod_{i=1}^{k} (\bar{\alpha}_i x - 1) \prod_{i=k+1}^{n} (x - \alpha_i) . $$
Applying Lemma 5.73 several times, we obtain that $\|f\| = \|h\|$. As the leading coefficient of $f$ is 1, the leading coefficient of $h$ is $\prod_{i=1}^{k} \bar{\alpha}_i$, and so
$$ \|f\| = \|h\| \ge \prod_{i=1}^{k} |\alpha_i| , $$
which proves (5.19).
Let us express the coefficients of a monic factor $g$ of $f$ using its roots: the coefficient of $x^j$ in $g = \prod_{i \in I}(x - \alpha_i)$ is, up to sign, the sum of all products of $|I| - j$ of the roots $\alpha_i$, $i \in I$. For an arbitrary such product, its absolute value is at most $\prod_{|\alpha_i| > 1} |\alpha_i|$, and the number of the products is at most $\binom{|I|}{j} \le 2^{|I|}$. Therefore, using inequality (5.19), we find that every coefficient of $g$ has absolute value at most $2^{|I|}\,\|f\| \le 2^{n}\,\|f\|$. The proof is now complete.
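A commonly used numerical form of this bound states that every coefficient of every monic factor of a monic $f \in \mathbb{Z}[x]$ of degree $n$ has absolute value at most $2^n\|f\|$, where $\|\cdot\|$ is the Euclidean norm of the coefficient vector. A quick sketch (the function name is ours):

```python
from math import sqrt

def mignotte_bound(coeffs):
    """Bound 2^deg(f) * ||f||_2 on the coefficients of any monic factor of f.
    coeffs lists the integer coefficients of f from degree 0 upwards."""
    n = len(coeffs) - 1
    return 2 ** n * sqrt(sum(c * c for c in coeffs))
```

For instance, for $f = x^2 - 1$ the bound is $4\sqrt{2} \approx 5.66$, and the factors $x \pm 1$ indeed have coefficients of absolute value at most 1.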
Let $F$ be an arbitrary field, and let $f, g \in F[x]$ be polynomials with degree $m$ and $n$, respectively: $f = a_0 + a_1 x + \cdots + a_m x^m$ and $g = b_0 + b_1 x + \cdots + b_n x^n$, where $a_m \ne 0 \ne b_n$. We recall the concept of the resultant from Chapter 3. The resultant $\mathrm{Res}(f, g)$ of $f$ and $g$ is the determinant of the $(m+n) \times (m+n)$ matrix
$$
M = \begin{pmatrix}
a_m & a_{m-1} & \cdots & a_0 & & & \\
 & a_m & a_{m-1} & \cdots & a_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & a_m & a_{m-1} & \cdots & a_0 \\
b_n & b_{n-1} & \cdots & b_0 & & & \\
 & b_n & b_{n-1} & \cdots & b_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & b_n & b_{n-1} & \cdots & b_0
\end{pmatrix} . \tag{5.20}
$$
The matrix above is usually referred to as the Sylvester matrix. The blank spaces in the Sylvester matrix represent zero entries: there are $n$ shifted rows built from the coefficients of $f$, followed by $m$ shifted rows built from those of $g$.
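As an illustration, the Sylvester matrix and the resultant can be computed directly from the definition. The following sketch (names are ours) evaluates the determinant by exact Gaussian elimination over the rationals:

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g; coefficient lists from the highest degree down."""
    m, n = len(f) - 1, len(g) - 1
    M = [[0] * (m + n) for _ in range(m + n)]
    for i in range(n):                  # n shifted rows built from f
        for j, c in enumerate(f):
            M[i][i + j] = c
    for i in range(m):                  # m shifted rows built from g
        for j, c in enumerate(g):
            M[n + i][i + j] = c
    return M

def resultant(f, g):
    """Determinant of the Sylvester matrix, via exact elimination over Q."""
    A = [[Fraction(x) for x in row] for row in sylvester(f, g)]
    n, d = len(A), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0                    # a zero column: singular matrix
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d                      # row swap flips the sign
        d *= A[k][k]
        for i in range(k + 1, n):
            r = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= r * A[k][j]
    return int(d)
```

For example, $\mathrm{Res}(x^2-1,\,x-1) = 0$ because the polynomials share the root 1, while $\mathrm{Res}(x^2+1,\,x-1) = 2$ and $\mathrm{Res}(x^2+1,\,2x) = 4$ are non-zero.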
The resultant provides information about the common factors of $f$ and $g$. One can use it to express, particularly elegantly, the fact that two polynomials are relatively prime:
$$ \gcd(f, g) = 1 \iff \mathrm{Res}(f, g) \ne 0 . \tag{5.21} $$
Corollary 5.76 Let $f \in \mathbb{Z}[x]$ be a square-free (in $\mathbb{Q}[x]$), non-constant polynomial. Then $\mathrm{Res}(f, f')$ is a non-zero integer. Further, assume that $p$ is a prime not dividing the leading coefficient of $f$. Then the polynomial $f$ reduced modulo $p$ is square-free in $\mathbb{F}_p[x]$ if and only if $p$ does not divide $\mathrm{Res}(f, f')$.
Proof. The entries of the Sylvester matrix corresponding to $f$ and $f'$ are integers, and so is its determinant $\mathrm{Res}(f, f')$. The polynomial $f$ has no multiple roots over $\mathbb{Q}$, and so, by Exercise 5.5-1, $\gcd(f, f') = 1$, which gives, using (5.21), that $\mathrm{Res}(f, f') \ne 0$. Let $\bar{f}$ denote the polynomial $f$ reduced modulo $p$. Then it follows from our assumptions that $\mathrm{Res}(\bar{f}, \bar{f}')$ is precisely the residue of $\mathrm{Res}(f, f')$ modulo $p$. By Exercise 5.5-1, the polynomial $\bar{f}$ is square-free precisely when $\gcd(\bar{f}, \bar{f}') = 1$, which, by (5.21), is equivalent to $\mathrm{Res}(\bar{f}, \bar{f}') \ne 0$. This amounts to saying that $p$ does not divide the integer $\mathrm{Res}(f, f')$.
Set $R = \mathrm{Res}(f, f')$ and let $N$ be a bound that is polynomial in the bit size of $f$. If $N$ is large enough, then
$$ p_1 p_2 \cdots p_r > |a| \cdot |R| , \tag{5.22} $$
where $p_1 < p_2 < \cdots < p_r$ are the primes not larger than $N$, and $a$ is the leading coefficient of $f$.

Let us suppose, for all the primes $p_i$, that $f$ is not square-free in $\mathbb{F}_{p_i}[x]$. Then, by Corollary 5.76, each $p_i$ divides $a$ or $R$, hence the product $p_1 \cdots p_r$ divides $a \cdot R$, and so
$$ p_1 p_2 \cdots p_r \le |a| \cdot |R| \le |a| \cdot \|f\|^{\,n-1} (n \|f\|)^{n} . $$
(In the last inequality, we used the Hadamard inequality, and the fact that $\|f'\| \le n \|f\|$.) This contradicts inequality (5.22), which must be valid because of the choice of $N$.
We note that, using the Prime Number Theorem more carefully, one can obtain a stronger bound for $N$.
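In practice one can test square-freeness modulo $p$ directly with a polynomial gcd over $\mathbb{F}_p$ and search the primes by brute force. A sketch (names are ours; $p$ is assumed prime, so inverses are computed by Fermat's little theorem):

```python
def poly_gcd_mod(a, b, p):
    """Monic gcd of a and b in F_p[x]; coefficient lists from degree 0 up."""
    def trim(c):
        while len(c) > 1 and c[-1] % p == 0:
            c = c[:-1]
        return [x % p for x in c]
    a, b = trim(a), trim(b)
    while b != [0]:
        inv = pow(b[-1], p - 2, p)          # p prime, so b[-1] is invertible
        r = a[:]
        for i in range(len(r) - len(b), -1, -1):   # long division by b
            c = r[i + len(b) - 1] * inv % p
            for j, y in enumerate(b):
                r[i + j] = (r[i + j] - c * y) % p
        a, b = b, trim(r[: len(b) - 1] or [0])
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]          # normalise to a monic gcd

def derivative(f):
    return [i * c for i, c in enumerate(f)][1:] or [0]

def good_prime(f, primes):
    """First prime p in the list with f square-free modulo p."""
    for p in primes:
        if poly_gcd_mod(f, derivative(f), p) == [1]:
            return p
    return None
```

For $f = x^2 - 1$ the prime 2 is rejected (modulo 2 we have $f = (x+1)^2$), while 3 is accepted.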
We present a general procedure that can be used to obtain, given a factorisation of a polynomial with integer coefficients modulo a prime $p$, a factorisation modulo $p^t$.
Theorem 5.78 (Hensel's lemma) Suppose that $f, g, h \in \mathbb{Z}[x]$ are polynomials with leading coefficient 1 such that $f \equiv gh \pmod{p}$, and, in addition, $g$ and $h$ reduced modulo $p$ are relatively prime in $\mathbb{F}_p[x]$. Then, for an arbitrary positive integer $t$, there are polynomials $g_t, h_t \in \mathbb{Z}[x]$ such that

- both of the leading coefficients of $g_t$ and $h_t$ are equal to 1,
- $g_t \equiv g \pmod{p}$ and $h_t \equiv h \pmod{p}$, and
- $f \equiv g_t h_t \pmod{p^t}$.

Moreover, the polynomials $g_t$ and $h_t$ satisfying the conditions above are unique modulo $p^t$.
Proof. From the conditions concerning the leading coefficients, we obtain that $\deg f = \deg g + \deg h$, and, further, that $\deg g_t = \deg g$ and $\deg h_t = \deg h$, provided the suitable polynomials $g_t$ and $h_t$ indeed exist. The existence is proved by induction on $t$. In the initial step, $t = 1$, and the choice $g_1 = g$ and $h_1 = h$ is as required.
The induction step $t \to t+1$: let us assume that there exist polynomials $g_t$ and $h_t$ that are well-defined modulo $p^t$ and satisfy the conditions. If the polynomials $g_{t+1}$ and $h_{t+1}$ exist, then they must satisfy the conditions imposed on $g_t$ and $h_t$. As $g_t$ and $h_t$ are unique modulo $p^t$, we may write $g_{t+1} = g_t + p^t \Delta g$ and $h_{t+1} = h_t + p^t \Delta h$, where $\Delta g$ and $\Delta h$ are polynomials with integer coefficients. The condition concerning the leading coefficients guarantees that $\deg \Delta g < \deg g$ and that $\deg \Delta h < \deg h$.
By the induction hypothesis, $f = g_t h_t + p^t \lambda$, where $\lambda \in \mathbb{Z}[x]$. The observations about the degrees of the polynomials $g_t$ and $h_t$ imply that the degree of $\lambda$ is smaller than $\deg f$. Now we may compute that
$$ g_{t+1} h_{t+1} = (g_t + p^t \Delta g)(h_t + p^t \Delta h) \equiv g_t h_t + p^t (\Delta g \cdot h_t + \Delta h \cdot g_t) \pmod{p^{2t}} . $$
As $2t \ge t + 1$, the congruence above also holds modulo $p^{t+1}$. Thus $g_{t+1}$ and $h_{t+1}$ satisfy the conditions if and only if
$$ g_t h_t + p^t (\Delta g \cdot h_t + \Delta h \cdot g_t) \equiv f \pmod{p^{t+1}} . $$
This, however, amounts to saying, after cancelling $p^t$ from both sides, that
$$ \Delta g \cdot h_t + \Delta h \cdot g_t \equiv \lambda \pmod{p} . $$
Using the congruences $g_t \equiv g \pmod{p}$ and $h_t \equiv h \pmod{p}$, we obtain that this is equivalent to the congruence
$$ \Delta g \cdot h + \Delta h \cdot g \equiv \lambda \pmod{p} . \tag{5.23} $$
Considering the inequalities $\deg \Delta g < \deg g$ and $\deg \Delta h < \deg h$ and the fact that in $\mathbb{F}_p[x]$ the polynomials $g$ and $h$ are relatively prime, we find that equation (5.23) can be solved uniquely in $\Delta g$ and $\Delta h$. For, if $u$ and $v$ form a solution to $u g + v h \equiv 1 \pmod{p}$, then, by Theorem 5.12, the polynomials
$$ \Delta g = v \lambda \text{ reduced modulo } g, \qquad \Delta h = u \lambda \text{ reduced modulo } h $$
form a solution of (5.23). The uniqueness of the solution follows from the bounds on the degrees, and from the fact that $g$ and $h$ are relatively prime. The details of this are left to the reader.
Corollary 5.79 Assume that $p$, $f$, $g$, and $h$ satisfy the conditions of Hensel's lemma. Set $n = \deg f$ and let $t$ be a positive integer. Then the polynomials $g_t$ and $h_t$ can be obtained using a number of arithmetic operations modulo $p^t$ that is polynomial in $n$ and $t$.
Proof. The proof of Theorem 5.78 suggests the following algorithm.
Hensel-Lifting(f, g, h, p, t)
 1  (u, v) ← a solution, in F_p[x], of the congruence u·g + v·h ≡ 1 (mod p)
 2  (G, H) ← (g, h)
 3  for s ← 1 to t − 1
 4    do λ ← (f − G·H)/p^s reduced modulo p (in F_p[x])
 5       Δg ← v·λ reduced modulo g (in F_p[x])
 6       Δh ← u·λ reduced modulo h (in F_p[x])
 7       (G, H) ← (G + p^s·Δg, H + p^s·Δh) (in Z[x])
 8  return (G, H)
The polynomials $u$ and $v$ can be obtained using polynomially many operations in $\mathbb{F}_p$ (see Theorem 5.12 and the remark following it). An iteration consists of a constant number of operations with polynomials, and so the cost of one run of the main loop is polynomial in $n$ (the operations are performed modulo $p$ and $p^t$). The total cost of reaching the exponent $t$ is therefore polynomial in $n$ and $t$.
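A single lifting step can be sketched concretely. The code below follows the classical Hensel step (lifting a factorisation from modulo $p$ to modulo $p^2$); the helper names are ours, $g$ and $h$ are assumed monic, and the Bézout polynomials $s, t$ with $sg + th \equiv 1 \pmod p$ are assumed precomputed:

```python
def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b, m):
    """Product of ascending coefficient lists, modulo m."""
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % m
    return trim(r)

def plin(a, b, sign, m):
    """a + sign*b modulo m."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return trim([(x + sign * y) % m for x, y in zip(a, b)])

def pdivmod_monic(a, b, m):
    """Quotient and remainder of a by monic b, coefficients modulo m."""
    r = [x % m for x in a]
    if len(r) < len(b):
        return [0], trim(r)
    q = [0] * (len(r) - len(b) + 1)
    for i in range(len(r) - len(b), -1, -1):
        c = r[i + len(b) - 1]
        q[i] = c
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] - c * y) % m
    return trim(q), trim(r[: len(b) - 1] or [0])

def hensel_step(f, g, h, s, t, p):
    """Lift f = g*h (mod p) to f = g2*h2 (mod p**2), given s*g + t*h = 1 (mod p).
    Polynomials are ascending coefficient lists; g and h must be monic."""
    m = p * p
    e = plin(f, pmul(g, h, m), -1, m)            # error term e = f - g*h mod p^2
    q, r = pdivmod_monic(pmul(s, e, m), h, m)    # s*e = q*h + r
    g2 = plin(plin(g, pmul(t, e, m), 1, m), pmul(q, g, m), 1, m)
    h2 = plin(h, r, 1, m)
    return g2, h2

# x^2 + 1 = (x+3)(x+2) (mod 5), and 1*(x+3) + 4*(x+2) = 1 (mod 5)
g2, h2 = hensel_step([1, 0, 1], [3, 1], [2, 1], [1], [4], 5)
# lifted factors satisfy (x+18)(x+7) = x^2 + 1 (mod 25)
```

Iterating the step (and similarly lifting $s$ and $t$) reaches any modulus $p^{2^k}$; this quadratic variant is an alternative to the linear loop in the pseudocode above.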
The factorisation problem (5.18) was efficiently reduced to the case in which the polynomial $f$ has integer coefficients and leading coefficient 1. We may also assume that $f$ has no multiple factors in $\mathbb{Q}[x]$. Indeed, in our case the derivative $f'$ is not zero (the characteristic is 0), and so the possible multiple factors of $f$ can be separated using the idea that we already used over finite fields, as follows. By Lemma 5.13, the polynomial $f/\gcd(f, f')$ is already square-free, and, using Lemma 5.14, it suffices to find its factors with multiplicity one. From Proposition 5.71, we can see that $f/\gcd(f, f')$ has integer coefficients and leading coefficient 1. Computing the greatest common divisor and dividing polynomials can be performed efficiently, and so the reduction can be carried out in polynomial time. (In the computation of the greatest common divisor, the intermediate expression swell can be avoided using the techniques used in number theory.)
In the sequel we assume that the polynomial $f \in \mathbb{Z}[x]$ we want to factor is square-free, its coefficients are integers, and its leading coefficient is 1.
The fundamental idea of the Berlekamp-Zassenhaus algorithm is that we compute the irreducible factors of $f$ modulo $p^N$, where $p$ is a suitably chosen prime and $N$ is large enough. If, for instance, $p^N$ is larger than twice the bound of Mignotte's theorem on the absolute values of the coefficients of the factors of $f$, and we have already computed the coefficients of a factor modulo $p^N$, then, by Mignotte's theorem, we can obtain the coefficients of a factor in $\mathbb{Z}[x]$.
From now on, we will also assume that $p$ is a prime such that the polynomial $f$ is square-free in $\mathbb{F}_p[x]$. Using linear search, such a prime can be found in polynomial time (Corollary 5.77). One can even assume that $p$ is polynomial in the bit size of $f$.
The irreducible factors in $\mathbb{F}_p[x]$ of the polynomial $f$ can be found using Berlekamp's deterministic method (Theorem 5.42). Let $u_1, \ldots, u_r \in \mathbb{Z}[x]$ be polynomials, all with leading coefficient 1, such that the $u_i$ reduced modulo $p$ are the irreducible factors of the polynomial $f$ in $\mathbb{F}_p[x]$.
Using the technique of Hensel's lemma (Theorem 5.78) and Corollary 5.79, the system $u_1, \ldots, u_r$ can be lifted modulo $p^N$. To simplify the notation, we assume now that $u_1, \ldots, u_r \in \mathbb{Z}[x]$ are polynomials with leading coefficients 1 such that
$$ f \equiv u_1 u_2 \cdots u_r \pmod{p^{N}} $$
and the $u_i$ reduced modulo $p$ are the irreducible factors of the polynomial $f$ in $\mathbb{F}_p[x]$.
Let $g$ be an irreducible factor with leading coefficient 1 of the polynomial $f$ in $\mathbb{Z}[x]$. Then there is a uniquely determined set $S \subseteq \{1, \ldots, r\}$ for which
$$ g \equiv \prod_{i \in S} u_i \pmod{p^{N}} . $$
Let $N$ be the smallest integer such that $p^N$ is larger than twice the bound of Mignotte's theorem. Mignotte's theorem then shows that the polynomial on the right-hand side, if its coefficients are represented by the residues with the smallest absolute values, coincides with $g$.
We found that determining the irreducible factors of $f$ is equivalent to finding minimal subsets $S \subseteq \{1, \ldots, r\}$ for which there is a polynomial $g$ with leading coefficient 1 such that $g \equiv \prod_{i \in S} u_i \pmod{p^N}$, the absolute values of the coefficients of $g$ are at most the bound of Mignotte's theorem, and, moreover, $g$ divides $f$. This can be checked by examining at most $2^r$ sets $S$. The cost of examining a single $S$ is polynomial in the size of $f$.
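The subset search can be sketched directly: multiply each candidate subset of lifted factors modulo $p^N$, re-centre the coefficients into residues of smallest absolute value, and test divisibility in $\mathbb{Z}[x]$. A naive sketch (all names are ours) that also skips subsets overlapping a factor already found:

```python
from itertools import combinations

def pmul_mod(a, b, m):
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % m
    return r

def centre(a, m):
    """Represent residues by the integers of smallest absolute value."""
    return [c - m if c > m // 2 else c for c in a]

def divides(g, f):
    """Exact division test g | f in Z[x]; ascending coefficients, g monic."""
    r = f[:]
    if len(g) > len(r):
        return False
    for i in range(len(r) - len(g), -1, -1):
        c = r[i + len(g) - 1]
        for j, y in enumerate(g):
            r[i + j] -= c * y
    return all(c == 0 for c in r[: len(g) - 1])

def rational_factors(f, factors_mod, m):
    """Minimal subsets of the modular factors whose centred product divides f."""
    found, used = [], set()
    for k in range(1, len(factors_mod)):
        for S in combinations(range(len(factors_mod)), k):
            if used & set(S):
                continue                      # keep the subsets minimal
            g = [1]
            for i in S:
                g = pmul_mod(g, factors_mod[i], m)
            g = centre(g, m)
            if divides(g, f):
                found.append(g)
                used |= set(S)
    return found
```

For $f = x^4 - 1$ and its four linear factors modulo 5, the sketch recovers the rational factors $x-1$, $x+1$, and $x^2+1$.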
To summarise, we obtained the following method to factor, in $\mathbb{Q}[x]$, a square-free polynomial $f$ with integer coefficients and leading coefficient 1.
Berlekamp-Zassenhaus(f)
 1  p ← a prime such that f is square-free in F_p[x], with p polynomial in the bit size of f
 2  u_1, …, u_r ← the irreducible factors of f in F_p[x] (using Berlekamp's deterministic method)
 3  N ← the smallest integer such that p^N is larger than twice the bound of Mignotte's theorem
 4  u_1, …, u_r ← the Hensel lifting of the system u_1, …, u_r modulo p^N
 5  S ← the collection of minimal subsets S ⊆ {1, …, r} such that g_S = ∏_{i∈S} u_i reduced modulo p^N divides f
 6  return the polynomials g_S for the sets S in S
Theorem 5.80 Let $f$ be a square-free polynomial with integer coefficients and leading coefficient 1, and let $p$ be a prime number such that the polynomial $f$ is square-free in $\mathbb{F}_p[x]$ and $p$ is polynomial in the bit size of $f$. Then the irreducible factors of the polynomial $f$ in $\mathbb{Q}[x]$ can be obtained by the Berlekamp-Zassenhaus algorithm. The cost of this algorithm is polynomial in the bit size of $f$ and in $2^r$, where $r$ is the number of irreducible factors of the polynomial $f$ in $\mathbb{F}_p[x]$.
The worst case of the algorithm is illustrated by the Swinnerton-Dyer polynomials
$$ f = \prod \left( x \pm \sqrt{p_1} \pm \sqrt{p_2} \pm \cdots \pm \sqrt{p_k} \right) , $$
where $p_1 = 2, p_2 = 3, \ldots, p_k$ are the first $k$ prime numbers, and the product is taken over all possible combinations of the signs $+$ and $-$. The degree of $f$ is $n = 2^k$, and one can show that it is irreducible in $\mathbb{Q}[x]$. On the other hand, for all primes $p$, the polynomial $f$ is, modulo $p$, the product of factors with degree at most 2. Therefore these polynomials represent hard cases for the Berlekamp-Zassenhaus algorithm, as we need to examine about $2^{n/2}$ sets $S$ to find out that $f$ is irreducible.
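One can observe this behaviour on the smallest interesting example: for $k = 2$ the polynomial is $\prod(x \pm \sqrt{2} \pm \sqrt{3}) = x^4 - 10x^2 + 1$, which is irreducible over $\mathbb{Q}$, yet has a monic factor of degree at most 2 modulo every prime. A brute-force check for small primes (an illustrative sketch, names ours):

```python
def has_small_factor(f, p):
    """True if f (ascending coefficients, monic) has a monic factor of
    degree 1 or 2 modulo the prime p, found by exhaustive trial division."""
    def divides_mod(g, f, p):
        r = [c % p for c in f]
        for i in range(len(r) - len(g), -1, -1):
            c = r[i + len(g) - 1]
            for j, y in enumerate(g):
                r[i + j] = (r[i + j] - c * y) % p
        return all(c == 0 for c in r[: len(g) - 1])
    for a in range(p):                        # linear candidates x + a
        if divides_mod([a, 1], f, p):
            return True
    for a in range(p):                        # quadratic candidates x^2 + b*x + a
        for b in range(p):
            if divides_mod([a, b, 1], f, p):
                return True
    return False

f = [1, 0, -10, 0, 1]   # x^4 - 10x^2 + 1, ascending coefficients
```

The identities $x^4 - 10x^2 + 1 = (x^2-1)^2 - 8x^2 = (x^2+1)^2 - 12x^2 = (x^2-5)^2 - 24$ explain the phenomenon: modulo any prime, at least one of 2, 3, 6 is a square, so one of the three differences of squares splits.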
Our goal in this section is to present the Lenstra-Lenstra-Lovász algorithm (LLL algorithm) for factoring polynomials in $\mathbb{Q}[x]$. This was the first polynomial time method for solving the polynomial factorisation problem over $\mathbb{Q}$. Similarly to the Berlekamp-Zassenhaus method, the LLL algorithm starts with a factorisation of $f$ modulo $p$ and then uses Hensel lifting. In the final stages of the work, it uses lattice reduction to find a proper divisor of $f$, provided one exists. The powerful idea of the LLL algorithm is that it replaces the search, which may have exponential complexity, in the Berlekamp-Zassenhaus algorithm by an efficient lattice reduction.
Let $f \in \mathbb{Z}[x]$ be a square-free polynomial with leading coefficient 1 such that $\deg f = n$, and let $p$ be a prime such that the polynomial $f$ is square-free in $\mathbb{F}_p[x]$ and $p$ is polynomial in the bit size of $f$.
Lemma 5.81 Suppose that $g$ divides $f$ modulo $p^N$, where $f$ and $g$ are polynomials with integer coefficients and leading coefficient 1. Let $h \in \mathbb{Z}[x]$ with $\deg h \le m$ and assume that $h \equiv g v \pmod{p^N}$ for some polynomial $v$ such that $v$ has integer coefficients and $\deg v \le m - \deg g$. Let us further assume that $\|f\|^{m} \|h\|^{n} < p^{N}$. Then $\gcd(f, h) \ne 1$ in $\mathbb{Q}[x]$.
Proof. Suppose that $f \equiv g w$ and $h \equiv g v \pmod{p^N}$. (We know that $w$ exists because $g$ divides $f$ modulo $p^N$. If $h = 0$, then the assertion trivially holds; we may hence assume that $h \ne 0$, and, as $f$ has leading coefficient 1, also $f \ne 0$.) Rewriting the congruences, we obtain
$$ v f \equiv v g w \equiv h w \pmod{p^{N}} . $$
Considering the coefficient vectors of the polynomials $x^i f$ and $x^j h$, this congruence amounts to saying that adding to a suitable row of the Sylvester matrix (5.20) of $f$ and $h$ a suitable linear combination of the other rows results in a row in which all the elements are divisible by $p^N$. Consequently, $p^N \mid \mathrm{Res}(f, h)$. The Hadamard inequality (Corollary 5.60) yields that $|\mathrm{Res}(f, h)| \le \|f\|^{m} \|h\|^{n} < p^N$, but this can only happen if $\mathrm{Res}(f, h) = 0$. However, $f$ and $h$ are non-zero, and so, by (5.21), $\gcd(f, h) \ne 1$.
Further, we let $g \in \mathbb{Z}[x]$ be a polynomial with leading coefficient 1 such that $g$ reduced modulo $p$ is an irreducible factor of $f$ reduced modulo $p$. Set $d = \deg g$ and fix an integer $m \ge d$. Define the set $L$ as follows:
$$ L = \left\{ h \in \mathbb{Z}[x] : \deg h < m \text{ and } g \text{ divides } h \text{ modulo } p^{N} \right\} . \tag{5.24} $$
Clearly, $L$ is closed under the addition of polynomials. We identify a polynomial with degree less than $m$ with its coefficient vector of length $m$. Under this identification, $L$ becomes a lattice in $\mathbb{R}^m$. Indeed, it is not too hard to show (Exercise 5.5-2) that the polynomials
$$ p^N,\; p^N x,\; \ldots,\; p^N x^{d-1},\; g,\; xg,\; \ldots,\; x^{m-d-1} g , $$
or, more precisely, their coefficient vectors, form a basis of $L$.
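Under this identification, the basis can be written down directly as a matrix whose rows are coefficient vectors of length $m$ (a sketch with ascending coefficients; names are ours):

```python
def lattice_basis(g, m, q):
    """Basis of the lattice of polynomials of degree < m divisible by g mod q:
    the rows are q*x^i (i < d) and x^j*g (j < m-d), where d = deg g."""
    d = len(g) - 1
    rows = []
    for i in range(d):
        row = [0] * m
        row[i] = q
        rows.append(row)
    for j in range(m - d):
        row = [0] * m
        for k, c in enumerate(g):
            row[j + k] = c
        rows.append(row)
    return rows
```

For $g = x + 2$, $m = 3$ and $q = 25$ this produces the rows $(25,0,0)$, $(2,1,0)$, $(0,2,1)$, a triangular matrix, which also shows that the lattice has determinant $q^d$.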
Proof. As $\deg b_1 < n$, it is clear that $\gcd(f, b_1) = 1$ whenever $f$ is irreducible. In order to show the implication in the other direction, let us assume that $f$ is reducible and let $g^*$ be a proper divisor of $f$ with leading coefficient 1 such that $g^*$ reduced modulo $p$ is divisible in $\mathbb{F}_p[x]$ by $g$ reduced modulo $p$. Using Hensel's lemma (Theorem 5.78), we conclude that $g^*$ is divisible by $g$ modulo $p^N$, that is, $g^* \in L$. Mignotte's theorem (Theorem 5.74) shows that
$$ \|g^*\| \le 2^{n} \|f\| . $$
Now, if we use the properties of reduced bases (second assertion of Theorem 5.67), then we obtain
$$ \|b_1\| \le 2^{(m-1)/2} \|g^*\| \le 2^{(m-1)/2}\, 2^{n} \|f\| . $$
We can hence apply Lemma 5.81, which gives $\gcd(f, b_1) \ne 1$.
Based on the previous theorem, the LLL algorithm can be outlined as follows (we only give a version for factoring $f$ into two factors). The input is a square-free polynomial $f \in \mathbb{Z}[x]$ with leading coefficient 1 such that $\deg f = n$.
LLL-Polynomial-Factorisation(f)
 1  p ← a prime such that f is square-free in F_p[x], with p polynomial in the bit size of f
 2  g ← an irreducible factor of f in F_p[x], with leading coefficient 1 (using Berlekamp's deterministic method)
 3  if deg g = n
 4    then return "f is irreducible in Q[x]"
 5  N ← an exponent with p^N large enough for Lemma 5.81, m ← the dimension of the lattice L
 6  g ← the lifting of g modulo p^N obtained with Hensel-Lifting
 7  (b_1, …, b_m) ← a basis of the lattice L in (5.24)
 8  (b_1, …, b_m) ← the Lovász-reduction of the basis
 9  return gcd(f, b_1), computed in Q[x]
Proof. The general factorisation problem, using the method introduced in the discussion of the Berlekamp-Zassenhaus procedure, can be reduced to the case in which the polynomial $f$ is square-free and has leading coefficient 1. By the observations made there, the steps in lines 1-7 can be performed in polynomial time. In line 8, the Lovász reduction can be carried out efficiently (Corollary 5.65). In line 9, we may use a modular version of the Euclidean algorithm to avoid intermediate expression swell (see Chapter 6).
The correctness of the method is asserted by Theorem 5.82. The LLL algorithm can be applied repeatedly to factor the polynomials in the output, in case they are not already irreducible.
One can show that the Hensel lifting can be performed using operations with moderately sized integers, and that the total cost of the version of the LLL algorithm above is polynomial in the bit size of $f$.
Exercise 5.5-2 Show that the polynomials $p^N, p^N x, \ldots, p^N x^{d-1}, g, xg, \ldots, x^{m-d-1} g$, or, more precisely, their coefficient vectors, form a basis of the lattice $L$ in (5.24).
Hint. It suffices to show that the polynomials $p^N x^j$ ($j = d, \ldots, m-1$) can be expressed as integer combinations of the given polynomials. To show this, divide $p^N x^j$ by $g$ and compute the remainder.
The trace in finite fields
Let $\mathbb{F}_q \subseteq \mathbb{F}_{q^k}$ be finite fields. The definition of the trace map $tr$ on $\mathbb{F}_{q^k}$ is as follows: if $\alpha \in \mathbb{F}_{q^k}$ then
$$ tr(\alpha) = \alpha + \alpha^{q} + \alpha^{q^2} + \cdots + \alpha^{q^{k-1}} . $$
a. Show that the map $tr$ is $\mathbb{F}_q$-linear and its image is precisely $\mathbb{F}_q$.
Hint. Use the fact that $tr$ is defined using a polynomial with degree $q^{k-1}$ to show that $tr$ is not identically zero.
b. Let $(\alpha, \beta)$ be a uniformly distributed random pair of elements from $\mathbb{F}_{q^k} \times \mathbb{F}_{q^k}$. Then the probability that $tr(\alpha) = tr(\beta)$ is $1/q$.
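For concreteness, the trace can be computed by repeated $q$-th powering in a polynomial model of the field. The sketch below (all names ours) works in $\mathbb{F}_{p^k} = \mathbb{F}_p[y]/(h)$ for a monic irreducible $h$ of degree $k$:

```python
def fmul(a, b, h, p):
    """Multiply a and b in F_p[y]/(h); h monic, ascending coefficient lists."""
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    for i in range(len(r) - len(h), -1, -1):   # reduce modulo h
        c = r[i + len(h) - 1]
        for j, y in enumerate(h):
            r[i + j] = (r[i + j] - c * y) % p
    return (r[: len(h) - 1] + [0] * (len(h) - 1))[: len(h) - 1]

def trace(a, h, p):
    """tr(a) = a + a^p + ... + a^(p^(k-1)) in F_p[y]/(h), where k = deg h."""
    k = len(h) - 1
    total = [0] * k
    power = a + [0] * (k - len(a))
    for _ in range(k):
        total = [(x + y) % p for x, y in zip(total, power)]
        acc = [1] + [0] * (k - 1)
        for _ in range(p):                     # raise power to the p-th power
            acc = fmul(acc, power, h, p)
        power = acc
    return total

# F_9 = F_3[y]/(y^2 + 1)
h, p = [1, 0, 1], 3
```

In $\mathbb{F}_9$ one finds $tr(y) = y + y^3 = 0$ and $tr(1) = 1 + 1 = 2$; the values indeed lie in $\mathbb{F}_3$, as part (a) asserts.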
Let $d \ge 1$ and let $f \in \mathbb{F}_2[x]$ be a polynomial of the form
$$ f = f_1 f_2 \cdots f_r , \tag{5.25} $$
where the $f_i$ are pairwise relatively prime and irreducible polynomials with degree $d$ in $\mathbb{F}_2[x]$. Also assume that $r \ge 2$.
a. Let $u \in \mathbb{F}_2[x]$ be a uniformly distributed random polynomial with degree less than $\deg f$. Then the greatest common divisor
$$ \gcd\left( u + u^{2} + u^{4} + \cdots + u^{2^{d-1}},\; f \right) $$
is a proper divisor of $f$ with probability at least $1/2$.
Hint. Apply the previous exercise taking $q = 2$ and $k = d$, and follow the argument in the proof of Theorem 5.38.
b. Using part (a), give a randomised polynomial time method for factoring a polynomial of the form (5.25) over $\mathbb{F}_2$.
Divisors and zero divisors
Let $F$ be a field. The ring $R$ is said to be an $F$-algebra (in case $F$ is clear from the context, $R$ is simply called an algebra), if $R$ is a vector space over $F$, and $\lambda (rs) = (\lambda r) s = r (\lambda s)$ holds for all $\lambda \in F$ and $r, s \in R$. It is easy to see that the rings $F[x]$ and $F[x]/(f)$ are $F$-algebras.
Let $A$ be a finite-dimensional $F$-algebra. For an arbitrary $a \in A$, we may consider the map $L_a : A \to A$ defined as $L_a(x) = ax$ for $x \in A$. The map $L_a$ is $F$-linear, and so we may speak about its minimal polynomial, its characteristic polynomial, and its trace. In fact, if $I$ is an ideal in $A$, then $I$ is an invariant subspace of $L_a$, and so we can restrict $L_a$ to $I$, and we may consider the minimal polynomial, the characteristic polynomial, and the trace of the restriction.
a. Let $f, g \in F[x]$ with $f \ne 0$. Show that the residue class of $g$ is a zero divisor in the ring $F[x]/(f)$ if and only if $f$ does not divide $g$ and $\gcd(f, g) \ne 1$.
b. Let $A$ be an algebra over $F$, and let $a \in A$ be an element with minimal polynomial $m$. Show that if $m$ is not irreducible over $F$, then $A$ contains a zero divisor. To be precise, if $m = m_1 m_2$ is a non-trivial factorisation ($m_1, m_2 \in F[x]$ are non-constant), then $m_1(a)$ and $m_2(a)$ form a pair of zero divisors, that is, both of them are non-zero, but their product is zero.
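Part (a) translates directly into an algorithm: for a non-zero $g$ with $\deg g < \deg f$, the class of $g$ is a zero divisor in $F[x]/(f)$ exactly when $\gcd(f, g)$ is non-constant. A sketch over $\mathbb{Q}$ using exact rational arithmetic (names are ours):

```python
from fractions import Fraction

def poly_gcd(a, b):
    """Monic gcd in Q[x]; ascending coefficient lists of ints or Fractions."""
    def trim(c):
        c = [Fraction(x) for x in c]
        while len(c) > 1 and c[-1] == 0:
            c.pop()
        return c
    a, b = trim(a), trim(b)
    while b != [Fraction(0)]:
        r = a[:]
        for i in range(len(r) - len(b), -1, -1):   # remainder of a by b
            c = r[i + len(b) - 1] / b[-1]
            for j, y in enumerate(b):
                r[i + j] -= c * y
        a, b = b, trim(r[: len(b) - 1] or [0])
    return [c / a[-1] for c in a]                  # normalise to a monic gcd

def is_zero_divisor(g, f):
    """Residue class of g in Q[x]/(f), assuming 0 <= deg g < deg f and g != 0:
    zero divisor exactly when gcd(f, g) is non-constant."""
    return len(poly_gcd(f, g)) > 1
```

For example, in $\mathbb{Q}[x]/(x^2-1)$ the class of $x - 1$ is a zero divisor (its product with $x + 1$ is zero), while the class of $x$ is invertible.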
Factoring polynomials over algebraic number fields
a. Let $F$ be a field with characteristic zero and let $A$ be a finite-dimensional $F$-algebra with an identity element. Let us assume that $A = A_1 \oplus A_2$ where $A_1$ and $A_2$ are non-zero $F$-algebras. Let $b_1, \ldots, b_n$ be a basis of $A$ over $F$. Show that there is an $i$ such that the minimal polynomial of $b_i$ is not irreducible in $F[x]$.
Hint. This exercise is for readers who are familiar with the elements of linear algebra. Let us assume that the minimal polynomial of $b \in A$ is the irreducible polynomial $m$. Let $k_j$ be the characteristic polynomial of $L_b$ on the invariant subspace $A_j$ (for $j = 1, 2$). Here $A_1$ and $A_2$ are the sets of elements of the form $(a_1, 0)$ and $(0, a_2)$, respectively, where $a_1 \in A_1$ and $a_2 \in A_2$. Because of our conditions, we can find suitable exponents $d_1, d_2$ such that $k_j = m^{d_j}$. This implies that the trace of the map $L_b$ on the subspace $A_j$ is $d_j$ times the sum of the roots of $m$, while $\dim A_j = d_j \deg m$. Set $T_j$ equal to the trace of $L_b$ on $A_j$. Obviously, $T_j / \dim A_j$ equals the sum of the roots of $m$ divided by $\deg m$ for both $j$, which gives $T_1 \dim A_2 = T_2 \dim A_1$. If the assertion of the exercise is false, then the latter equation holds for all $b = b_i$, and so, as the trace is linear, it holds for all $b \in A$. This, however, leads to a contradiction: if $b = (1, 0)$ (1 denotes the unity in $A_1$), then clearly $T_1 = \dim A_1$ and $T_2 = 0$, whence $\dim A_1 \dim A_2 = 0$.
b. Let $F$ be an algebraic number field, that is, a field of the form $\mathbb{Q}(\vartheta)$ where $\vartheta \in \mathbb{C}$, and there is an irreducible polynomial $g \in \mathbb{Q}[x]$ such that $g(\vartheta) = 0$. Let $f \in F[x]$ be a square-free polynomial and set $A = F[x]/(f)$. Show that $A$ is a finite-dimensional algebra over $\mathbb{Q}$. More precisely, if $\deg g = k$ and $\deg f = n$, then the elements of the form $\vartheta^i [x]^j$ ($0 \le i < k$, $0 \le j < n$) form a basis of $A$ over $\mathbb{Q}$.
c. Show that if $f$ is reducible over $F$, then there are $\mathbb{Q}$-algebras $A_1$ and $A_2$ such that $A \cong A_1 \oplus A_2$.
Hint. Use the Chinese remainder theorem.
d. Suppose that an algebraic number field $F = \mathbb{Q}(\vartheta)$ and a polynomial $f \in F[x]$ as above are given. Assume, further, that $f$ is square-free and is not irreducible over $F$. Show that $f$ can be factored into the product of two non-constant polynomials in polynomial time.
Hint. By the previous remarks, the minimal polynomial over $\mathbb{Q}$ of at least one of the elements $\vartheta^i [x]^j$ ($0 \le i < k$, $0 \le j < n$) is not irreducible in $\mathbb{Q}[x]$. Using the LLL algorithm, the minimal polynomials can be factored efficiently in $\mathbb{Q}[x]$. From a factorisation of such a minimal polynomial, a zero divisor of $A$ can be obtained, and this can be used to find a proper divisor of $f$ in $F[x]$.
The abstract algebraic concepts discussed in this chapter can be found in many textbooks; see, for instance, Hungerford's book [120].
Our main algorithmic topics, namely the factorisation of polynomials and lattice reduction, are thoroughly treated in the book by von zur Gathen and Gerhard [83]. We recommend the same book to readers who are interested in efficient methods for solving the basic problems concerning polynomials. Theorem 8.23 of that book estimates the cost of multiplying polynomials by the Schönhage-Strassen method, while Corollary 11.6 is concerned with the cost of the asymptotically fast implementation of the Euclidean algorithm. Ajtai's result about shortest lattice vectors was published in [8].
The method by Kaltofen and Shoup is a randomised algorithm for factoring polynomials over finite fields, and currently it has one of the best time bounds among the known algorithms. Further competitive methods were suggested by von zur Gathen and Shoup, and also by Huang and Pan. Among the deterministic methods, the one by von zur Gathen and Shoup is the current champion. An important related problem is constructing the field $\mathbb{F}_{q^d}$; the fastest randomised method for this is due to Shoup. For finding a square-free factorisation, Yun gave a particularly efficient algorithm.
The best methods to solve the problem of lattice reduction and that of factoring polynomials over the rationals use modular and numerical techniques. After slightly modifying the definition of reduced bases, Storjohann presented an algorithm for the former problem with the best known bit complexity. (We use the original definition introduced in the paper by Lenstra, Lenstra and Lovász [161].) We also mention Schönhage's fast method for factoring polynomials with integer coefficients.
Besides factoring polynomials with rational coefficients, lattice reduction can also be used to solve many other problems: breaking knapsack cryptosystems and random number generators based on linear congruences, simultaneous Diophantine approximation, and finding integer linear dependencies among real numbers (the latter problem plays an important role in experiments that attempt to find mathematical identities). These and other related problems are discussed in the book [83].
A further exciting application area is the numerical solution of Diophantine equations. One can read about these developments in the books by Smart [245] and Gaál [79]. The difficulty of finding a shortest lattice vector was established in Ajtai's paper [8].
Finally, we remark that practical implementations of the polynomial factoring methods involving lattice reduction are not competitive with implementations of the Berlekamp-Zassenhaus algorithm, which, in the worst case, has exponential complexity. Nevertheless, basis reduction performs very well in practice: in fact, it is usually much faster than its theoretically proven bounds would suggest. For some of the problems in the application areas listed above, we have no other useful method.
The work of the authors was supported in part by grants NK72845 and T77476 of the Hungarian Scientific Research Fund.