Our aim in the rest of this chapter is to present the Lenstra-Lenstra-Lovász algorithm for factoring polynomials with rational coefficients. First we study a geometric problem, which is interesting also in its own right, namely finding short lattice vectors. Finding a shortest non-zero lattice vector is hard: by a result of Ajtai, if this problem could be solved in polynomial time with a randomised algorithm, then so could all the problems in the complexity class $NP$. For a lattice with dimension $n$, the lattice reduction method presented in this chapter outputs, in polynomial time, a lattice vector whose length is not greater than $2^{(n-1)/2}$ times the length of a shortest non-zero lattice vector.

First, we recall a couple of concepts related to real vector spaces. Let $\mathbb{R}^n$ denote the collection of real vectors of length $n$. It is routine to check that $\mathbb{R}^n$ is a vector space over the field $\mathbb{R}$. The **scalar product** of two vectors $u = (u_1,\dots,u_n)$ and $v = (v_1,\dots,v_n)$ in $\mathbb{R}^n$ is defined as the number $(u,v) = u_1v_1 + u_2v_2 + \cdots + u_nv_n$. The quantity $|u| = \sqrt{(u,u)}$ is called the **length** of the vector $u$. The vectors $u$ and $v$ are said to be **orthogonal** if $(u,v) = 0$. A basis $b_1,\dots,b_n$ of the space $\mathbb{R}^n$ is said to be orthonormal if, for all $i$, $(b_i,b_i) = 1$ and, for all $i$ and $j$ such that $i \neq j$, we have $(b_i,b_j) = 0$.

The rank and the determinant of a real matrix, and positive definite matrices, are discussed in Section 31.1.

**Definition 5.46 **
*A set $L \subseteq \mathbb{R}^n$ is said to be a lattice, if $L$ is a subgroup with respect to addition, and $L$ is discrete, in the sense that each bounded region of $\mathbb{R}^n$ contains only finitely many points of $L$. The rank of the lattice $L$ is the dimension of the subspace generated by $L$. Clearly, the rank of $L$ coincides with the cardinality of a maximal linearly independent subset of $L$. If $L$ has rank $n$, then $L$ is said to be a full lattice. The elements of $L$ are called lattice vectors or lattice points.*

**Definition 5.47 **
*Let $b_1,\dots,b_m$ be linearly independent elements of a lattice $L$. If all the elements of $L$ can be written as linear combinations of the elements $b_1,\dots,b_m$ with integer coefficients, then the collection $b_1,\dots,b_m$ is said to be a basis of $L$.*

In this case, as the vectors $b_1,\dots,b_m$ are linearly independent, all vectors of $L$ can uniquely be written as real linear combinations of $b_1,\dots,b_m$.

By the following theorem, the lattices are precisely those additive subgroups of $\mathbb{R}^n$ that have bases.

**Theorem 5.48 **
*Let $b_1,\dots,b_m$ be linearly independent vectors in $\mathbb{R}^n$ and let $L$ be the set of integer linear combinations of $b_1,\dots,b_m$. Then $L$ is a lattice and the vectors $b_1,\dots,b_m$ form a basis of $L$. Conversely, if $L$ is a lattice in $\mathbb{R}^n$, then it has a basis.*

**Proof. **Obviously, $L$ is a subgroup, that is, it is closed under addition and subtraction. In order to show that it is discrete, let us assume that $m = n$. This assumption means no loss of generality, as the subspace spanned by $b_1,\dots,b_m$ is isomorphic to $\mathbb{R}^m$. In this case, the linear map $\varphi$ taking the standard basis vectors $e_1,\dots,e_n$ to $b_1,\dots,b_n$ is an invertible linear map of $\mathbb{R}^n$ onto itself. Consequently, both $\varphi$ and $\varphi^{-1}$ are continuous. Hence the image of a discrete set under $\varphi$ is also discrete. As $L = \varphi(\mathbb{Z}^n)$, it suffices to show that $\mathbb{Z}^n$ is discrete in $\mathbb{R}^n$. This, however, is obvious: if $H$ is a bounded region in $\mathbb{R}^n$, then there is a positive integer $K$, such that the absolute value of each of the coordinates of the elements of $H$ is at most $K$. Thus $\mathbb{Z}^n$ has at most $(2K+1)^n$ elements in $H$.

The second assertion is proved by induction on the rank $m$ of $L$. If $L = \{0\}$, then we have nothing to prove. Otherwise, by discreteness, there is a shortest non-zero vector, say $v$, in $L$. We claim that the vectors of $L$ that lie on the line $\{\lambda v : \lambda \in \mathbb{R}\}$ are exactly the integer multiples of $v$. Indeed, suppose that $\lambda$ is a real number and consider the vector $\lambda v \in L$. As usual, $\{\lambda\}$ denotes the fractional part of $\lambda$. Then $\{\lambda\} v = \lambda v - \lfloor\lambda\rfloor v$, that is, $\{\lambda\} v$ is the difference of two vectors of $L$, and so is itself in $L$. If $\{\lambda\} \neq 0$, then $0 < |\{\lambda\} v| < |v|$, which contradicts the fact that $v$ was a shortest non-zero vector in $L$. Hence $\lambda$ is an integer, and our claim holds.

The claim verified in the previous paragraph shows that the theorem is valid when $m = 1$. Let us, hence, assume that $m \ge 2$. We may write an element $x$ of $\mathbb{R}^n$ as the sum of two vectors, one of them parallel to $v$ and the other one orthogonal to $v$:

$$x = x^* + \frac{(x,v)}{(v,v)}\, v .$$

Simple computation shows that $(x^*, v) = 0$, and that the map $x \mapsto x^*$ is linear. Let $L^* = \{x^* : x \in L\}$. We show that $L^*$ is a lattice in the subspace, or hyperplane, formed by the vectors orthogonal to $v$. The map $x \mapsto x^*$ is linear, and so $L^*$ is closed under addition and subtraction. In order to show that it is discrete, let $H$ be a bounded region of the hyperplane. We are required to show that only finitely many points of $L^*$ are in $H$. Let $x \in L$ be a vector such that $x^* \in H$. Let $k$ be the integer that is closest to the number $(x,v)/(v,v)$ and let $y = x - kv$. Clearly, $y \in L$ and $y^* = x^*$. Further, we also have that $|(y,v)/(v,v)| \le 1/2$, and so the vector $y = y^* + \frac{(y,v)}{(v,v)} v$ lies in the bounded region $H + \{\lambda v : -1/2 \le \lambda \le 1/2\}$. However, there are only finitely many lattice vectors $y$ in this latter region, and so $H$ also has only finitely many points of $L^*$.

We have, thus, shown that $L^*$ is a lattice in the hyperplane, and, by the induction hypothesis, it has a basis. Let $b_2,\dots,b_m \in L$ be lattice vectors such that the vectors $b_2^*,\dots,b_m^*$ form a basis of the lattice $L^*$. Then, for an arbitrary lattice vector $x \in L$, the vector $x^*$ can be written in the form $x^* = \lambda_2 b_2^* + \cdots + \lambda_m b_m^*$ where the coefficients $\lambda_i$ are integers. Then the vector $y = x - (\lambda_2 b_2 + \cdots + \lambda_m b_m)$ lies in $L$ and, as the map $x \mapsto x^*$ is linear, we have $y^* = x^* - (\lambda_2 b_2^* + \cdots + \lambda_m b_m^*) = 0$. This, however, implies that $y$ is a lattice vector on the line $\{\lambda v : \lambda \in \mathbb{R}\}$, and so $y = \lambda_1 v$ with some integer $\lambda_1$. Therefore $x = \lambda_1 v + \lambda_2 b_2 + \cdots + \lambda_m b_m$, that is, $x$ is an integer linear combination of the vectors $v, b_2, \dots, b_m$. Thus the vectors $v, b_2, \dots, b_m$ form a basis of $L$.

A lattice $L$ is always full in the linear subspace spanned by $L$. Thus, without loss of generality, we will consider only full lattices, and, in the sequel, by a lattice we will always mean a *full lattice*.

**Example 5.4 **Two familiar lattices in $\mathbb{R}^2$:

1. The **square lattice** is the lattice in $\mathbb{R}^2$ with basis $b_1 = (1,0)$, $b_2 = (0,1)$.

2. The **triangular lattice** is the lattice with basis $b_1 = (1, 0)$, $b_2 = (1/2, \sqrt{3}/2)$.
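These two lattices are easy to experiment with numerically. The following Python sketch (the helper names `lattice_points` and `shortest_nonzero` are ours, not from the text) enumerates the integer combinations of a basis within a bounded coefficient range; in both example lattices the shortest non-zero vector found this way has length 1.

```python
import math

def lattice_points(b1, b2, N):
    """All integer combinations z1*b1 + z2*b2 with |z1|, |z2| <= N."""
    return [(z1 * b1[0] + z2 * b2[0], z1 * b1[1] + z2 * b2[1])
            for z1 in range(-N, N + 1) for z2 in range(-N, N + 1)]

def shortest_nonzero(points):
    """Length of a shortest non-zero vector among the given points."""
    return min(math.hypot(x, y) for x, y in points if (x, y) != (0.0, 0.0))

# Square lattice: basis (1,0), (0,1).
square = lattice_points((1.0, 0.0), (0.0, 1.0), 3)
# Triangular lattice: basis (1,0), (1/2, sqrt(3)/2).
triangular = lattice_points((1.0, 0.0), (0.5, math.sqrt(3) / 2), 3)
```

Restricting the coefficients to a box only shows a finite part of the lattice, in line with discreteness: any bounded region contains finitely many lattice points.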

The following simple fact will often be used.

**Lemma 5.49 **
*Let $L$ be a lattice in $\mathbb{R}^n$, and let $b_1,\dots,b_n$ be a basis of $L$. If we reorder the basis vectors $b_1,\dots,b_n$, or if we add to a basis vector an integer linear combination of the other basis vectors, then the collection so obtained will also form a basis of $L$.*

Let $b_1,\dots,b_n$ be a basis in $\mathbb{R}^n$. The Gram matrix of $b_1,\dots,b_n$ is the matrix $G$ with entries $G_{ij} = (b_i, b_j)$. The matrix $G$ is positive definite, since it is of the form $G = AA^T$ where $A$ is a full-rank matrix (see Theorem 31.6). Consequently, $\det G$ is a positive real number.

**Lemma 5.50 **
*Let $b_1,\dots,b_n$ and $w_1,\dots,w_n$ be bases of a lattice $L$ and let $B$ and $W$ be the Gram matrices of $b_1,\dots,b_n$ and of $w_1,\dots,w_n$. Then the determinants of $B$ and $W$ coincide.*

**Proof. **For all $i$, the vector $w_i$ is of the form $w_i = \sum_{j=1}^n \alpha_{ij} b_j$ where the $\alpha_{ij}$ are integers. Let $A$ be the matrix with entries $\alpha_{ij}$. Then, as

$$(w_i, w_k) = \Big(\sum_{j=1}^n \alpha_{ij} b_j,\ \sum_{l=1}^n \alpha_{kl} b_l\Big) = \sum_{j=1}^n \sum_{l=1}^n \alpha_{ij}\,\alpha_{kl}\,(b_j, b_l) ,$$

we have $W = ABA^T$, and so $\det W = (\det A)^2 \det B$. The number $\det W/\det B = (\det A)^2$ is a non-negative integer, since the entries of $A$ are integers. Swapping the two bases, the same argument shows that $\det B/\det W$ is also a non-negative integer. This can only happen if $\det B = \det W$.

**Definition 5.51 **
**(The determinant of a lattice)**
*The determinant of a lattice $L$ is $\det L = \sqrt{\det G}$ where $G$ is the Gram matrix of a basis of $L$.*

By the previous lemma, $\det L$ is independent of the choice of the basis. The quantity $\det L$ has a geometric meaning, as $\det L$ is the volume of the solid body, the so-called parallelepiped, formed by the vectors $b_1,\dots,b_n$, that is, of the set $\{\lambda_1 b_1 + \cdots + \lambda_n b_n : 0 \le \lambda_1,\dots,\lambda_n \le 1\}$.
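As a numerical illustration (the helpers `gram`, `det` and `lattice_det` below are ours, not from the text), one can compute $\det L$ from the Gram matrix of a basis and check that a change of basis as in Lemma 5.49 leaves it unchanged:

```python
import math

def gram(basis):
    """Gram matrix: entry (i, j) is the scalar product (b_i, b_j)."""
    return [[sum(x * y for x, y in zip(bi, bj)) for bj in basis] for bi in basis]

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def lattice_det(basis):
    """det L = sqrt(det G) where G is the Gram matrix of the basis."""
    return math.sqrt(det(gram(basis)))

# Triangular lattice: det L = sqrt(3)/2, the area of the fundamental parallelogram.
b1, b2 = [1.0, 0.0], [0.5, math.sqrt(3) / 2]
d1 = lattice_det([b1, b2])
# Adding an integer multiple of b1 to b2 gives another basis of the same lattice.
d2 = lattice_det([b1, [x + 3 * y for x, y in zip(b2, b1)]])
```

Here `d1` and `d2` agree, as Lemma 5.50 predicts.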

**Remark 5.52 **
*Assume that the coordinates of the vectors $b_1,\dots,b_n$ in an orthonormal basis of $\mathbb{R}^n$ are $(\beta_{i1},\dots,\beta_{in})$ ($1 \le i \le n$). Then the Gram matrix of the vectors $b_1,\dots,b_n$ is $G = AA^T$ where $A$ is the matrix with entries $\beta_{ij}$. Consequently, if $b_1,\dots,b_n$ is a basis of a lattice $L$, then $\det L = |\det A|$.*

**Proof. **The assertion follows from the equations $(b_i, b_j) = \sum_{k=1}^n \beta_{ik}\,\beta_{jk}$.

We will need a fundamental result in convex geometry. In order to prepare for this, we introduce some simple notation. Let $K \subseteq \mathbb{R}^n$. The set $K$ is said to be **centrally symmetric**, if $v \in K$ implies $-v \in K$. The set $K$ is **convex**, if $u, v \in K$ implies $\lambda u + (1-\lambda)v \in K$ for all $0 \le \lambda \le 1$.

**Theorem 5.53 **
**(Minkowski's Convex Body Theorem)**
*Let $L$ be a lattice in $\mathbb{R}^n$ and let $K \subseteq \mathbb{R}^n$ be a centrally symmetric, bounded, closed, convex set. Suppose that the volume of $K$ is at least $2^n \det L$. Then $K \cap L \neq \{0\}$.*

**Proof. **By the conditions, the volume of the set $\frac12 K = \{\frac12 v : v \in K\}$ is at least $\det L$. Let $b_1,\dots,b_n$ be a basis of the lattice $L$ and let $P = \{\lambda_1 b_1 + \cdots + \lambda_n b_n : 0 \le \lambda_i < 1\}$ be the corresponding half-open parallelepiped. Then each of the vectors in $\mathbb{R}^n$ can be written uniquely in the form $x + v$ where $x \in L$ and $v \in P$. For an arbitrary lattice vector $x \in L$, we let

$$K_x = \{v \in P : x + v \in \tfrac12 K\} .$$

As the sets $P$ and $\frac12 K$ are bounded, so is the set

$$\tfrac12 K - P = \{u - v : u \in \tfrac12 K,\ v \in P\} .$$

As $L$ is discrete, $L$ only has finitely many points in $\frac12 K - P$; that is, $K_x = \emptyset$, except for finitely many $x \in L$. Hence $T = \{x \in L : K_x \neq \emptyset\}$ is a finite set, and, moreover, the set $\frac12 K$ is the disjoint union of the sets $x + K_x$ ($x \in T$). Therefore, the total volume of these sets is at least $\det L$. For a given $x \in T$, consider the closures $\overline{K_x}$ and $\overline{P}$ of the sets $K_x$ and $P$, respectively: $\overline{K_x} \subseteq \overline{P}$, and the volume of $\overline{K_x}$ coincides with that of $K_x$. The total volume of the closed sets $\overline{K_x}$ ($x \in T$) is at least as large as the volume $\det L$ of the set $\overline{P}$, and so these sets cannot be pairwise disjoint: there are lattice vectors $x \neq y$ in $T$ and a vector $v$ such that $v \in \overline{K_x} \cap \overline{K_y}$, that is, $x + v \in \frac12 K$ and $y + v \in \frac12 K$ (here we use that $K$, and hence $\frac12 K$, is closed). As $K$ is centrally symmetric, we find that $-(y + v) \in \frac12 K$. As $K$ is convex, we also have $\frac{(x+v) - (y+v)}{2} = \frac{x - y}{2} \in \frac12 K$. Hence $x - y \in K$. On the other hand, the difference $x - y$ of two lattice points is a non-zero vector in $L$. Thus $K \cap L \neq \{0\}$.

Minkowski's theorem is sharp. For, let $\epsilon$ be an arbitrary positive number, and let $L = \mathbb{Z}^n$ be the lattice of points with integer coordinates in $\mathbb{R}^n$. Let $K$ be the set of vectors $(v_1,\dots,v_n) \in \mathbb{R}^n$ for which $-1+\epsilon \le v_i \le 1-\epsilon$ ($1 \le i \le n$). Then $K$ is bounded, closed, convex, centrally symmetric with respect to the origin, its volume is $(2-2\epsilon)^n$, yet $K \cap L = \{0\}$.

**Corollary 5.54 **
*Let $L$ be a lattice in $\mathbb{R}^n$. Then $L$ has a non-zero lattice vector whose length is at most $\sqrt{n}\,\sqrt[n]{\det L}$.*

**Proof. **Let $s = \sqrt[n]{\det L}$ and let $K$ be the following centrally symmetric cube with side length $2s$:

$$K = \{(v_1,\dots,v_n) \in \mathbb{R}^n : -s \le v_i \le s,\ 1 \le i \le n\} .$$

The volume of the cube $K$ is exactly $(2s)^n = 2^n \det L$, and so it contains a non-zero lattice vector. However, the vectors in $K$ have length at most $\sqrt{n}\, s = \sqrt{n}\,\sqrt[n]{\det L}$.

We remark that, for $n \ge 2$, we can find an even shorter lattice vector, if we replace the cube in the proof of the previous assertion by a suitable ball.

Our goal is to design an algorithm that finds a short non-zero vector in a given lattice. In this section we consider this problem for two-dimensional lattices, which is the simplest non-trivial case. Then there is an elegant, instructive, and efficient algorithm that finds short lattice vectors. This algorithm also serves as a basis for the higher-dimensional cases. Let $L$ be a lattice with basis $\{a, b\}$ in $\mathbb{R}^2$.

*
Gauss($a, b$)
*

1 `FOREVER`

2    `DO`

3       $b \leftarrow$ the shortest lattice vector on the line $b + \lambda a$

4    `IF` $|b| < |a|$

5       `THEN` $b \leftrightarrow a$

6       `ELSE` `RETURN` $(a, b)$
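Gauss' algorithm can be sketched directly in Python (the function name `gauss_reduce` is ours; the comments refer to the pseudocode line numbers). It works on integer coordinate pairs, so the lattice is preserved exactly:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def gauss_reduce(a, b):
    """Gauss' algorithm: returns a basis (a, b) of the same lattice
    in which a is a shortest non-zero lattice vector."""
    while True:
        # Line 3: replace b by the shortest lattice vector on the line
        # b + lambda*a, i.e. subtract the nearest integer multiple of a.
        m = round(dot(a, b) / dot(a, a))
        b = (b[0] - m * a[0], b[1] - m * a[1])
        if dot(b, b) < dot(a, a):
            a, b = b, a          # line 5: swap
        else:
            return a, b          # line 6

# (5,3), (7,4) is a basis of Z^2 (the determinant is -1),
# so the reduced first vector must have length 1.
a, b = gauss_reduce((5, 3), (7, 4))
```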

In order to analyse the procedure, the following facts will be useful.

**Lemma 5.55 **
*Suppose that $a$ and $b$ are two linearly independent vectors in the plane $\mathbb{R}^2$, and let $L$ be the lattice generated by them. The vector $b$ is a shortest non-zero vector of $L$ on the line $\{b + \lambda a : \lambda \in \mathbb{R}\}$ if and only if*

$$\left|\frac{(a,b)}{(a,a)}\right| \le \frac12 . \tag{5.10}$$

**Proof. **We write $b$ as the sum of a vector parallel to $a$ and a vector orthogonal to $a$:

$$b = \lambda a + b^*, \qquad \lambda = \frac{(a,b)}{(a,a)}, \quad (a, b^*) = 0 . \tag{5.11}$$

Then, as the vectors $a$ and $b^*$ are orthogonal, for every integer $m$,

$$|b - ma|^2 = (\lambda - m)^2\,|a|^2 + |b^*|^2 .$$

This quantity takes its smallest value for the integer $m$ that is the closest to the number $\lambda$. Hence $m = 0$ gives the minimal value if and only if (5.10) holds.

**Lemma 5.56 **
*Suppose that the linearly independent vectors $a$ and $b$ form a basis of a lattice $L \subseteq \mathbb{R}^2$ and that inequality (5.10) holds. Assume, further, that*

$$|b|^2 \ge \frac34\,|a|^2 . \tag{5.12}$$

*Write $b = \lambda a + b^*$, as in (5.11), as the sum of the vector $\lambda a$, which is parallel to $a$, and the vector $b^*$, which is orthogonal to $a$. Then*

$$|b^*|^2 \ge \frac12\,|a|^2 . \tag{5.13}$$

*Further, either $a$ or $b$ is a shortest non-zero vector in $L$.*

**Proof. **By (5.10) and (5.12),

$$\frac34\,|a|^2 \le |b|^2 = \lambda^2|a|^2 + |b^*|^2 \le \frac14\,|a|^2 + |b^*|^2 .$$

Rearranging the last displayed line, we obtain $|b^*|^2 \ge \frac12\,|a|^2$.

The length of a vector $v = \alpha a + \beta b \in L$ (with integers $\alpha$, $\beta$) can be computed, using (5.11), as

$$|v|^2 = |(\alpha + \beta\lambda)a + \beta b^*|^2 = (\alpha + \beta\lambda)^2\,|a|^2 + \beta^2\,|b^*|^2 ,$$

which implies $|v|^2 \ge \beta^2|b^*|^2 \ge \frac12\beta^2|a|^2$ whenever $\beta \neq 0$. If $\beta = 0$ and $\alpha \neq 0$, then $|v| = |\alpha|\,|a| \ge |a|$. Similarly, $|\beta| \ge 2$ and (5.13) gives $|v|^2 \ge 2|a|^2 > |a|^2$. It remains to consider the case when $\beta = \pm 1$ and $\alpha$ is arbitrary. As $|-v| = |v|$, we may assume that $\beta = 1$. In this case, however, $v$ is of the form $v = b + \alpha a$ ($\alpha \in \mathbb{Z}$), and, by Lemma 5.55, the vector $b$ is a shortest lattice vector on this line. Hence either $a$ or $b$ is a shortest non-zero vector in $L$.

**Theorem 5.57 **
*Let $v$ be a shortest non-zero lattice vector in $L$. Then Gauss' algorithm terminates after $O\big(1 + \lg(|b|/|v|)\big)$ iterations, and the resulting vector $a$ is a shortest non-zero vector in $L$.*

**Proof. **First we verify that, during the course of the algorithm, the vectors $a$ and $b$ will always form a basis for the lattice $L$. If, in line 3, we replace $b$ by a vector of the form $b' = b - ma$, then, as $b = b' + ma$, the pair $\{a, b'\}$ remains a basis of $L$. The swap in line 5 only concerns the order of the basis vectors. Thus $a$ and $b$ always form a basis of $L$, as we claimed.

By Lemma 5.55, inequality (5.10) holds after the first step (line 3) in the loop, and so we may apply Lemma 5.56 to the scenario before lines 4–5. This shows that if neither $a$ nor $b$ is shortest, then (5.12) fails, that is, $|b|^2 < \frac34|a|^2$. Thus, except perhaps for the last execution of the loop, after each swap in line 5, the length of $a$ is decreased by a factor of at least $2/\sqrt{3}$. Thus we obtain the bound $O\big(1 + \lg(|b|/|v|)\big)$ for the number of executions of the loop. Lemma 5.56 implies also that the vector $a$ at the end is a shortest non-zero vector in $L$.

Gauss' algorithm gives an efficient polynomial time method for computing a shortest vector in a two-dimensional lattice $L$. The analysis of the algorithm gives the following interesting theoretical consequence.

**Corollary 5.58 **
*Let $L$ be a lattice in $\mathbb{R}^2$, and let $v$ be a shortest non-zero lattice vector in $L$. Then $|v|^2 \le (2/\sqrt{3})\,\det L$.*

**Proof. **Let $b$ be a vector in $L$ such that $b$ is linearly independent of $v$ and (5.10) holds with $a = v$ (such a $b$ exists: replace any vector that is linearly independent of $v$ by the shortest lattice vector on its line, as in Lemma 5.55). Then, as $v$ is a shortest non-zero lattice vector,

$$|v|^2 \le |b|^2 = \lambda^2|v|^2 + |b^*|^2 \le \frac14\,|v|^2 + |b^*|^2 ,$$

which yields $|b^*|^2 \ge \frac34\,|v|^2$. The area $\det L$ of the fundamental parallelogram can be computed using the well-known formula

$$\det L = |v| \cdot |b^*| ,$$

and so $\det L \ge \frac{\sqrt{3}}{2}\,|v|^2$. The number $|v|^2$ can now be bounded by the previous inequality.

Let $b_1,\dots,b_n$ be a linearly independent collection of vectors in $\mathbb{R}^n$. For an index $i$ with $1 \le i \le n$, we let $b_i^*$ denote the component of $b_i$ that is orthogonal to the subspace spanned by $b_1,\dots,b_{i-1}$. That is,

$$b_i = b_i^* + \sum_{j=1}^{i-1} \lambda_{ij}\, b_j ,$$

where

$$(b_i^*, b_j) = 0 \qquad (1 \le j \le i-1).$$

Clearly $b_1^* = b_1$. The vectors $b_1^*,\dots,b_{i-1}^*$ span the same subspace as the vectors $b_1,\dots,b_{i-1}$, and so, with suitable coefficients $\mu_{ij}$, we may write

$$b_i = b_i^* + \sum_{j=1}^{i-1} \mu_{ij}\, b_j^* , \tag{5.14}$$

and

$$(b_i^*, b_j^*) = 0 \qquad (j \neq i).$$

By the latter equations, the vectors $b_1^*,\dots,b_n^*$ form an orthogonal system, and so

$$\mu_{ij} = \frac{(b_i, b_j^*)}{(b_j^*, b_j^*)} \qquad (1 \le j < i \le n). \tag{5.15}$$

The set of the vectors $b_1^*,\dots,b_n^*$ is said to be the **Gram-Schmidt orthogonalisation** of the vectors $b_1,\dots,b_n$.
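The orthogonalisation translates directly into code. The following sketch (the names `dot` and `gram_schmidt` are ours) returns the vectors $b_i^*$ together with the coefficients $\mu_{ij}$; for a small example one can check that the $b_i^*$ are orthogonal and that the product of their lengths equals the volume of the fundamental parallelogram.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Returns (bstar, mu) with b_i = b_i^* + sum_{j<i} mu[i][j] * b_j^*,
    as in (5.14)-(5.15)."""
    n = len(basis)
    bstar, mu = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

# Example: the basis (3,1), (2,2) of a lattice with determinant 4.
bstar, mu = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```

Note that, unlike in the classical orthonormalisation, the vectors $b_i^*$ are not rescaled to unit length.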

**Lemma 5.59 **
*Let $L$ be a lattice with basis $b_1,\dots,b_n$. Then*

$$\det L = \prod_{i=1}^n |b_i^*| .$$

**Proof. **Set $\mu_{ii} = 1$ and $\mu_{ij} = 0$, if $j > i$. Then $b_i = \sum_{j=1}^n \mu_{ij} b_j^*$, and so

$$(b_i, b_k) = \sum_{j=1}^n \sum_{l=1}^n \mu_{ij}\,\mu_{kl}\,(b_j^*, b_l^*) ,$$

that is, $G = M G^* M^T$ where $G$ and $G^*$ are the Gram matrices of the collections $b_1,\dots,b_n$ and $b_1^*,\dots,b_n^*$, respectively, and $M$ is the matrix with entries $\mu_{ij}$. The matrix $M$ is a lower triangular matrix with ones in the main diagonal, and so $\det M = 1$ and $\det G = \det G^*$. As $G^*$ is a diagonal matrix with entries $(b_i^*, b_i^*)$, we obtain $\det L = \sqrt{\det G} = \prod_{i=1}^n |b_i^*|$.

**Corollary 5.60 **
**(Hadamard inequality)**
*$\det L \le |b_1| \cdot |b_2| \cdots |b_n|$.*

**Proof. **The vector $b_i$ can be written as the sum of the vector $b_i^*$ and a vector orthogonal to $b_i^*$, and hence $|b_i^*| \le |b_i|$. The assertion now follows from Lemma 5.59.

The vector $b_i^*$ is the component of $b_i$ orthogonal to the subspace spanned by the vectors $b_1,\dots,b_{i-1}$. Thus $b_i^*$ does not change if we subtract a linear combination of the vectors $b_1,\dots,b_{i-1}$ from $b_i$. If, in this linear combination, the coefficients are integers, then the new sequence will be a basis of the same lattice as the original. Similarly to the first step of the loop in Gauss' algorithm, we can make the numbers $\mu_{ij}$ in (5.15) small. The input of the following procedure is a basis $b_1,\dots,b_n$ of a lattice $L$.

*
Weak-Reduction($b_1,\dots,b_n$)
*

1 `FOR` $j \leftarrow n-1$ `DOWNTO` $1$

2    `DO` `FOR` $i \leftarrow j+1$ `TO` $n$

3       `DO` $b_i \leftarrow b_i - \lambda b_j$, where $\lambda$ is the integer nearest the number $\mu_{ij}$

4 `RETURN` $(b_1,\dots,b_n)$

**Definition 5.61 **
**(Weakly reduced basis)**
*A basis $b_1,\dots,b_n$ of a lattice $L$ is said to be weakly reduced if the coefficients $\mu_{ij}$ in (5.15) satisfy*

$$|\mu_{ij}| \le \frac12 \qquad (1 \le j < i \le n).$$

**Lemma 5.62 **
*The basis given by the procedure `Weak-Reduction` is weakly reduced.*
**Proof. **By the remark preceding the algorithm, we obtain that the vectors $b_1^*,\dots,b_n^*$ never change. Indeed, we only subtract linear combinations of vectors with index less than $i$ from $b_i$. Hence the inner instruction does not change the value of $\mu_{kl}$ with $k \neq i$. The values of the $\mu_{il}$ do not change for $l > j$ either, since $b_j$ has no component in the direction of $b_l^*$ for $l > j$. On the other hand, the instruction achieves, with the new $b_i$, that the inequality $|\mu_{ij}| \le 1/2$ holds:

$$\mu_{ij}^{\text{new}} = \frac{(b_i - \lambda b_j,\, b_j^*)}{(b_j^*, b_j^*)} = \mu_{ij} - \lambda, \qquad |\mu_{ij} - \lambda| \le \frac12 .$$

By the observations above, this inequality remains valid during the execution of the procedure.

First we define, in an arbitrary dimension, a property of the bases that usually turns out to be useful. The definition will be of a technical nature. Later we will see that these bases are interesting, in the sense that they consist of short vectors. This property will make them widely applicable.

**Definition 5.63 **
*A basis $b_1,\dots,b_n$ of a lattice $L$ is said to be (Lovász-)reduced if *

*
it is weakly reduced, *

*and, using the notation introduced for the Gram-Schmidt orthogonalisation, *

$$\frac34\,|b_i^*|^2 \le |\mu_{i+1,i}\, b_i^* + b_{i+1}^*|^2$$

*
for all $1 \le i < n$.*

Let us observe the analogy of the conditions above to the inequalities that we have seen when investigating Gauss' algorithm. For $n = 2$, $a = b_1$, and $b = b_2$, being weakly reduced ensures that $b$ is a shortest vector on the line $b + \lambda a$. The second condition is equivalent to the inequality $|b|^2 \ge \frac34|a|^2$, that is, to (5.12), but here it is expressed in terms of the Gram-Schmidt basis. For a general index $i$, the same is true, if $b_i^*$ plays the role of the vector $a$, and $\mu_{i+1,i}\,b_i^* + b_{i+1}^*$ plays the role of the component of the vector $b_{i+1}$ that is orthogonal to the subspace spanned by $b_1,\dots,b_{i-1}$.

*
Lovász-Reduction($b_1,\dots,b_n$)
*

1 `FOREVER`

2    `DO` $(b_1,\dots,b_n) \leftarrow$ `Weak-Reduction`$(b_1,\dots,b_n)$

3    find an index $i$ for which the second condition of being reduced is violated

4    `IF` there is such an $i$

5       `THEN` $b_i \leftrightarrow b_{i+1}$

6       `ELSE` `RETURN` $(b_1,\dots,b_n)$
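The whole procedure can be sketched in Python (the names `gso`, `weak_reduce` and `lovasz_reduce` are ours); it alternates weak reduction with the swap of line 5 until the second condition of Definition 5.63, with the factor $3/4$, holds for every index:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gso(basis):
    """Gram-Schmidt orthogonalisation: returns (bstar, mu)."""
    bstar, mu = [], [[0.0] * len(basis) for _ in basis]
    for i, b in enumerate(basis):
        v = list(b)
        for j in range(i):
            mu[i][j] = dot(b, bstar[j]) / dot(bstar[j], bstar[j])
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def weak_reduce(basis):
    """Weak-Reduction: same lattice, all |mu_ij| <= 1/2 afterwards."""
    basis = [list(b) for b in basis]
    n = len(basis)
    for j in range(n - 2, -1, -1):
        for i in range(j + 1, n):
            _, mu = gso(basis)
            lam = round(mu[i][j])
            basis[i] = [x - lam * y for x, y in zip(basis[i], basis[j])]
    return basis

def lovasz_reduce(basis):
    """Lovász-Reduction: returns a (Lovász-)reduced basis of the same lattice."""
    basis = [list(b) for b in basis]
    n = len(basis)
    while True:
        basis = weak_reduce(basis)
        bstar, mu = gso(basis)
        # Find an index i violating (3/4)|b_i*|^2 <= |mu_{i+1,i} b_i* + b_{i+1}*|^2.
        for i in range(n - 1):
            v = [mu[i + 1][i] * x + y for x, y in zip(bstar[i], bstar[i + 1])]
            if dot(v, v) < 0.75 * dot(bstar[i], bstar[i]):
                basis[i], basis[i + 1] = basis[i + 1], basis[i]   # line 5: swap
                break
        else:
            return basis                                          # line 6

reduced = lovasz_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

The implementation recomputes the orthogonalisation from scratch in every step; production implementations update the $\mu_{ij}$ incrementally and work with exact rational arithmetic.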

**Theorem 5.64 **
*Suppose that in the lattice $L$ each of the pairs of the lattice vectors $b_1,\dots,b_n$ has an integer scalar product. Then the swap in the 5th line of the `Lovász-Reduction` occurs at most $\log_{4/3} D$ times where $D = \prod_{i=1}^{n-1} D_i$ and $D_i$ is the upper left $(i \times i)$-subdeterminant of the Gram matrix of the initial basis $b_1,\dots,b_n$.*
**Proof. **The determinant $D_i$ is the determinant of the Gram matrix of $b_1,\dots,b_i$, and, by the observations we made at the discussion of the Gram-Schmidt orthogonalisation, $D_i = \prod_{j=1}^i |b_j^*|^2$. This, of course, implies that $D_i > 0$ for $1 \le i \le n-1$. By the above, the procedure `Weak-Reduction` cannot change the vectors $b_j^*$, and so it does not change the product $D$ either. Assume, in line 5 of the procedure, that a swap $b_i \leftrightarrow b_{i+1}$ takes place. Observe that, unless $j = i$, the sets $\{b_1,\dots,b_j\}$ do not change, and neither do the determinants $D_j$. The role of the vector $b_i^*$ is taken over by the vector $\mu_{i+1,i}\,b_i^* + b_{i+1}^*$, whose length, because of the conditions of the swap, is less than $\sqrt{3}/2$ times the length of $b_i^*$. That is, the new $D_i$ is less than $3/4$ times the old. By the observation above, the new value of $D$ will also be less than $3/4$ times the old one. Then the assertion follows from the fact that the quantity $D$ remains a positive integer.

**Corollary 5.65 **
*Under the conditions of the previous theorem, the cost of the procedure `Lovász-Reduction` is at most $O(n^5 \lg(nC))$ arithmetic operations with rational numbers where $C$ is the maximum of $2$ and the quantities $|(b_i, b_j)|$ with $1 \le i, j \le n$.*
**Proof. **It follows from the Hadamard inequality that

$$D_i \le \prod_{j=1}^i |b_j|^2 \le C^i \le (nC)^n .$$

Hence $D \le (nC)^{n^2}$ and $\log_{4/3} D = O(n^2 \lg(nC))$. By the previous theorem, this bounds the number of iterations in the algorithm. The cost of the Gram-Schmidt orthogonalisation is $O(n^3)$ operations, and the cost of weak reduction is $O(n^2)$ scalar product computations, each of which can be performed using $O(n)$ operations (provided the vectors are represented by their coordinates in an orthonormal basis).

One can show that the length of the integers that occur during the run of the algorithm (including the numerators and the denominators of the fractions in the Gram-Schmidt orthogonalisation) will be below a polynomial bound.

Theorem 5.67 of this section gives a summary of the properties of reduced bases that turn out to be useful in their applications. We will find that a reduced basis consists of relatively short vectors. More precisely, $|b_1|$ will approximate, within a constant factor depending only on the dimension, the length of a shortest non-zero lattice vector.

**Lemma 5.66 **
*Let us assume that the vectors $b_1,\dots,b_n$ form a reduced basis of a lattice $L$. Then, for $1 \le j \le i \le n$,*

$$|b_j^*|^2 \le 2^{\,i-j}\,|b_i^*|^2 . \tag{5.16}$$

*In particular,*

$$|b_1|^2 \le 2^{\,i-1}\,|b_i^*|^2 \qquad (1 \le i \le n). \tag{5.17}$$

**Proof. **Substituting $a = b_i^*$ and $b = \mu_{i+1,i}\,b_i^* + b_{i+1}^*$, Lemma 5.56 gives, for all $i$, that

$$|b_{i+1}^*|^2 \ge \frac12\,|b_i^*|^2 .$$

Thus, inequality (5.16) follows by induction, and (5.17) is the special case $j = 1$, as $b_1^* = b_1$.

Now we can formulate the fundamental theorem of reduced bases.

**Theorem 5.67 **
*Assume that the vectors $b_1,\dots,b_n$ form a reduced basis of a lattice $L$. Then *

*(i) $|b_1| \le 2^{(n-1)/4}\,\sqrt[n]{\det L}$.*

*(ii) $|b_1| \le 2^{(n-1)/2}\,|v|$ for all lattice vectors $0 \neq v \in L$. In particular, the length of $b_1$ is not greater than $2^{(n-1)/2}$ times the length of a shortest non-zero lattice vector.*

*(iii) $|b_1| \cdot |b_2| \cdots |b_n| \le 2^{n(n-1)/4}\,\det L$.*

**Proof. **(i) Using inequality (5.17),

$$|b_1|^n \le \prod_{i=1}^n 2^{(i-1)/2}\,|b_i^*| = 2^{n(n-1)/4}\,\prod_{i=1}^n |b_i^*| = 2^{n(n-1)/4}\,\det L ,$$

and so assertion (i) holds.

(ii) Let $v = z_1 b_1 + \cdots + z_n b_n$ with $z_i \in \mathbb{Z}$ be a non-zero lattice vector. Assume that $z_j$ is the last non-zero coefficient and write $v = z_j b_j + w$ where $w$ is a linear combination of the vectors $b_1,\dots,b_{j-1}$. Hence $v = z_j b_j^* + w'$ where $w'$ lies in the subspace spanned by $b_1,\dots,b_{j-1}$. As $b_j^*$ is orthogonal to this subspace,

$$|v|^2 \ge z_j^2\,|b_j^*|^2 \ge |b_j^*|^2 \ge 2^{-(j-1)}\,|b_1|^2 \ge 2^{-(n-1)}\,|b_1|^2 ,$$

and so assertion (ii) is valid.

(iii) First we show that $|b_i|^2 \le 2^{\,i-1}|b_i^*|^2$. This inequality is obvious if $i = 1$, and so we assume that $i > 1$. Using the decomposition (5.14) of the vector $b_i$, the fact that the basis is weakly reduced, and inequality (5.16), we obtain that

$$|b_i|^2 = |b_i^*|^2 + \sum_{j=1}^{i-1}\mu_{ij}^2\,|b_j^*|^2 \le \Big(1 + \frac14\sum_{j=1}^{i-1} 2^{\,i-j}\Big)|b_i^*|^2 = \frac{2^i + 2}{4}\,|b_i^*|^2 \le 2^{\,i-1}\,|b_i^*|^2 .$$

Multiplying these inequalities for $1 \le i \le n$,

$$\prod_{i=1}^n |b_i| \le 2^{n(n-1)/4}\,\prod_{i=1}^n |b_i^*| = 2^{n(n-1)/4}\,\det L ,$$

which is precisely the inequality in (iii).

It is interesting to compare assertion (i) in the previous theorem and Corollary 5.54 after Minkowski's theorem. Here we obtain a weaker bound for the length of $b_1$, but this vector can be obtained by an efficient algorithm. Essentially, the existence of a basis that satisfies assertion (iii) was first shown by Hermite using the tools in the proofs of Theorems 5.48 and 5.67. Using a Lovász-reduced basis, the cost of finding a shortest vector in a lattice with dimension $n$ is, for fixed $n$, polynomial in the input size; see Exercise 5.4-4.

**Exercises**

5.4-1 *The triangular lattice is optimal.* Show that the bound in Corollary 5.58 is sharp. More precisely, let $L$ be a full lattice in $\mathbb{R}^2$ and let $v$ be a shortest vector in $L$. Verify that the inequality $|v|^2 = (2/\sqrt{3})\,\det L$ holds if and only if $L$ is similar to the triangular lattice.

5.4-2 *The denominators of the Gram-Schmidt numbers.* Let us assume that the Gram matrix of a basis $b_1,\dots,b_n$ has only integer entries. Show that the numbers $\mu_{ij}$ in (5.15) can be written in the form $\mu_{ij} = \zeta_{ij}/D_j$ where the $\zeta_{ij}$ are integers and $D_j$ is the determinant of the Gram matrix of the vectors $b_1,\dots,b_j$.

5.4-3 *The length of the vectors in a reduced basis.* Let $b_1,\dots,b_n$ be a reduced basis of a lattice $L$ and let us assume that the numbers $(b_i, b_j)$ are integers. Give an upper bound depending only on $n$ and $\det L$ for the length of the vectors $b_i$. More precisely, prove that

$$|b_i| \le 2^{n(n-1)/4}\,\det L .$$
5.4-4 *The coordinates of a shortest lattice vector.* Let $b_1,\dots,b_n$ be a reduced basis of a lattice $L$. Show that each of the shortest vectors in $L$ is of the form $\sum_{i=1}^n z_i b_i$ where the $z_i$ are integers with $|z_i| \le 3^n$. Consequently, for a bounded $n$, one can find a shortest non-zero lattice vector in polynomial time.

*Hint.* Assume, for some lattice vector $v = \sum_{i=1}^n z_i b_i \in L$, that $|v| \le |b_1|$. Let us write $v$ in the basis $b_1^*,\dots,b_n^*$:

$$v = \sum_{i=1}^n \lambda_i\, b_i^* .$$

It follows from the assumption that each of the components of $v$ (in the orthogonal basis) is at most as long as $v$, and hence as $b_1$:

$$|\lambda_i|\,|b_i^*| \le |v| \le |b_1| .$$

Use then the inequalities $|\mu_{ij}| \le 1/2$ and (5.17).