First, in this chapter, we will discuss some of the basic concepts of algebra, such as fields, vector spaces and polynomials (Section 5.1). Our main focus will be the study of polynomial rings in one variable. These polynomial rings play a very important role in *constructive applications*. After this, we will outline the theory of finite fields, putting a strong emphasis on the problem of constructing them (Section 5.2) and on the problem of factoring polynomials over such fields (Section 5.3). Then we will study lattices and discuss the Lenstra-Lenstra-Lovász algorithm which can be used to find short lattice vectors (Section 5.4). We will present a polynomial time algorithm for the factorisation of polynomials with rational coefficients; this was the first notable application of the Lenstra-Lenstra-Lovász algorithm (Section 5.5).

In this section we will overview some important concepts related to rings and polynomials.

We recall some definitions introduced in Chapters 31–33 of the textbook *Introduction to Algorithms*. In the sequel all cross references to Chapters 31–33 refer to results in that book.

A set $R$ with at least two elements is called a **ring**, if it has two binary operations, the addition, denoted by the $+$ sign, and the multiplication, denoted by the $\cdot$ sign. The elements of $R$ form an Abelian group with respect to the addition, and they form a monoid (that is, a semigroup with an identity), whose identity element is denoted by $1$, with respect to the multiplication. We assume that $1 \neq 0$. Further, the distributive properties also hold: for arbitrary elements $a, b, c \in R$ we have

$$a(b + c) = ab + ac \quad \text{and} \quad (b + c)a = ba + ca.$$

Being an Abelian group with respect to the addition means that the operation is associative, commutative, it has an identity element (denoted by 0), and every element has an inverse with respect to this identity. More precisely, these requirements are the following:

**associative property:** for all triples $a, b, c \in R$ we have $(a + b) + c = a + (b + c)$;

**commutative property:** for all pairs $a, b \in R$ we have $a + b = b + a$;

**existence of the identity element:** for the zero element $0$ of $R$ and for all elements $a$ of $R$, we have $a + 0 = 0 + a = a$;

**existence of the additive inverse:** for all $a \in R$ there exists $b \in R$, such that $a + b = 0$.

It is easy to show that each of the elements $a$ in $R$ has a unique additive inverse. We usually denote the inverse of an element $a$ by $-a$.

Concerning the multiplication, we require that it must be associative and that the multiplicative identity should exist. The **identity** of a ring $R$ is the multiplicative identity of $R$. The usual name of the additive identity is **zero**. We usually omit the $\cdot$ sign when writing the multiplication, for example we usually write $ab$ instead of $a \cdot b$.

**Example 5.1** The following sets form rings with respect to the given operations.

(i) The set $\mathbb{Z}$ of integers with the usual operations $+$ and $\cdot$.

(ii) The set $\mathbb{Z}_m$ of residue classes modulo $m$ with respect to the addition and multiplication modulo $m$.

(iii) The set of $n \times n$ matrices with real entries with respect to the addition and multiplication of matrices.

Let $R_1$ and $R_2$ be rings. A map $\phi: R_1 \to R_2$ is said to be a **homomorphism**, if $\phi$ preserves the operations, in the sense that $\phi(a + b) = \phi(a) + \phi(b)$ and $\phi(ab) = \phi(a)\phi(b)$ holds for all pairs $a, b \in R_1$. A homomorphism $\phi$ is called an **isomorphism**, if $\phi$ is a one-to-one correspondence, and the inverse is also a homomorphism. We say that the rings $R_1$ and $R_2$ are **isomorphic**, if there is an isomorphism between them. If $R_1$ and $R_2$ are isomorphic rings, then we write $R_1 \cong R_2$. From an algebraic point of view, isomorphic rings can be viewed as identical.

For example the map $\phi: \mathbb{Z} \to \mathbb{Z}_6$ which maps an integer to its residue modulo 6 is a homomorphism: $\phi(13) = 1$, $\phi(5) = 5$, $\phi(22) = 4$, etc.
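As a quick sanity check, the homomorphism property of reduction modulo 6 can be verified mechanically (a minimal Python sketch; the function name `phi` is ours):

```python
def phi(a: int) -> int:
    """Reduction modulo 6: the ring homomorphism from Z to Z_6."""
    return a % 6

# phi preserves both ring operations:
for a in range(-30, 31):
    for b in range(-30, 31):
        assert phi(a + b) == phi(phi(a) + phi(b))
        assert phi(a * b) == phi(phi(a) * phi(b))
```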

A useful and important ring theoretic construction is the **direct sum**. The direct sum of the rings $R_1$ and $R_2$ is denoted by $R_1 \oplus R_2$. The underlying set of the direct sum is $R_1 \times R_2$, that is, the set of ordered pairs $(r_1, r_2)$ where $r_i \in R_i$. The operations are defined componentwise: for $r_i, s_i \in R_i$ we let

$$(r_1, r_2) + (s_1, s_2) = (r_1 + s_1, r_2 + s_2),$$

$$(r_1, r_2) \cdot (s_1, s_2) = (r_1 s_1, r_2 s_2).$$

Easy calculation shows that $R_1 \oplus R_2$ is a ring with respect to the operations above. This construction can easily be generalised to more than two rings. In this case, the elements of the direct sum are the $k$-tuples, where $k$ is the number of rings in the direct sum, and the operations are defined componentwise.

A ring $F$ is said to be a **field**, if its non-zero elements form an Abelian group with respect to the multiplication. The multiplicative inverse of a non-zero element $a$ is usually denoted $a^{-1}$.

The best-known examples of fields are the sets of rational numbers, real numbers, and complex numbers with respect to the usual operations. We usually denote these fields by $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$, respectively.

Another important class of fields consists of the fields $\mathbb{F}_p$ of $p$ elements where $p$ is a prime number. The elements of $\mathbb{F}_p$ are the residue classes modulo $p$, and the operations are the addition and the multiplication defined on the residue classes. The distributive property can easily be derived from the distributivity of the integer operations. By Theorem 33.12, $\mathbb{F}_p$ is a group with respect to the addition, and, by Theorem 33.13, the set of non-zero elements of $\mathbb{F}_p$ is a group with respect to the multiplication. In order to prove this latter claim, we need to use that $p$ is a prime number.

In an arbitrary field, we may consider the set of elements of the form $m \cdot 1$, that is, the set of elements that can be written as the sum $1 + 1 + \cdots + 1$ of $m$ copies of the multiplicative identity $1$ where $m$ is a positive integer. Clearly, one of the two possibilities must hold:

(a) none of the elements $m \cdot 1$ is zero;

(b) $m \cdot 1$ is zero for some $m \ge 1$.

In case (a) we say that $F$ is a field with **characteristic zero**. In case (b) the **characteristic** of $F$ is the smallest $m \ge 1$ such that $m \cdot 1 = 0$. In this case, the number $m$ must be a prime, for, if $m = rs$ with $1 < r, s < m$, then $0 = m \cdot 1 = (r \cdot 1)(s \cdot 1)$, and so either $r \cdot 1 = 0$ or $s \cdot 1 = 0$, contradicting the minimality of $m$.

Suppose that $P$ denotes the smallest subfield of $F$ that contains $1$. Then $P$ is said to be the **prime field** of $F$. In case (a) the subfield $P$ consists of the elements $(m \cdot 1)(s \cdot 1)^{-1}$ where $m$ is an integer and $s$ is a positive integer. In this case, $P$ is isomorphic to the field $\mathbb{Q}$ of rational numbers. The identification is obvious: $(m \cdot 1)(s \cdot 1)^{-1} \leftrightarrow m/s$.

In case (b) the characteristic is a prime number $p$, and $P$ is the set of elements $m \cdot 1$ where $0 \le m < p$. In this case, $P$ is isomorphic to the field $\mathbb{F}_p$ of residue classes modulo $p$.

Let $F$ be a field. An additively written Abelian group $V$ is said to be a **vector space** over $F$, or simply an $F$-vector space, if for all elements $\lambda \in F$ and $v \in V$, an element $\lambda v \in V$ is defined (in other words, $F$ acts on $V$) and the following hold:

$$\lambda(u + v) = \lambda u + \lambda v, \quad (\lambda + \mu)v = \lambda v + \mu v, \quad \lambda(\mu v) = (\lambda\mu)v, \quad 1 \cdot v = v.$$

Here $\lambda, \mu$ are arbitrary elements of $F$, the elements $u, v$ are arbitrary in $V$, and the element $1$ is the multiplicative identity of $F$.

The space of $m \times n$ matrices over $F$ is an important example of vector spaces. Their properties are studied in Chapter 31.

A vector space $V$ over a field $F$ is said to be **finite-dimensional** if there is a collection $\{v_1, \dots, v_n\}$ of finitely many elements in $V$ such that each of the elements $v \in V$ can be written as a **linear combination**

$$v = \lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n$$

for some $\lambda_1, \dots, \lambda_n \in F$. Such a set $\{v_1, \dots, v_n\}$ is called a **generating set** of $V$. The cardinality of the smallest generating set of $V$ is referred to as the **dimension** of $V$ over $F$, denoted $\dim V$. In a finite-dimensional vector space, a generating set containing $\dim V$ elements is said to be a **basis**.

A set $\{v_1, \dots, v_k\}$ of elements of a vector space $V$ is said to be **linearly independent**, if, for $\lambda_1, \dots, \lambda_k \in F$, the equation $0 = \lambda_1 v_1 + \cdots + \lambda_k v_k$ implies $\lambda_1 = \cdots = \lambda_k = 0$. It is easy to show that a basis in $V$ is a linearly independent set. An important property of linearly independent sets is that such a set can be extended to a basis of the vector space. The dimension of a vector space coincides with the cardinality of its largest linearly independent set.

A non-empty subset $V_1$ of a vector space $V$ is said to be a **subspace** of $V$, if it is an (additive) subgroup of $V$, and $\lambda v \in V_1$ holds for all $v \in V_1$ and $\lambda \in F$. It is obvious that a subspace can be viewed as a vector space over $F$.

The concept of homomorphisms can be defined for vector spaces, but in this context we usually refer to them as **linear maps**. Let $V$ and $W$ be vector spaces over a common field $F$. A map $\phi: V \to W$ is said to be linear, if, for all $\lambda, \mu \in F$ and $u, v \in V$, we have

$$\phi(\lambda u + \mu v) = \lambda\phi(u) + \mu\phi(v).$$

The linear mapping $\phi$ is an **isomorphism** if $\phi$ is a one-to-one correspondence and its inverse is also linear. Two vector spaces are said to be isomorphic if there is an isomorphism between them.

**Lemma 5.1 **
*Suppose that $\phi: V \to W$ is a linear mapping. Then $\phi(V)$ is a subspace in $W$. If $\phi$ is one-to-one, then $\dim \phi(V) = \dim V$. If, in this case, $\dim V = \dim W < \infty$, then $\phi(V) = W$ and the mapping $\phi$ is an isomorphism.*

**Proof. **As $\phi(\lambda u + \mu v) = \lambda\phi(u) + \mu\phi(v)$, we obtain that $\phi(V)$ is a subspace. Further, it is clear that the images of the elements of a generating set of $V$ form a generating set for $\phi(V)$. Let us now suppose that $\phi$ is one-to-one. In this case, the image of a linearly independent subset of $V$ is linearly independent in $W$. It easily follows from these observations that the image of a basis of $V$ is a basis of $\phi(V)$, and so $\dim \phi(V) = \dim V$. If we assume, in addition, that $\dim V = \dim W$, then a basis of $\phi(V)$ is also a basis of $W$, as it is a linearly independent set, and so it can be extended to a basis of $W$. Thus $\phi(V) = W$ and the mapping $\phi$ must be a one-to-one correspondence. It is easy to see, and is left to the reader, that $\phi^{-1}$ is a linear mapping.

The **direct sum** of vector spaces can be defined similarly to the direct sum of rings. The direct sum of the vector spaces $V_1$ and $V_2$ is denoted by $V_1 \oplus V_2$. The underlying set of the direct sum is $V_1 \times V_2$, and the addition and the action of the field are defined componentwise. It is easy to see that

$$\dim(V_1 \oplus V_2) = \dim V_1 + \dim V_2.$$

Let $F$ be a field and let $G$ be a finite multiplicative subgroup of $F$. That is, the set $G$ contains finitely many elements of $F$, each of which is non-zero, $G$ is closed under multiplication, and the multiplicative inverse of an element of $G$ also lies in $G$. We aim to show that the group $G$ is cyclic, that is, $G$ can be generated by a single element. The main concepts related to cyclic groups can be found in Section 33.3.4. The **order** $o(a)$ of an element $a \in G$ is the smallest positive integer $k$ such that $a^k = 1$.

The cyclic group generated by an element $a$ is denoted by $\langle a \rangle$. Clearly, $|\langle a \rangle| = o(a)$, and an element $a^i$ generates the group $\langle a \rangle$ if and only if $i$ and $o(a)$ are relatively prime. Hence the group $\langle a \rangle$ has exactly $\phi(o(a))$ generators where $\phi$ is Euler's totient function (see Subsection 33.3.2).

The following identity is valid for an arbitrary positive integer $n$:

$$\sum_{d \mid n} \phi(d) = n.$$

Here the summation index $d$ runs through all positive divisors of $n$. In order to verify this identity, consider all the rational numbers $i/n$ with $1 \le i \le n$. The number of these is exactly $n$. After simplifying these fractions, they will be of the form $j/d$ where $d$ is a positive divisor of $n$ and $j$ is relatively prime to $d$. A fixed denominator $d$ will occur exactly $\phi(d)$ times.
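The counting argument above is easy to check numerically; the sketch below brute-forces the totient and the divisor sum (the function names are ours):

```python
from math import gcd

def totient(n: int) -> int:
    """Euler's totient: the number of 1 <= i <= n with gcd(i, n) == 1."""
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

def divisor_totient_sum(n: int) -> int:
    """Sum of totient(d) over all positive divisors d of n."""
    return sum(totient(d) for d in range(1, n + 1) if n % d == 0)

# the identity: the totients of the divisors of n sum to n
assert all(divisor_totient_sum(n) == n for n in range(1, 200))
```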

**Theorem 5.2 **
*Suppose that $F$ is a field and let $G$ be a finite multiplicative subgroup of $F$. Then there exists an element $a \in G$ such that $G = \langle a \rangle$.*

**Proof. **Suppose that $|G| = n$. Lagrange's theorem (Theorem 33.15) implies that the order of an element $b \in G$ is a divisor of $n$. We claim, for an arbitrary $d \mid n$, that there are at most $\phi(d)$ elements in $G$ with order $d$. The elements with order $d$ are roots of the polynomial $x^d - 1$. If $G$ has an element $b$ with order $d$, then, by Lemma 5.5, the $d$ elements of $\langle b \rangle$ are precisely the roots of $x^d - 1$ (the lemma will be verified later). Therefore all the elements of $G$ with order $d$ are contained in the group $\langle b \rangle$, which, in turn, contains exactly $\phi(d)$ elements of order $d$.

If $G$ had no element of order $n$, then the order of each of the elements of $G$ would be a proper divisor of $n$. In this case, however, using the identity above and the fact that $\phi(n) > 0$, we obtain

$$n = |G| \le \sum_{d \mid n,\ d < n} \phi(d) = n - \phi(n) < n,$$

which is a contradiction.
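For the prime fields $\mathbb{F}_p$ the theorem can be observed directly: a brute-force search (a sketch, with helper names of our own choosing) finds an element whose powers exhaust all non-zero residues.

```python
def order(a: int, p: int) -> int:
    """Multiplicative order of a modulo the prime p (a % p != 0)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

def generator(p: int) -> int:
    """Smallest generator of the cyclic group of non-zero residues mod p."""
    return next(g for g in range(1, p) if order(g, p) == p - 1)

# the multiplicative group of F_13 is cyclic of order 12:
g = generator(13)
assert {pow(g, k, 13) for k in range(12)} == set(range(1, 13))
```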

Suppose that $F$ is a field and that $a_0, a_1, \dots, a_n$ are elements of $F$. Recall that an expression of the form

$$f = f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n,$$

where $x$ is an indeterminate, is said to be a **polynomial** over $F$ (see Chapter 32). The scalars $a_i$ are the **coefficients** of the polynomial $f$. The degree of the zero polynomial is zero, while the **degree** of a non-zero polynomial $f$ is the largest index $j$ such that $a_j \neq 0$. The degree of $f$ is denoted by $\deg f$.

The set of all polynomials over $F$ in the indeterminate $x$ is denoted by $F[x]$. If

$$f(x) = a_0 + a_1 x + \cdots + a_n x^n$$

and

$$g(x) = b_0 + b_1 x + \cdots + b_n x^n$$

are polynomials with degree not larger than $n$, then their sum is defined as the polynomial

$$h(x) = f(x) + g(x) = c_0 + c_1 x + \cdots + c_n x^n$$

whose coefficients are $c_i = a_i + b_i$.

The product $fg$ of the polynomials $f$ and $g$ is defined as the polynomial

$$f(x)g(x) = d_0 + d_1 x + \cdots + d_{2n} x^{2n}$$

with degree at most $2n$ whose coefficients are given by the equations $d_j = \sum_{k=0}^{j} a_k b_{j-k}$. On the right-hand side of these equations, the coefficients with index greater than $n$ are considered zero. Easy computation shows that $F[x]$ is a commutative ring with respect to these operations. It is also straightforward to show that $F[x]$ has no **zero divisors**, that is, whenever $fg = 0$, then either $f = 0$ or $g = 0$.
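The definition of the product translates directly into code. The sketch below multiplies coefficient lists (index $=$ degree) by the convolution formula $d_j = \sum_k a_k b_{j-k}$, using $O(n^2)$ operations; the function name is ours.

```python
def poly_mul(f, g):
    """Product of polynomials given as coefficient lists (index = degree):
    d[j] = sum over k of f[k] * g[j - k], the convolution of f and g."""
    d = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            d[i + j] += a * b
    return d

# (1 + x)(1 - x) = 1 - x^2:
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
```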

The ring of polynomials over is quite similar, in many ways, to the ring of integers. One of their similar features is that the procedure of division with remainder can be performed in both rings.

**Lemma 5.3 **
*Let $f, g \in F[x]$ be polynomials such that $g \neq 0$. Then there exist polynomials $q$ and $r$ such that*

$$f = qg + r$$

*and either $r = 0$ or $\deg r < \deg g$. Moreover, the polynomials $q$ and $r$ are uniquely determined by these conditions.*

**Proof. **We verify the claim about the existence of the polynomials $q$ and $r$ by induction on the degree of $f$. If $f = 0$ or $\deg f < \deg g$, then the assertion clearly holds with $q = 0$ and $r = f$. Let us suppose, therefore, that $\deg f \ge \deg g$, let $a$ denote the leading coefficient of $f$, and let $b$ denote that of $g$. Then subtracting the multiple $(a/b)x^{\deg f - \deg g} g$ of $g$ from $f$, we obtain a polynomial $f_1$ whose degree is smaller than $\deg f$. By the induction hypothesis, there exist polynomials $q_1$ and $r_1$ such that

$$f_1 = q_1 g + r_1$$

and either $r_1 = 0$ or $\deg r_1 < \deg g$. It is easy to see that, in this case, the polynomials $q = q_1 + (a/b)x^{\deg f - \deg g}$ and $r = r_1$ are as required.

It remains to show that the polynomials $q$ and $r$ are unique. Let $q'$ and $r'$ be polynomials, possibly different from $q$ and $r$, satisfying the assertions of the lemma. That is, $f = qg + r = q'g + r'$, and so $(q - q')g = r' - r$. If the polynomial on the left-hand side is non-zero, then its degree is at least $\deg g$, while the degree of the polynomial on the right-hand side is smaller than $\deg g$. This, however, is not possible.
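The inductive step of the existence proof, repeatedly cancelling the leading term, is exactly the schoolbook division algorithm. A minimal sketch over $\mathbb{Q}$, with coefficient lists indexed by degree (names ours):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Division with remainder in Q[x]: return (q, r) with f = q*g + r and
    either r = [0] or deg r < deg g. Coefficient lists, index = degree."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(1, len(f) - len(g) + 1)
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        c = f[-1] / g[-1]              # coefficient of the next quotient term
        q[shift] = c
        # subtract c * x^shift * g, cancelling the leading term of f:
        f = [a - c * b for a, b in zip(f, [0] * shift + g)]
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return q, f

# x^2 - 1 = (x + 1)(x - 1) + 0:
q, r = poly_divmod([-1, 0, 1], [-1, 1])
assert q == [1, 1] and r == [0]
```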

Let $R$ be a commutative ring with a multiplicative identity and without zero divisors, and set $R^* = R \setminus \{0\}$. The ring $R$ is said to be a **Euclidean ring** if there is a function $\phi: R^* \to \mathbb{N}$ such that $\phi(ab) \ge \phi(a)$ for all $a, b \in R^*$; and, further, if $a \in R$, $b \in R^*$, then there are elements $q, r \in R$ such that $a = qb + r$, and if $r \neq 0$, then $\phi(r) < \phi(b)$. The previous lemma shows that $F[x]$ is a Euclidean ring where the role of the function $\phi$ is played by the degree function.

The concept of **divisibility** in $F[x]$ can be defined similarly to the definition of the corresponding concept in the ring of integers. A polynomial $g \in F[x]$ is said to be a **divisor** of a polynomial $f$ (the notation is $g \mid f$), if there is a polynomial $q \in F[x]$ such that $f = qg$. The non-zero elements of $F$, which are clearly divisors of each of the polynomials, are called the **trivial divisors** or **units**. A non-zero polynomial $f \in F[x]$ is said to be **irreducible**, if whenever $f = qg$ with $q, g \in F[x]$, then either $q$ or $g$ is a unit.

Two polynomials $f, g \in F[x]$ are called **associates**, if there is some $u \in F$, $u \neq 0$, such that $f = ug$.

Using Lemma 5.3, one can easily prove the unique factorisation theorem in the ring of polynomials following the argument of the proof of the corresponding theorem in the ring of integers (see Section 33.1). The role of the absolute value of integers is played by the degree of polynomials.

**Theorem 5.4 **
*An arbitrary non-zero polynomial $f \in F[x]$ can be written in the form*

$$f = u f_1^{e_1} \cdots f_r^{e_r},$$

*where $u \in F$ is a unit, the polynomials $f_i \in F[x]$ are pairwise non-associate and irreducible, and, further, the numbers $e_i$ are positive integers. Furthermore, this decomposition is essentially unique in the sense that whenever*

$$f = U F_1^{E_1} \cdots F_s^{E_s}$$

*is another such decomposition, then $r = s$, and, after possibly reordering the factors $F_i$, the polynomials $f_i$ and $F_i$ are associates, and moreover $e_i = E_i$ for all $1 \le i \le r$.*

Two polynomials $f, g \in F[x]$ are said to be **relatively prime**, if they have no common irreducible divisors.

A scalar $a \in F$ is a **root** of a polynomial $f \in F[x]$, if $f(a) = 0$. Here the value $f(a)$ is obtained by substituting $a$ into the place of $x$ in $f$.

**Lemma 5.5 **
*Suppose that $a \in F$ is a root of a polynomial $f \in F[x]$. Then there exists a polynomial $g \in F[x]$ such that $f = (x - a)g$. Hence the polynomial $f$ may have at most $\deg f$ roots.*

**Proof. **By Lemma 5.3, there exist $g \in F[x]$ and $r \in F$ such that $f = (x - a)g + r$. Substituting $a$ for $x$, we find that $r = f(a) = 0$. The second assertion now follows by induction on $\deg f$ from the fact that the roots of $g$ are also roots of $f$.

Suppose that $f, g \in F[x]$ are polynomials of degree at most $n$. Then the sum $f + g$ can obviously be computed using $O(n)$ field operations. The product $fg$ can be obtained, using its definition, by $O(n^2)$ field operations. If the Fast Fourier Transform can be performed over $F$, then the multiplication can be computed using only $O(n \lg n)$ field operations (see Theorem 32.2). For general fields, the cost of the fastest known multiplication algorithms for polynomials (for instance the Schönhage–Strassen method) is $O(n \lg n \lg \lg n)$, that is, $\widetilde{O}(n)$ field operations.

The division with remainder, that is, determining the polynomials $q$ and $r$ for which $f = qg + r$ and either $r = 0$ or $\deg r < \deg g$, can be performed using $O(n^2)$ field operations following the straightforward method outlined in the proof of Lemma 5.3. There is, however, an algorithm (the Sieveking–Kung algorithm) for the same problem using only $\widetilde{O}(n)$ steps. The details of this algorithm are, however, not discussed here.

Let $f \in F[x]$ with $\deg f = n > 0$, and let $g, h \in F[x]$. We say that $g$ is **congruent** to $h$ modulo $f$, or simply $g \equiv h \pmod{f}$, if $f$ divides the polynomial $g - h$. This concept of congruence is similar to the corresponding concept introduced in the ring of integers (see 33.3.2). It is easy to see from the definition that the relation $\equiv$ is an equivalence relation on the set $F[x]$. Let $[g]_f$ (or simply $[g]$ if $f$ is clear from the context) denote the equivalence class containing $g$. From Lemma 5.3 we obtain immediately, for each $g$, that there is a unique $r \in F[x]$ such that $[g] = [r]$, and either $r = 0$ (if $f$ divides $g$) or $\deg r < \deg f$. This polynomial $r$ is called the **representative** of the class $[g]$. The set of equivalence classes is traditionally denoted by $F[x]/(f)$.

**Lemma 5.6 **
*Let $f, f_1, f_2, g_1, g_2 \in F[x]$ and let $a \in F$. Suppose that $f_1 \equiv f_2 \pmod{f}$ and $g_1 \equiv g_2 \pmod{f}$. Then*

$$f_1 + g_1 \equiv f_2 + g_2 \pmod{f}, \qquad f_1 g_1 \equiv f_2 g_2 \pmod{f},$$

*and*

$$a f_1 \equiv a f_2 \pmod{f}.$$

**Proof. **The first congruence is valid, as

$$(f_1 + g_1) - (f_2 + g_2) = (f_1 - f_2) + (g_1 - g_2),$$

and the right-hand side of this is clearly divisible by $f$. The second and the third congruences follow similarly from the identities

$$f_1 g_1 - f_2 g_2 = (f_1 - f_2)g_1 + (g_1 - g_2)f_2$$

and

$$a f_1 - a f_2 = a(f_1 - f_2),$$

respectively.

The previous lemma makes it possible to define the sum and the product of two congruence classes $[g]$ and $[h]$ as $[g] + [h] = [g + h]$ and $[g][h] = [gh]$, respectively. The lemma claims that the sum and the product are independent of the choice of the congruence class representatives. The same way, we may define the action of $F$ on the set of congruence classes: we set $a[g] = [ag]$.

**Theorem 5.7 **
*Suppose that $f \in F[x]$ and that $\deg f = n > 0$.*

*(i) The set of residue classes $F[x]/(f)$ is a commutative ring with an identity under the operations $+$ and $\cdot$ defined above.*

*(ii) The ring $F[x]/(f)$ contains the field $F$ as a subring, and it is an $n$-dimensional vector space over $F$. Further, the residue classes $[1], [x], \dots, [x^{n-1}]$ form a basis of $F[x]/(f)$.*

*(iii) If $f$ is an irreducible polynomial in $F[x]$, then $F[x]/(f)$ is a field.*

**Proof. **(i) The fact that $F[x]/(f)$ is a ring follows easily from the fact that $F[x]$ is a ring. Let us, for instance, verify the distributive property:

$$[g]([h_1] + [h_2]) = [g][h_1 + h_2] = [g(h_1 + h_2)] = [gh_1 + gh_2] = [gh_1] + [gh_2] = [g][h_1] + [g][h_2].$$

The zero element of $F[x]/(f)$ is the class $[0]$, the additive inverse of the class $[g]$ is the class $[-g]$, while the multiplicative identity element is the class $[1]$. The details are left to the reader.

(ii) The set $\{[a] : a \in F\}$ is a subring of $F[x]/(f)$ isomorphic to $F$. The correspondence is obvious: $a \leftrightarrow [a]$. By part (i), $F[x]/(f)$ is an additive Abelian group, and the action of $F$ satisfies the vector space axioms. This follows from the fact that the polynomial ring $F[x]$ is itself a vector space over $F$. Let us, for example, verify the distributive property:

$$a([g] + [h]) = a[g + h] = [a(g + h)] = [ag + ah] = [ag] + [ah] = a[g] + a[h].$$

The other properties are left to the reader.

We claim that the classes $[1], [x], \dots, [x^{n-1}]$ are linearly independent. For, if

$$[0] = a_0[1] + a_1[x] + \cdots + a_{n-1}[x^{n-1}] = [a_0 + a_1 x + \cdots + a_{n-1}x^{n-1}],$$

then $a_0 = \cdots = a_{n-1} = 0$, as the zero polynomial is the unique polynomial with degree less than $n$ that is divisible by $f$. On the other hand, for a polynomial $g \in F[x]$, the degree of the class representative of $[g]$ is less than $n$. Thus the class $[g]$ can be expressed as a linear combination of the classes $[1], [x], \dots, [x^{n-1}]$. Hence these classes form a basis of $F[x]/(f)$, and so $\dim_F F[x]/(f) = n$.

(iii) Suppose that $f$ is irreducible. First we show that $F[x]/(f)$ has no zero divisors. If $[0] = [g][h] = [gh]$, then $f$ divides the product $gh$, and so, as $f$ is irreducible, $f$ divides either $g$ or $h$. That is, either $[g] = [0]$ or $[h] = [0]$. Suppose now that $[g] \in F[x]/(f)$ with $[g] \neq [0]$. We claim that the classes $[g][1], [g][x], \dots, [g][x^{n-1}]$ are linearly independent. Indeed, an equation

$$[0] = a_0[g][1] + \cdots + a_{n-1}[g][x^{n-1}] = [g][a_0 + \cdots + a_{n-1}x^{n-1}]$$

implies, as there are no zero divisors, that $[0] = [a_0 + \cdots + a_{n-1}x^{n-1}]$, and, in turn, it also yields that $a_0 = \cdots = a_{n-1} = 0$. Therefore the classes $[g][1], [g][x], \dots, [g][x^{n-1}]$ form a basis of $F[x]/(f)$. Hence there exist coefficients $b_i \in F$ for which

$$[1] = b_0[g][1] + b_1[g][x] + \cdots + b_{n-1}[g][x^{n-1}] = [g][b_0 + b_1 x + \cdots + b_{n-1}x^{n-1}].$$

Thus we find that the class $[g] \neq [0]$ has a multiplicative inverse, and so $F[x]/(f)$ is a field, as required.

We note that the converse of part (iii) of the previous theorem is also true, and its proof is left to the reader (Exercise 5.1-1).

**Example 5.2 **We usually represent the elements of the residue class ring $F[x]/(f)$ by their representatives, which are polynomials with degree less than $\deg f$.

1. Suppose that $F = \mathbb{F}_2$ is the field of two elements, and let $f(x) = x^3 + x + 1$. Then the ring $\mathbb{F}_2[x]/(f)$ has 8 elements, namely

$$[0],\ [1],\ [x],\ [x+1],\ [x^2],\ [x^2+1],\ [x^2+x],\ [x^2+x+1].$$

Practically speaking, the addition between the classes is the addition of polynomials. For instance

$$[x^2 + 1] + [x^2 + x] = [x + 1].$$

When computing the product, we compute the product of the representatives, and replace it (or reduce it) with its remainder after dividing by $f$. For instance,

$$[x^2 + 1] \cdot [x^2 + x] = [x^4 + x^3 + x^2 + x] = [x + 1].$$

The polynomial $f$ is irreducible over $\mathbb{F}_2$, since it has degree 3, and has no roots in $\mathbb{F}_2$. Hence the residue class ring $\mathbb{F}_2[x]/(f)$ is a field.

2. Let $F = \mathbb{R}$ and let $f(x) = x^2 - 1$. The elements of the residue class ring are the classes of the form $[ax + b]$ where $a, b \in \mathbb{R}$. The ring $\mathbb{R}[x]/(f)$ is not a field, since $f$ is not irreducible. For instance, $[x + 1][x - 1] = [x^2 - 1] = [0]$.
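Arithmetic in such residue class rings is easy to program. The sketch below works in $\mathbb{F}_2[x]/(x^3 + x + 1)$, packing a polynomial over $\mathbb{F}_2$ into an integer bitmask (bit $i$ is the coefficient of $x^i$); the function name is ours.

```python
def mul_mod(a: int, b: int, f: int = 0b1011) -> int:
    """Product of two classes in F_2[x]/(f), f = x^3 + x + 1 by default.
    Polynomials are bitmasks: bit i is the coefficient of x^i."""
    prod = 0
    while b:                      # carry-less multiplication: + over F_2 is XOR
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    while prod.bit_length() >= f.bit_length():   # reduce modulo f
        prod ^= f << (prod.bit_length() - f.bit_length())
    return prod

# [x^2 + 1] * [x^2 + x] = [x + 1] in this ring:
assert mul_mod(0b101, 0b110) == 0b011
# every non-zero class has a multiplicative inverse, so the ring is a field:
assert all(any(mul_mod(a, b) == 1 for b in range(1, 8)) for a in range(1, 8))
```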

**Lemma 5.8 **
*Let $L$ be a field containing a field $F$ and let $\alpha \in L$.*

*(i) If $L$ is finite-dimensional as a vector space over $F$, then there is a non-zero polynomial $f \in F[x]$ such that $\alpha$ is a root of $f$.*

*(ii) Assume that there is a polynomial $f \in F[x]$ with $f(\alpha) = 0$, and let $f$ be such a polynomial with minimal degree. Then the polynomial $f$ is irreducible in $F[x]$. Further, if $g \in F[x]$ with $g(\alpha) = 0$, then $f$ is a divisor of $g$.*

**Proof. **(i) For a sufficiently large $n$, the elements $1, \alpha, \alpha^2, \dots, \alpha^n$ are linearly dependent over $F$. A linear dependence $a_0 + a_1\alpha + \cdots + a_n\alpha^n = 0$ gives a non-zero polynomial $f = a_0 + a_1 x + \cdots + a_n x^n \in F[x]$ such that $f(\alpha) = 0$.

(ii) If $f = f_1 f_2$, then, as $0 = f(\alpha) = f_1(\alpha)f_2(\alpha)$, the element $\alpha$ is a root of either $f_1$ or $f_2$. As $f$ was chosen to have minimal degree, one of the polynomials $f_1, f_2$ is a unit, and so $f$ is irreducible. Finally, let $g \in F[x]$ such that $g(\alpha) = 0$. Let $q$ and $r$ be the polynomials as in Lemma 5.3 for which $g = qf + r$. Substituting $\alpha$ for $x$ into the last equation, we obtain $r(\alpha) = 0$, which, by the minimality of $\deg f$, is only possible if $r = 0$. Hence $f \mid g$.

**Definition 5.9 **
*The polynomial $f \in F[x]$ in the last lemma is said to be a minimal polynomial of $\alpha$.*

It follows from the previous lemma that the minimal polynomial is unique up to a scalar multiple. It will often be helpful to assume that the leading coefficient (the coefficient of the term with the highest degree) of the minimal polynomial is 1.

**Corollary 5.10 **
*Let $L$ be a field containing $F$, and let $\alpha \in L$. Suppose that $f \in F[x]$ is irreducible and that $f(\alpha) = 0$. Then $f$ is a minimal polynomial of $\alpha$.*

**Proof. **Suppose that $g$ is a minimal polynomial of $\alpha$. By the previous lemma, $g \mid f$ and $g$ is irreducible. This is only possible if the polynomials $f$ and $g$ are associates.

Let $L$ be a field containing $F$ and let $\alpha \in L$. Let $F(\alpha)$ denote the smallest subfield of $L$ that contains $F$ and $\alpha$.

**Theorem 5.11 **
*Let $L$ be a field containing $F$ and let $\alpha \in L$. Suppose that $f \in F[x]$ is a minimal polynomial of $\alpha$. Then the field $F(\alpha)$ is isomorphic to the field $F[x]/(f)$. More precisely, there exists an isomorphism $\phi: F[x]/(f) \to F(\alpha)$ such that $\phi(a) = a$, for all $a \in F$, and $\phi([x]_f) = \alpha$. The map $\phi$ is also an isomorphism of vector spaces over $F$, and so $\dim_F F(\alpha) = \deg f$.*

**Proof. **Let us consider the map $\psi: F[x] \to L$, which maps a polynomial $g \in F[x]$ into $g(\alpha)$. This is clearly a ring homomorphism, and $\psi(F[x]) \subseteq F(\alpha)$. We claim that $\psi(g) = \psi(h)$ if and only if $[g]_f = [h]_f$. Indeed, $\psi(g) = \psi(h)$ holds if and only if $g(\alpha) - h(\alpha) = 0$, that is, if $(g - h)(\alpha) = 0$, which, by Lemma 5.8, is equivalent to $f \mid g - h$, and this amounts to saying that $[g]_f = [h]_f$. Suppose that $\phi$ is the map $F[x]/(f) \to F(\alpha)$ induced by $\psi$, that is, $\phi([g]_f) = \psi(g) = g(\alpha)$. By the argument above, the map $\phi$ is one-to-one. Routine computation shows that $\phi$ is a ring, and also a vector space, homomorphism. As $F[x]/(f)$ is a field (the minimal polynomial $f$ is irreducible), its homomorphic image $\phi(F[x]/(f))$ is also a field. The field $\phi(F[x]/(f))$ contains $F$ and $\alpha$, and so necessarily $\phi(F[x]/(f)) = F(\alpha)$.

Let $f, g \in F[x]$ be polynomials such that $g \neq 0$. Set $r_0 = f$, $r_1 = g$, and define the polynomials $q_i$ and $r_i$ using division with remainder as follows:

$$r_{i+1} = r_{i-1} - q_i r_i \qquad (i = 1, 2, \dots).$$

Note that if $r_{i+1} \neq 0$ then $\deg r_{i+1}$ is smaller than $\deg r_i$. We form this sequence of polynomials until we obtain $r_{k+1} = 0$ for some index $k$. By Lemma 5.3, this defines a finite process. Let $n$ be the maximum of $\deg f$ and $\deg g$. As, in each step, we decrease the degree of the polynomials, we have $k \le n + 1$. The computation outlined above is usually referred to as the **Euclidean algorithm**. A version of this algorithm for the ring of integers is described in Section 33.2.

We say that the polynomial $h$ is the **greatest common divisor** of the polynomials $f$ and $g$, if $h \mid f$, $h \mid g$, and, if a polynomial $h_1$ is a divisor of $f$ and of $g$, then $h_1$ is a divisor of $h$. The usual notation for the greatest common divisor of $f$ and $g$ is $\gcd(f, g)$. It follows from Theorem 5.4 that $\gcd(f, g)$ exists and it is unique up to a scalar multiple.

**Theorem 5.12 **
*Suppose that $f, g \in F[x]$ are polynomials, that $g \neq 0$, and let $n$ be the maximum of $\deg f$ and $\deg g$. Assume, further, that the number $k$ and the polynomial $r_k$ are defined by the procedure above. Then*

*(i) $r_k = \gcd(f, g)$.*

*(ii) There are polynomials $u, v \in F[x]$ with degree at most $n$ such that*

$$r_k = u f + v g. \qquad (5.1)$$

*(iii) With a given input $f, g$, the polynomials $u$, $v$ and $r_k$ can be computed using $O(n^2)$ field operations in $F$.*

**Proof. **(i) Going backwards in the Euclidean algorithm, it is easy to see that the polynomial $r_k$ divides each of the $r_i$, and so it divides both $f = r_0$ and $g = r_1$. The same way, if a polynomial $h$ divides $f$ and $g$, then it divides $r_i$, for all $i$, and, in particular, it divides $r_k$. Thus $r_k = \gcd(f, g)$.

(ii) The claim is obvious if $f = 0$, and so we may assume without loss of generality that $f \neq 0$. Starting at the beginning of the Euclidean sequence, it is easy to see that there are polynomials $u_i, v_i \in F[x]$ such that

$$u_i f + v_i g = r_i. \qquad (5.2)$$

We observe that (5.2) also holds if we substitute $u_i$ by its remainder $u_i'$ after dividing by $g$ and substitute $v_i$ by its remainder $v_i'$ after dividing by $f$. In order to see this, we compute

$$u_i' f + v_i' g \equiv u_i f + v_i g = r_i \pmod{fg},$$

and notice that the degrees of the polynomials on both sides of this congruence are smaller than $\deg(fg)$. This gives

$$u_i' f + v_i' g = r_i.$$

(iii) Once we have determined the polynomials $r_{i-1}$, $r_i$, $u_i$ and $v_i$, the polynomials $r_{i+1}$, $u_{i+1}$ and $v_{i+1}$ can be obtained using $O(n \cdot \deg q_i)$ field operations in $F$. Initially we have $u_0 = 1$, $v_0 = 0$, $u_1 = 0$ and $v_1 = 1$. As $\sum_i \deg q_i \le n$, the claim follows.

*Remark.* Traditionally, the Euclidean algorithm is only used to compute the greatest common divisor. The version that also computes the polynomials $u$ and $v$ in (5.1) is usually called the **extended Euclidean algorithm**. In Chapter 6 the reader can find a discussion of the Euclidean algorithm for polynomials. It is relatively easy to see that the polynomials $u$, $v$ and $\gcd(f, g)$ in (5.1) can, in fact, be computed using $O(n^2)$ field operations. The cost of the asymptotically best method is $\widetilde{O}(n)$.
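As an illustration, the extended Euclidean algorithm over $\mathbb{F}_p[x]$ can be sketched as follows (coefficient lists indexed by degree; all names are ours, and the quadratic schoolbook division is used rather than the asymptotically fast methods mentioned above):

```python
def poly_ext_gcd(f, g, p):
    """Extended Euclidean algorithm in F_p[x] (p prime, coefficient lists,
    index = degree). Returns (d, u, v) with u*f + v*g = d = gcd(f, g), d monic."""
    def trim(a):                                   # drop zero leading terms
        while len(a) > 1 and a[-1] == 0:
            a.pop()
        return a
    def divmod_p(a, b):                            # division with remainder
        a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
        inv = pow(b[-1], p - 2, p)                 # inverse of the lead coeff of b
        while len(a) >= len(b) and any(a):
            s, c = len(a) - len(b), a[-1] * inv % p
            q[s] = c
            a = [(x - c * y) % p for x, y in zip(a, [0] * s + b)]
            trim(a)
        return q, a
    def sub_mul(a, q, b):                          # a - q*b modulo p
        r = a[:] + [0] * (len(q) + len(b) - 1 - len(a))
        for i, qi in enumerate(q):
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] - qi * bj) % p
        return trim(r)
    r0, r1 = trim(f[:]), trim(g[:])
    u0, u1, v0, v1 = [1], [0], [0], [1]
    while any(r1):
        q, r = divmod_p(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, sub_mul(u0, q, u1)
        v0, v1 = v1, sub_mul(v0, q, v1)
    inv = pow(r0[-1], p - 2, p)                    # normalise the gcd to be monic
    return ([c * inv % p for c in r0],
            [c * inv % p for c in u0],
            [c * inv % p for c in v0])

# gcd(x + 1, x + 2) = 1 in F_5[x]; indeed 4*(x + 1) + 1*(x + 2) = 5x + 6 = 1:
assert poly_ext_gcd([1, 1], [2, 1], 5) == ([1], [4], [1])
```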

The derivative of a polynomial is often useful when investigating multiple factors. The **derivative** of the polynomial

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$

is the polynomial

$$f'(x) = a_1 + 2a_2 x + \cdots + n a_n x^{n-1}.$$

It follows immediately from the definition that the map $f \mapsto f'$ is an $F$-linear mapping $F[x] \to F[x]$. Further, for $f, g \in F[x]$ and $a \in F$, the equations $(f + g)' = f' + g'$ and $(af)' = af'$ hold. The derivative of a product can be computed using the **Leibniz rule:** for all $f, g \in F[x]$ we have $(fg)' = f'g + fg'$. As the derivation is a linear map, in order to show that the Leibniz rule is valid, it is enough to verify it for polynomials of the form $f = x^i$ and $g = x^j$. It is easy to see that, for such polynomials, the Leibniz rule is valid, as $(x^{i+j})' = (i + j)x^{i+j-1} = ix^{i-1} \cdot x^j + x^i \cdot jx^{j-1}$.
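Both the definition of the formal derivative and the Leibniz rule are easy to check mechanically (a sketch; the helper names are ours). Note also the characteristic-$p$ phenomenon used below: over $\mathbb{F}_3$, the derivative of $x^3$ is $3x^2 = 0$.

```python
def deriv(f, p=0):
    """Formal derivative of a coefficient list; coefficients mod p if p > 0."""
    d = [i * c for i, c in enumerate(f)][1:] or [0]
    return [c % p for c in d] if p else d

def poly_mul(f, g):
    """Convolution product of coefficient lists."""
    d = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            d[i + j] += a * b
    return d

def poly_add(f, g):
    """Coefficientwise sum, padding the shorter list with zeros."""
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

# Leibniz rule (fg)' = f'g + fg' for f = 1 + 3x^2 and g = 2 + x:
f, g = [1, 0, 3], [2, 1]
assert deriv(poly_mul(f, g)) == poly_add(poly_mul(deriv(f), g),
                                         poly_mul(f, deriv(g)))
# in characteristic 3 the derivative of x^3 vanishes:
assert deriv([0, 0, 0, 1], p=3) == [0, 0, 0]
```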

The derivative $f'$ is sensitive to multiple factors in the irreducible factorisation of $f$.

**Lemma 5.13 **
*Let $F$ be an arbitrary field, and assume that $f \in F[x]$ and $f = u^k v$ where $u, v \in F[x]$ and $k \ge 2$. Then $u^{k-1}$ divides the derivative $f'$ of the polynomial $f$.*

**Proof. **Using induction on $k$ and the Leibniz rule, we find $(u^k)' = k u^{k-1} u'$. Thus, applying the Leibniz rule again, $f' = (u^k)'v + u^k v' = k u^{k-1} u' v + u^k v'$. Hence $u^{k-1} \mid f'$.

In many cases the converse of the last lemma also holds.

**Lemma 5.14 **
*Let $F$ be an arbitrary field, and assume that $f \in F[x]$ and $f = u^k v$ where the polynomials $u$ and $v$ are relatively prime. Suppose further that $ku' \neq 0$ (for instance $F$ has characteristic 0 and $u$ is non-constant). Then the derivative $f'$ is not divisible by $u^k$.*

**Proof. **By the Leibniz rule, $f' = k u^{k-1} u' v + u^k v'$. Since $\deg u'$ is smaller than $\deg u$, we obtain that $k u^{k-1} u'$ is not divisible by $u^k$, and neither is the product $k u^{k-1} u' v$, as $u$ and $v$ are relatively prime. On the other hand, $u^k$ does divide the term $u^k v'$, and so it cannot divide $f'$.

Using the following theorem, the ring $F[x]/(f)$ can be assembled from rings of the form $F[x]/(g)$ where $g \mid f$.

**Theorem 5.15 **
**(Chinese remainder theorem for polynomials)**
*Let $f_1, \dots, f_k \in F[x]$ be pairwise relatively prime polynomials with positive degree and set $f = f_1 f_2 \cdots f_k$. Then the rings $F[x]/(f)$ and $F[x]/(f_1) \oplus \cdots \oplus F[x]/(f_k)$ are isomorphic. The mapping realizing the isomorphism is*

$$\phi: [g]_f \mapsto \left([g]_{f_1}, \dots, [g]_{f_k}\right).$$

**Proof. **First we note that the map $\phi$ is well-defined. If $h \in [g]_f$, then $h = g + f^* f$ for some polynomial $f^* \in F[x]$, which implies, as $f_i \mid f$, that $h$ and $g$ give the same remainder after division by the polynomial $f_i$, that is, $[h]_{f_i} = [g]_{f_i}$.

The mapping $\phi$ is clearly a ring homomorphism, and it is also a linear mapping between two vector spaces over $F$. The mapping $\phi$ is one-to-one; for, if $\phi([g]_f) = \phi([h]_f)$, then $f_i \mid g - h$ ($1 \le i \le k$), which, as the $f_i$ are pairwise relatively prime, gives $f \mid g - h$, and so $[g]_f = [h]_f$.

The dimensions of the vector spaces $F[x]/(f)$ and $F[x]/(f_1) \oplus \cdots \oplus F[x]/(f_k)$ coincide: indeed, both spaces have dimension $\deg f = \deg f_1 + \cdots + \deg f_k$ over $F$. Lemma 5.1 implies that $\phi$ is an isomorphism between vector spaces. It only remains to show that $\phi$ preserves the multiplication; this, however, is left to the reader.
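The theorem can be observed concretely in the special case $f = (x - 1)(x - 2)$ over $\mathbb{F}_5$: the remainder of $g$ modulo $x - a$ is the value $g(a)$, so $\phi$ sends the class of $g = b + ax$ to the pair of evaluations $(g(1), g(2))$. The sketch below (names ours) checks that this map is a one-to-one correspondence.

```python
p = 5  # work over F_5 with f = (x - 1)(x - 2), so deg f = 2

def phi(a: int, b: int) -> tuple:
    """Image of the class of b + a*x: its remainders modulo x - 1 and x - 2,
    i.e. the evaluations of the representative at 1 and at 2."""
    g = lambda t: (b + a * t) % p
    return (g(1), g(2))

# phi maps the p*p classes [b + a*x] bijectively onto F_5 x F_5:
images = {phi(a, b) for a in range(p) for b in range(p)}
assert len(images) == p * p
```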

**Exercises**

5.1-1 Let $f \in F[x]$ be a polynomial. Show that the residue class ring $F[x]/(f)$ has no zero divisors if and only if $f$ is irreducible.

5.1-2 Let $R$ be a commutative ring with an identity. A subset $I \subseteq R$ is said to be an **ideal**, if $I$ is an additive subgroup, and $a \in I$, $b \in R$ imply $ab \in I$. Show that $R$ is a field if and only if its ideals are exactly $\{0\}$ and $R$.

5.1-3 Let $a_1, \dots, a_k \in R$. Let $(a_1, \dots, a_k)$ denote the smallest ideal in $R$ that contains the elements $a_1, \dots, a_k$. Show that $(a_1, \dots, a_k)$ always exists, and it consists of the elements of the form $b_1 a_1 + b_2 a_2 + \cdots + b_k a_k$ where $b_1, \dots, b_k \in R$.

5.1-4 A commutative ring $R$ with an identity and without zero divisors is said to be a **principal ideal domain** if, for each ideal $I$ of $R$, there is an element $a \in I$ such that (using the notation of the previous exercise) $I = (a)$. Show that $\mathbb{Z}$ and $F[x]$, where $F$ is a field, are principal ideal domains.

5.1-5 Suppose that $R$ is a commutative ring with an identity, that $I$ is an ideal in $R$, and that $a, b \in R$. Define a relation on $R$ as follows: $a \equiv b \pmod{I}$ if and only if $a - b \in I$. Verify the following:

*a.)* The relation $\equiv$ is an equivalence relation on $R$.

*b.)* Let $[a]$ denote the equivalence class containing an element $a$, and let $R/I$ denote the set of equivalence classes. Set $[a] + [b] = [a + b]$, and $[a][b] = [ab]$. Show that, with respect to these operations, $R/I$ is a commutative ring with an identity.

*Hint.* Follow the argument in the proof of Theorem 5.7.

5.1-6 Let $F$ be a field and let $f, g \in F[x]$ such that $\gcd(f, g) = 1$. Show that there exists a polynomial $h \in F[x]$ such that $hg \equiv 1 \pmod{f}$.

*Hint.* Use the Euclidean algorithm.