Chapter 6. Computer Algebra

Computer systems performing various mathematical computations are indispensable in modern science and technology. We are able to compute the orbits of planets and stars, control nuclear reactors, and describe and model many natural forces. These computations can be numerical or symbolic.

Although numerical computations may involve not only elementary arithmetical operations (addition, subtraction, multiplication, division) but also more sophisticated calculations, such as computing numerical values of mathematical functions, finding roots of polynomials or computing numerical eigenvalues of matrices, these operations can only be carried out on numbers. Furthermore, in most cases these numbers are not exact: their precision depends on the floating-point arithmetic of the given computer hardware architecture.

Unlike numerical calculations, symbolic and algebraic computations operate on symbols that represent mathematical objects. These objects may be numbers such as integers, rational numbers, real and complex numbers, but may also be polynomials, rational and trigonometric functions, equations, algebraic structures such as groups, rings, ideals, algebras or elements of them, or even sets, lists, tables.

Computer systems with the ability to handle symbolic computations are called computer algebra systems or symbolic and algebraic systems or formula manipulation systems. In most cases, these systems are able to handle both numerical and graphical computations. The word “symbolic” emphasises that, during the problem-solving procedure, the objects are represented by symbols, and the adjective “algebraic” refers to the algebraic origin of the operations on these symbolic objects.

To characterise the notion “computer algebra”, one can describe it as a collection of computer programs developed basically to perform exact computations on symbolically represented mathematical objects.

On the other hand, computer algebra can be viewed as a discipline which has been developed in order to invent, analyse and implement efficient mathematical algorithms based on exact arithmetic for scientific research and applications.

Since computer algebra systems are able to perform error-free computations with arbitrary precision, first we have to clarify the data structures assigned to the various objects. Subsection 6.1 deals with the problems of representing mathematical objects. Furthermore, we describe the symbolic algorithms which are indispensable in modern science and practice.

The problems of natural sciences are mainly expressed in terms of mathematical equations. Research in solving symbolic linear systems is based on the well-known elimination methods. To find the solutions of non-linear systems, we first analyse different versions of the Euclidean algorithm and the method of resultants. In the mid-sixties of the last century, Bruno Buchberger presented a method in his PhD thesis for solving multivariate polynomial equations of arbitrary degree. This method is known as the Gröbner basis method. At that time, the mathematical community paid little attention to his work, but it has since become the basis of a powerful set of tools for computing with higher degree polynomial equations. This topic is discussed in Subsections 6.2 and 6.3.

The next area to be introduced is the field of symbolic integration. Although the nature of the problem was understood long ago (Liouville's principle), it was only in 1969 that Robert Risch invented an algorithm to solve the following: given an elementary function $f(x)$ of a real variable $x$, decide whether the indefinite integral $\int f(x)\,dx$ is also an elementary function, and if so, compute the integral. We describe the method in Subsection 6.4.

At the end of this section, we offer a brief survey of the theoretical and practical relations of symbolic algorithms in Subsection 6.5, devoting a separate part to the computer algebra systems in use today.

6.1 Data representation

In computer algebra, one encounters mathematical objects of different kinds. In order to be able to manipulate these objects on a computer, one first has to represent and store them in the memory of that computer. This can cause several theoretical and practical difficulties. We examine these questions in this subsection.

Consider the integers. We know from our studies that the set of integers is countable, but computers can only store finitely many of them. The range of values for such a single-precision integer is limited by the number of distinct encodings that can be made in the computer word, which is typically 32 or 64 bits in length. Hence, one cannot directly use the computer's integers to represent the mathematical integers, but must be prepared to write programs to handle “arbitrarily” large integers represented by several computer integers. The term arbitrarily large does not mean infinitely large, since architectural constraints or the available memory impose a limit in any case. Moreover, one has to construct data structures over which efficient operations can be built. In fact, there are two standard ways of performing such a representation.

  • Radix notation (a generalisation of conventional decimal notation), in which $n$ is represented as $\sum_{i=0}^{k-1} d_i B^i$, where the digits $d_i$ are single-precision integers. These can be chosen from the canonical digit set $\{0, 1, \ldots, B-1\}$ or from the symmetrical digit set $\{-\lfloor B/2 \rfloor, \ldots, \lfloor B/2 \rfloor\}$, where the base $B$ can, in principle, be any integer greater than $1$. For efficiency, $B$ is chosen so that $B-1$ is representable in a single computer word. The length $k$ of the linear list $(d_0, d_1, \ldots, d_{k-1})$ used to represent a multiprecision integer may be dynamic (i.e. chosen approximately for the particular integer being represented) or static (i.e. pre-specified fixed length), depending on whether the linear list is implemented using linked list allocation or using array (sequential) notation. The sign of $n$ is stored within the list, possibly as the sign of $d_0$ or of one or more of the other entries.

  • Modular notation, in which $n$ is represented by its value modulo a sufficient number of large (but representable in one computer word) primes $p_1, p_2, \ldots, p_k$. From the images $n \bmod p_i$, one can reconstruct $n$ using the Chinese remainder algorithm.

The modular form is fast for addition, subtraction and multiplication, but is much slower for divisibility tests. Hence, the choice of representation influences the algorithms that will be chosen; indeed, not only does the choice of representation influence the algorithms to be used, but the algorithms also influence the choice of representation.
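
To make the radix idea concrete, the following minimal Python sketch stores a non-negative integer as a list of base-$B$ digits (least significant digit first) and adds two such lists with schoolbook carry propagation. The choice $B = 10^4$ and the function names are our own illustrative assumptions, not a production bignum library.

    B = 10 ** 4   # base chosen so every digit fits comfortably in a machine word

    def to_radix(n):
        """Split a non-negative integer into base-B digits, least significant first."""
        digits = []
        while True:
            n, d = divmod(n, B)
            digits.append(d)
            if n == 0:
                return digits

    def from_radix(digits):
        """Inverse of to_radix: recombine the digit list into an integer."""
        n = 0
        for d in reversed(digits):
            n = n * B + d
        return n

    def add_radix(a, b):
        """Schoolbook addition of two digit lists with carry propagation."""
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            carry, d = divmod(s, B)
            result.append(d)
        if carry:
            result.append(carry)
        return result

    a = to_radix(123456789)            # [6789, 2345, 1]
    b = to_radix(987654321)            # [4321, 8765, 9]
    assert from_radix(add_radix(a, b)) == 123456789 + 987654321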

Example 6.1 For the sake of simplicity, in the next example we work only with natural numbers. Suppose that we have a computer architecture with machine words 32 bits in length, i.e. our computer is able to perform integer arithmetic with the integers in the range $-2^{31} \le n \le 2^{31}-1$. Using this arithmetic, we can construct a new arithmetic by which we are able to perform integer arithmetic on integers far beyond this range.

Using radix representation, let $B = 10^4$, and let $a_1$ and $a_2$ be two multiprecision integers given by their lists of base-$B$ digits. Then the sum $a_1 + a_2$ and the product $a_1 \cdot a_2$ are again obtained as digit lists, where the sum and the product were computed using radix notation: the digits are combined position by position, with carries propagated in base $B$.

Switching to modular representation, we have to choose pairwise relatively prime integers $m_1, m_2, \ldots, m_k$, each representable in one machine word, such that their product $m_1 m_2 \cdots m_k$ exceeds the largest value to be represented. An integer $n$ in this range is then represented by the $k$-tuple of its residues $(n \bmod m_1, \ldots, n \bmod m_k)$. The sum and the product of two such integers are obtained by combining the corresponding residues, where addition and multiplication were carried out using modular arithmetic; the result is recovered with the Chinese remainder algorithm.
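
The modular technique of this example can be checked mechanically. The following Python sketch uses its own, hypothetical choice of word-sized primes; any pairwise coprime moduli whose product exceeds the results would do.

    from math import prod

    PRIMES = [99991, 99989, 99971]     # pairwise coprime, single-word moduli
    M = prod(PRIMES)                   # results must stay below M

    def to_modular(n):
        return [n % p for p in PRIMES]

    def mul_modular(a, b):
        return [x * y % p for x, y, p in zip(a, b, PRIMES)]

    def from_modular(residues):
        """Chinese remainder reconstruction of the unique n in [0, M)."""
        n = 0
        for r, p in zip(residues, PRIMES):
            q = M // p
            n += r * q * pow(q, -1, p)   # pow(q, -1, p) is q's inverse mod p
        return n % M

    x, y = 123456, 654321
    assert from_modular(mul_modular(to_modular(x), to_modular(y))) == x * y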

More generally, concerning the choice of representation of other mathematical objects, it is worth distinguishing three levels of abstraction:

  1. Object level. This is the level where the objects are considered as formal mathematical objects. For example, $4$, $8/2$ and $2 + 2$ are all representations of the integer $4$. On the object level, the polynomials $(x+1)^2$ and $x^2 + 2x + 1$ are considered equal.

  2. Form level. On this level, one has to distinguish between different representations of an object. For example, $(x+1)^2$ and $x^2 + 2x + 1$ are considered different representations of the same polynomial: the former is a product, the latter is a sum.

  3. Data structure level. On this level, one has to consider different ways of representing an object in a computer memory. For example, we distinguish between representations of the polynomial $x^4 - 1$ as

    • an array $[-1, 0, 0, 0, 1]$ holding all coefficients, or

    • a linked list $(0, -1) \rightarrow (4, 1)$ holding only the non-zero terms.
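
The two data structures can be sketched in Python as follows; the Node class and its field names are, of course, just one possible layout.

    # Two data-structure-level representations of x**4 - 1:

    dense = [-1, 0, 0, 0, 1]          # array: index i holds the coefficient of x**i

    class Node:
        """An (exponent, coefficient) cell of a singly linked list."""
        def __init__(self, exp, coeff, nxt=None):
            self.exp, self.coeff, self.next = exp, coeff, nxt

    sparse = Node(4, 1, Node(0, -1))  # x**4 - 1 as two nodes

    def evaluate(node, x):
        """Evaluate the linked-list polynomial at x by walking the list."""
        total = 0
        while node is not None:
            total += node.coeff * x ** node.exp
            node = node.next
        return total

    assert evaluate(sparse, 2) == 2 ** 4 - 1 == sum(c * 2 ** i for i, c in enumerate(dense))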

In order to represent objects in a computer algebra system, one has to make choices on both the form and the data structure level. Clearly, various representations are possible for many objects. The problem of “how to represent an object” becomes even more difficult when one takes into consideration other criteria, such as memory space, computation time, or readability. Let us see an example. For the polynomial

$f(x) = (x-1)^4 (x+1)^4 = x^8 - 4x^6 + 6x^4 - 4x^2 + 1$

the product form is more comprehensible, but the expanded form is more suitable for reading off the coefficient of, say, $x^4$. Two other illustrative examples are

  • $x^{1000} - 1$ and $(x-1)(x^{999} + x^{998} + \cdots + x + 1)$,

  • $(x+1)^{1000}$ and $x^{1000} + 1000x^{999} + \cdots + 1000x + 1$.

It is very hard to find a representation strategy for mathematical objects that satisfies several criteria at once. In practice, one object may have several different representations. This, however, gives rise to the problem of detecting equality when different representations of the same object are encountered. In addition, one has to be able to convert a given representation to others and to simplify the representations.
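
One common way to detect such equalities is to convert both representations to a canonical form and compare. Here is a Python sketch for univariate polynomials over the integers, with expanded coefficient lists as the assumed canonical form:

    def poly_mul(a, b):
        """Product of two coefficient lists (index = exponent)."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] += ai * bj
        return out

    def canonical(coeffs):
        """Strip trailing zeros so equal polynomials get equal lists."""
        c = list(coeffs)
        while len(c) > 1 and c[-1] == 0:
            c.pop()
        return c

    product_form = poly_mul([1, 1], [1, 1])   # (x + 1) * (x + 1)
    sum_form = [1, 2, 1]                      # 1 + 2x + x**2
    assert canonical(product_form) == canonical(sum_form)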

Consider the integers. At the form level, one can represent the integers using base-$B$ radix notation, while at the data structure level they can be represented by a linked list or as an array.

Rational numbers can be represented by two integers, a numerator and a denominator. Considering memory constraints, one needs to ensure that rational numbers are in lowest terms and also that the denominator is positive (although other choices, such as positive numerator, are also possible). This implies that a greatest common divisor computation has to be performed. Since the ring of integers is a Euclidean domain, this can be easily computed using the Euclidean algorithm. The uniqueness of the representation follows from the choice of the denominator's sign.
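A Python sketch of this normalisation follows; Python's own fractions.Fraction applies the same convention.

    from math import gcd

    def normalise(num, den):
        """Return (num, den) in lowest terms with a positive denominator."""
        if den == 0:
            raise ZeroDivisionError("denominator must be non-zero")
        if den < 0:                # move the sign to the numerator
            num, den = -num, -den
        g = gcd(num, den)          # gcd(0, d) == d, so 0/d becomes 0/1
        return num // g, den // g

    assert normalise(6, -4) == (-3, 2)     # a unique representation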

Multivariate polynomials (elements of $R[x_1, x_2, \ldots, x_n]$, where $R$ is an integral domain) can be represented in the form $\sum_i a_i x^{e_i}$, where $a_i \in R \setminus \{0\}$ and, for the exponent vectors $e_i = (e_{i,1}, \ldots, e_{i,n})$ with $e_{i,j} \ge 0$, one can write $x^{e_i}$ for $x_1^{e_{i,1}} x_2^{e_{i,2}} \cdots x_n^{e_{i,n}}$. At the form level, one can consider the following levels of abstraction:

  1. Expanded or factored representation, where the products are multiplied out or the expression is in product form. Compare

    • $x^2 - y^2$, and

    • $(x - y)(x + y)$.

  2. Recursive or distributive representation (only for multivariate polynomials). In the bivariate case, the polynomial $f(x, y)$ can be viewed as an element of the domain $R[x, y]$, $(R[x])[y]$ or $(R[y])[x]$. Compare

    • $f(x, y) = x^2 y^2 + x^2 y + x y^2 + 1$,

    • $f(x, y) = (y^2 + y) x^2 + y^2 x + 1$, and

    • $f(x, y) = (x^2 + x) y^2 + x^2 y + 1$.

At the data structure level, there can be dense or sparse representation: either all terms are considered, or only those having non-zero coefficients. Compare the dense representation of $x^{100} + 1$ by a coefficient list of length $101$ with its sparse representation by the two (exponent, coefficient) pairs $(100, 1)$ and $(0, 1)$. In practice, multivariate polynomials are represented mainly in the sparse way.
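
A sparse multivariate representation is conveniently sketched in Python as a dictionary mapping exponent tuples to non-zero coefficients; the polynomial below is a hypothetical example chosen to make the dense/sparse contrast stark.

    # A sparse bivariate polynomial: keys are exponent tuples (i, j)
    # for x**i * y**j, values are the non-zero coefficients.
    # For x**100 * y + 3, a dense table of all coefficients up to
    # degree (100, 1) would hold 202 entries, almost all zero;
    # the sparse form keeps exactly two.

    sparse = {(100, 1): 1, (0, 0): 3}      # x**100 * y + 3

    def sparse_add(p, q):
        """Add two sparse polynomials, dropping terms that cancel."""
        out = dict(p)
        for exp, c in q.items():
            out[exp] = out.get(exp, 0) + c
            if out[exp] == 0:
                del out[exp]
        return out

    assert sparse_add(sparse, {(0, 0): -3}) == {(100, 1): 1}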

The traditional approach of representing power series of the form $\sum_{i=0}^{\infty} a_i x^i$ is to truncate at some specified point, and then to regard them as univariate polynomials. However, this is not a true representation, since many power series can have the same representation. To overcome this disadvantage, there exists a technique of representing power series by a procedure generating all coefficients (rather than by any finite list of coefficients). The generating function is a computable function $f$ such that $f(i) = a_i$. To perform an operation with power series, it is enough to know how to compute the coefficients of the resulting series from the coefficients of the operands. For example, the coefficients of the product of the power series $\sum_i a_i x^i$ and $\sum_i b_i x^i$ can be computed as $c_k = \sum_{i=0}^{k} a_i b_{k-i}$. In that way, the coefficients are computed only when they are needed. This technique is called lazy evaluation.
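
Here is a Python sketch of lazy evaluation, representing a series by its coefficient function and forming the Cauchy product on demand; the use of exp(x) as the test series is our own choice.

    from fractions import Fraction
    from functools import lru_cache
    from math import factorial

    def exp_series(i):
        """i-th coefficient of exp(x), i.e. 1/i!, kept exact as a Fraction."""
        return Fraction(1, factorial(i))

    def product(f, g):
        """Generating function of the product series (Cauchy convolution)."""
        @lru_cache(maxsize=None)
        def coeff(k):
            return sum(f(i) * g(k - i) for i in range(k + 1))
        return coeff

    square = product(exp_series, exp_series)    # exp(x)**2 == exp(2*x)
    # The k-th coefficient of exp(2*x) is 2**k / k!; only the six
    # requested coefficients are ever computed:
    assert all(square(k) == Fraction(2 ** k, factorial(k)) for k in range(6))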

Since computer algebra programs compute in a symbolic way with arbitrary accuracy, in addition to examining time complexity of the algorithms it is also important to examine their space complexity.

Footnote. We consider the running time as the number of operations executed, according to the RAM-model. Considering the Turing-machine model, and using machine words with constant length, we do not have this problem, since in this case space is always bounded by the time.

Consider the simple problem of solving a linear system of $n$ equations in $n$ unknowns with integer coefficients, each requiring $\omega$ computer words of storage. Using Gaussian elimination, it is easy to see that each coefficient of the reduced linear system may need $2^{n-1}\omega$ computer words of storage. In other words, Gaussian elimination suffers from exponential growth in the size of the coefficients. Note that if we applied the same method to linear systems having polynomial coefficients, we would have exponential growth both in the size of the numerical coefficients of the polynomials and in the degrees of the polynomials themselves. In spite of the observed exponential growth, the final result of the Gaussian elimination will always be of reasonable size, because by Cramer's rule we know that each component of the solution to such a linear system is a ratio of two determinants, each of which requires only approximately $n\omega$ computer words. The phenomenon described above is called intermediate expression swell. This often appears in computer algebra algorithms.

Example 6.2 Using only integer arithmetic, we solve a system of three linear equations in three unknowns with small integer coefficients (a concrete instance is worked out in the sketch below).

First, we eliminate the first variable from the second equation: we multiply the first row by the leading coefficient of the second, the second row by the leading coefficient of the first, and take their difference. We apply the same strategy to the third equation to eliminate the first variable there as well.

Next, we eliminate the second variable from the third equation in the same way, cross-multiplying the second and third equations and taking their difference. The coefficients of the resulting equation are already noticeably larger than those of the original system.

Continuing this process of eliminating variables, we get a triangular system whose last equation contains a single unknown.

After some simplification, we obtain the value of this unknown, and back substitution yields the remaining ones. If we apply greatest common divisor computations in each elimination step, the coefficient growth will be less drastic.
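
The following Python sketch performs this fraction-free elimination on a hypothetical 3-by-3 integer system, dividing each new row by the greatest common divisor of its entries to damp the coefficient growth, as suggested above.

    from math import gcd
    from functools import reduce

    def eliminate(rows, col):
        """Zero out column `col` below row `col` by cross-multiplication."""
        pivot = rows[col]
        out = rows[:col + 1]
        for row in rows[col + 1:]:
            new = [pivot[col] * r - row[col] * p for p, r in zip(pivot, row)]
            g = reduce(gcd, new) or 1          # content of the row ...
            out.append([c // g for c in new])  # ... divided out to damp growth
        return out

    # Augmented matrix [A | b] of a hypothetical system with solution (1, 2, 3):
    system = [[2,  3, -1,  5],
              [4, -1,  2,  8],
              [1,  5,  3, 20]]
    system = eliminate(eliminate(system, 0), 1)
    print(system[2])   # [0, 0, -1, -3], i.e. -z = -3, so z = 3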

In order to avoid the intermediate expression swell phenomenon, modular techniques are used. Instead of performing the operations in the base structure $R$ (e.g. a Euclidean ring), they are performed in some factor structure of $R$, and then the result is transformed back to $R$ (Figure 6.1). In general, modular computations can be performed efficiently, and the reconstruction steps can be made with some interpolation strategy.

Figure 6.1. The general scheme of modular computations.
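
As an illustration of this scheme, the following Python sketch computes an integer determinant modulo several primes and reconstructs it by Chinese remaindering. The primes, the matrix and the helper names are hypothetical assumptions; a real system would also bound the determinant in advance (e.g. via Hadamard's inequality) to know how many primes suffice.

    from math import prod

    def det_mod_p(matrix, p):
        """Determinant mod a prime p, by Gaussian elimination over Z_p."""
        m = [[x % p for x in row] for row in matrix]
        n, det = len(m), 1
        for c in range(n):
            pivot = next((r for r in range(c, n) if m[r][c]), None)
            if pivot is None:
                return 0
            if pivot != c:
                m[c], m[pivot] = m[pivot], m[c]
                det = -det                     # a row swap flips the sign
            det = det * m[c][c] % p
            inv = pow(m[c][c], -1, p)
            for r in range(c + 1, n):
                f = m[r][c] * inv % p
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[c])]
        return det % p

    def crt(residues, moduli):
        """Chinese remainder reconstruction in [0, prod(moduli))."""
        M = prod(moduli)
        x = sum(r * (M // p) * pow(M // p, -1, p)
                for r, p in zip(residues, moduli))
        return x % M

    PRIMES = [10007, 10009, 10037]
    A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
    d = crt([det_mod_p(A, p) for p in PRIMES], PRIMES)
    M = prod(PRIMES)
    if d > M // 2:     # map the residue back to a signed integer
        d -= M
    print(d)           # -90, the determinant of A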