So far in this chapter we have tried to illustrate the algorithm design problems of computer algebra through the presentation of a few important symbolic algorithms. Below, the interested reader will find an overview of the wider universe of research on symbolic algorithms.
Besides the resultant method and the theory of Gröbner bases presented in this chapter, there also exist algorithms for finding real symbolic roots of non-linear equations and inequalities (Collins).
There are some remarkable algorithms in the area of symbolic solution of differential equations. There exists a decision procedure, similar to the Risch algorithm, for computing closed-form solutions of homogeneous second-order ordinary differential equations with rational function coefficients. For higher-order linear equations, Abramov's procedure gives the closed rational solutions of an equation with polynomial coefficients, while Bronstein's algorithm gives solutions of the form exp(∫ f(x) dx). In the case of partial differential equations, Lie's symmetry methods can be used. There also exists an algorithm for the factorisation of linear differential operators over formal power series and rational functions.
Procedures based on factorisation are of great importance in the research of computer algebra algorithms. They are so important that many date the birth of the entire research field to Berlekamp's publication of an effective algorithm for the factorisation of polynomials of one variable over finite fields of small characteristic. Later, Berlekamp extended his results to larger characteristic; in order to keep similarly good running times, he introduced probabilistic elements into the algorithm. Today's computer algebra systems routinely use Berlekamp's procedure even for large finite fields, perhaps without most of the users knowing about the probabilistic origin of the algorithm. The method will be presented in another chapter of the book. We note that there are numerous algorithms for the factorisation of polynomials over finite fields.
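To give the flavour of factorisation over finite fields, the following pure-Python sketch (helper names are our own, and p is assumed prime) implements one classical ingredient used by such algorithms: over F_p, the gcd of x^p − x and f collects exactly the distinct linear factors of f, from which the roots of f in F_p can be read off.

```python
def trim(a):
    """Drop leading (high-degree) zero coefficients in place."""
    while a and a[-1] == 0:
        a.pop()
    return a

def polmul(a, b, p):
    """Product of two polynomials over F_p (coefficients low-degree first)."""
    r = [0] * max(len(a) + len(b) - 1, 0)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return trim(r)

def polmod(a, f, p):
    """Remainder of a modulo f over F_p."""
    a = a[:]
    inv = pow(f[-1], -1, p)              # p prime, so the inverse exists
    while len(a) >= len(f):
        c = a[-1] * inv % p
        shift = len(a) - len(f)
        for i, fi in enumerate(f):
            a[shift + i] = (a[shift + i] - c * fi) % p
        trim(a)
    return a

def polgcd(a, b, p):
    """Monic gcd over F_p by the Euclidean algorithm."""
    while b:
        a, b = b, polmod(a, b, p)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]

def xpow_mod(e, f, p):
    """x**e modulo f over F_p by square-and-multiply."""
    result, base = [1], [0, 1]
    while e:
        if e & 1:
            result = polmod(polmul(result, base, p), f, p)
        base = polmod(polmul(base, base, p), f, p)
        e >>= 1
    return result

p = 3
f = [2, 0, 0, 0, 1]                      # x^4 + 2 = (x-1)(x-2)(x^2+1) mod 3
xp = xpow_mod(p, f, p)                   # x^p mod f
xp = xp + [0] * max(0, 2 - len(xp))
xp[1] = (xp[1] - 1) % p                  # x^p - x
g = polgcd(f, trim(xp), p)               # product of the distinct linear factors
roots = [r for r in range(p)
         if sum(c * pow(r, i, p) for i, c in enumerate(g)) % p == 0]
```

Here g comes out as x^2 + 2 = (x−1)(x−2) mod 3, exposing the roots 1 and 2 while the irreducible quadratic factor of f is stripped away; the full Berlekamp algorithm refines this idea considerably.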
Not long after polynomial factorisation over finite fields was solved, Zassenhaus, taking van der Waerden's book Moderne Algebra of 1936 as a base, used Hensel's lemma from the arithmetic of p-adic numbers to extend factorisation. “Hensel-lifting” – as his procedure is now called – is a general approach for the reconstruction of factors from their modular images. Unlike interpolation, which needs multiple points from the image, Hensel-lifting needs only one point from the image. The Berlekamp–Zassenhaus algorithm for the factorisation of polynomials with integer coefficients is of fundamental importance, but it has two hidden pitfalls. First, for a certain type of polynomial the running time is exponential; unfortunately, many “bad” polynomials appear in the case of factorisation over algebraic number fields. Second, a representation problem appears for multivariable polynomials, similar to the one we encountered at the Gauss-elimination of sparse matrices. The first problem was solved by a Diophantine optimisation based on the geometry of numbers, the so-called lattice reduction algorithm of Lenstra, Lenstra and Lovász [ 161 ]; it is used together with Berlekamp's method. This polynomial-time algorithm is completed by a procedure which ensures that the Hensel-lifting starts from a “good” modular image and ends “in time”. Solutions have also been found for the mentioned representation problem of the factorisation of multivariable polynomials. This is the second area where randomisation plays a crucial role in the design of effective algorithms. We note that in practice, the Berlekamp–Zassenhaus–Hensel algorithm proves more effective than the Lenstra–Lenstra–Lovász procedure. By way of contrast, the problem of polynomial factorisation can be solved in polynomial time, while the best proven algorithmic bounds for the factorisation of an integer N are Õ(N^{1/4}) (Pollard and Strassen) in the deterministic case and L(N)^{1+o(1)} (Lenstra and Pomerance) in the probabilistic case, where L(N) = e^{√(ln N · ln ln N)}.
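The idea behind Hensel-lifting can be illustrated on the simplest possible case: a simple root of a polynomial modulo p is lifted to a root modulo ever higher powers of p by a Newton step, doubling the exponent each time. The sketch below is illustrative only (the function name is ours, and it lifts a root rather than a polynomial factor as the Berlekamp–Zassenhaus algorithm does); it requires Python 3.8+ for the modular inverse pow(x, -1, m).

```python
def hensel_lift_root(f, df, r, p, iters):
    """Lift a simple root r of f mod p to a root mod p**(2**iters)
    by the Newton iteration r <- r - f(r)/f'(r); this converges
    quadratically in the p-adic sense provided f'(r) is a unit mod p."""
    m = p
    for _ in range(iters):
        m = m * m
        r = (r - f(r) * pow(df(r), -1, m)) % m
    return r, m

# lift the square root of 2 modulo 7 (3*3 = 9 = 2 mod 7) up to mod 7^8
f = lambda x: x * x - 2
df = lambda x: 2 * x
r, m = hensel_lift_root(f, df, 3, 7, 3)
```

One modular image (the root 3 mod 7) suffices to reconstruct the root to any p-adic precision, which is exactly the advantage over interpolation noted above.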
In fact, a new theory of heuristic or probabilistic methods in computer algebra is being born to avoid computational or memory explosion and to make algorithms with deterministically large running times more effective. In the case of probabilistic algorithms, the probability of inappropriate operations can be positive, which may result in an incorrect answer (Monte Carlo algorithms) or – although we always get the correct answer (Las Vegas algorithms) – we may not get anything in polynomial time. Besides the above examples, nice results have been achieved in polynomial identity testing, irreducibility testing of polynomials, determining matrix normal forms (Frobenius, Hermite, Smith), etc. Their role is likely to increase in the future.
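A classic Monte Carlo example is polynomial identity testing: evaluate both polynomials, given as black boxes, at random points. A nonzero difference polynomial of low degree rarely vanishes at a random point (the Schwartz–Zippel lemma), so agreement at many points makes identity overwhelmingly likely, while a single disagreement is a proof of difference. A minimal sketch (names and bounds are our own choices):

```python
import random

def probably_identical(p, q, trials=20, bound=10**6):
    """Monte Carlo identity test for black-box polynomials p and q:
    a disagreement at any point proves p != q; agreement at all
    random points makes p == q correct with high probability."""
    for _ in range(trials):
        x = random.randrange(1, bound)
        if p(x) != q(x):
            return False      # certainly different
    return True               # identical with high probability

same = probably_identical(lambda x: (x + 1) ** 2,
                          lambda x: x * x + 2 * x + 1)
diff = probably_identical(lambda x: (x + 1) ** 2,
                          lambda x: x * x + 1)
```

The one-sided error is typical of Monte Carlo algorithms: a "different" verdict is always correct, an "identical" verdict only probably so.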
So far in the chapter we have given an overview of the most important symbolic algorithms. We mentioned in the introduction that most computer algebra systems are also able to perform numeric computations: unlike in traditional systems, the precision can be set by the user. In many cases it is useful to combine symbolic and numeric computations. Consider, for example, the symbolically computed power series solution of a differential equation: after truncation, evaluating the power series with the usual floating-point arithmetic at certain points, we get a numerical approximation of the solution. When the problem is an approximation of a physical problem, the attraction of symbolic computation is often lost, simply because the symbolic results are too complicated or too slow to obtain, and they are neither necessary nor useful since we are looking for a numerical solution. In other cases, when the problem cannot be dealt with symbolically, the only way is numerical approximation. This may be the case when the existing symbolic algorithm does not find a closed-form solution (e.g. the integral of non-elementary functions), or when a symbolic algorithm for the specified problem does not exist. Although more and more numerical algorithms have symbolic equivalents, numerical procedures play an important role in computer algebra. Let us think of differentiation and integration: sometimes traditional algorithms – integral transformation, power series approximation, perturbation methods – can be the most effective.
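As a small illustrative sketch of this symbolic-numeric combination (the concrete equation and tolerances are our own choices): for y' = y, y(0) = 1, the coefficient recurrence a_{n+1} = a_n/(n+1) yields the truncated power series symbolically, which is then evaluated in floating point.

```python
import math

# Taylor coefficients of the solution of y' = y, y(0) = 1,
# from the symbolic recurrence a_{n+1} = a_n / (n+1)
N = 15
a = [1.0]
for n in range(N):
    a.append(a[-1] / (n + 1))

def eval_series(coeffs, x):
    """Numeric evaluation of the truncated series by Horner's scheme."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

x = 0.5
approx = eval_series(a, x)        # close to exp(0.5) for small |x|
```

Near the expansion point the truncated series matches exp(x) to machine precision; far from it, the truncation error grows, which is precisely why the symbolic and numeric phases must be balanced against each other.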
In the design of computer algebra algorithms, parallel architectures will play an increasing role in the future. Although many existing algorithms can be parallelised easily, it is not obvious that good sequential algorithms will perform optimally on parallel architectures as well: the optimal performance might be achieved by a completely different method.
The development of computer algebra systems is linked with the development of computer science and algorithmic mathematics. In the early period of computers, researchers of different fields began developing the first computer algebra systems to facilitate and accelerate their symbolic computations; these systems, reconstructed and continuously updated, are present in their umpteenth versions today. General purpose computer algebra systems appeared in the seventies; they provide a wide range of built-in data structures, mathematical functions and algorithms, trying to cover a wide range of users. Because of their large demand for computational resources, their expansion became explosive in the beginning of the eighties, when microprocessor-based workstations appeared. Better hardware environments, more effective resource management, the use of system-independent high-level languages and, last but not least, socio-economic demands gradually transformed general purpose computer algebra systems into market products, which also resulted in better user interfaces and document preparation.
Below we list the most widely known general and special purpose computer algebra systems and libraries.
General purpose computer algebra systems:
AXIOM, DERIVE, FORM, GNU-CALC, JACAL, MACSYMA, MAXIMA, MAPLE, DISTRIBUTED MAPLE, MATHCAD, MATLAB SYMBOLIC MATH TOOLBOX, SCILAB, MAS, MATHEMATICA, MATHVIEW, MOCK-MMA, MUPAD, REDUCE, RISA
Algebra and number theory:
BERGMAN, COCOA, FELIX, FERMAT, GRB, KAN, MACAULAY, MAGMA, NUMBERS, PARI, SIMATH, SINGULAR
Group theory:
GAP, LIE, MAGMA, SCHUR
Tensor analysis:
CARTAN, FEYNCALC, GRG, GRTENSOR, MATHTENSOR, REDTEN, RICCI, TTC
Computer algebra libraries:
APFLOAT, BIGNUM, GNU MP, KANT, LIDIA, NTL, SACLIB, UBASIC, WEYL, ZEN
Most general purpose computer algebra systems are characterised by:
knowledge of mathematical facts,
a high-level, declarative programming language with the possibility of functional programming and the knowledge of mathematical objects,
extensibility towards the operating system and other programs,
integration of symbolic and numeric computations,
automatic (optimised) C and Fortran code generation,
graphical user interface,
2- and 3-dimensional graphics and animation,
possibility of editing text and automatic LaTeX conversion.
Computer algebra systems are also called mathematical expert systems. Today we can see an astonishing development of general purpose computer algebra systems, mainly because of their broad knowledge and wide application area. But it would be a mistake to underestimate special purpose systems, which play a very important role in many research fields and, moreover, are in many cases easier to use and more effective, due to their system of notation and the low-level programming language implementation of their algorithms. It is essential to choose the most appropriate computer algebra system for a given problem.
Footnote. Declarative programming languages specify the desired result, unlike imperative languages, which describe how to obtain it.
Generate two pseudorandom polynomials of degree in with coefficients of decimal digits. Perform a single Euclidean division in (in ) and compute the ratio of the maximal coefficients of the remainder and the original polynomial (determined by the function ). Repeat the computation times and compute the average. What is the result? Repeat the experiment with .
Simulation of the Modular-Gcd-Smallprimes algorithm
Using simulation, give an estimate of the optimal value of the variable in the
algorithm. Use random polynomials of different degrees and coefficient magnitudes.
Modified pseudo-Euclidean division
Let . Modify the pseudo-Euclidean division in such a way that in the equation
instead of the exponent put the smallest value such that . Replace the procedures and in the
algorithm by the obtained procedures and . Compare the amount of memory space required by the algorithms.
Construction of reduced Gröbner basis
Design an algorithm that computes a reduced Gröbner basis from a given Gröbner-basis .
Implementation of Hermite-reduction
Implement Hermite-reduction in a chosen computer algebra language.
Integration of rational functions
Write a program for the integration of rational functions.
for non-negative integers are described in [
]. A natural continuation of the theory of resultants leads to subresultants, which can help in reducing the growth of the coefficients in the
algorithm (see e.g. [
Gröbner bases were introduced by B. Buchberger in 1965 [ 35 ]. Several authors had examined polynomial ideals before this. The most well-known is perhaps Hironaka, who used bases of ideals of power series to resolve singularities over fields of characteristic zero, and was awarded a Fields Medal for his work. His method, however, was not constructive. Gröbner bases have been generalised to many algebraic structures in the last two decades.
The bases of differential algebra were laid by J. F. Ritt in 1948 [
]. The square-free factorisation algorithm used in symbolic integration can be found for example in the books [
]. The importance of choosing the smallest possible extension degree in Hermite-reduction is illustrated by Example 11.11 in [
], where the splitting field has very large degree but the integral can be expressed in an extension of degree 2. The proof of the Rothstein-Trager integration algorithm can be found in [
] (Theorem 22.8). We note that the algorithm was found independently by Rothstein and Trager. The proof of the correctness of the Lazard-Rioboo-Trager formula, the analysis of the running time of the
algorithm, an overview of the procedures that deal with the difficulties of algebraic extension steps, the determination of the hyperexponential integral (if it exists) of a hyperexponential element over , the proof of Liouville's principle and the proofs of the statements connected to the Risch algorithm can be found in the book [
There are many books and publications available on computer algebra and related topics. The interested reader will find mathematical description in the following general works: Caviness [ 42 ], Davenport et al. [ 60 ], von zur Gathen et al. [ 82 ], Geddes et al. [ 88 ], Knuth [ 144 ], [ 145 ], [ 146 ], Mignotte [ 181 ], Mishra [ 184 ], Pavelle et al. [ 201 ], Winkler [ 276 ].
Computer algebra conferences: AAECC, DISCO, EUROCAL, EUROSAM, ISSAC.
Computer algebra journals: Journal of Symbolic Computation – Academic Press, Applicable Algebra in Engineering, Communication and Computing – Springer-Verlag, SIGSAM Bulletin – ACM Press.