So far in this chapter we have tried to illustrate the algorithm design problems of computer algebra through the presentation of a few important symbolic algorithms. Below, the interested reader will find an overview of the wider universe of research on symbolic algorithms.
Besides the resultant method and the theory of Gröbner bases presented in this chapter, there also exist algorithms for finding real symbolic roots of nonlinear equations and inequalities (Collins).
There are some remarkable algorithms in the area of symbolic solution of differential equations. There exists a decision procedure, similar to the Risch algorithm, for computing closed-form solutions of homogeneous second-order ordinary differential equations with rational function coefficients. In the case of higher-order linear equations, Abramov's procedure gives the closed rational solutions of an equation with polynomial coefficients, while Bronstein's algorithm gives solutions of the form . In the case of partial differential equations, Lie's symmetry methods can be used. There also exists an algorithm for the factorisation of linear differential operators over formal power series and rational functions.
Procedures based on factorisation are of great importance in the research of computer algebra algorithms. They are so important that many date the birth of the entire research field to Berlekamp's publication of an effective algorithm for the factorisation of polynomials of one variable over finite fields of small characteristic. Later, Berlekamp extended his results to larger characteristics. In order to achieve similarly good running times, he introduced probabilistic elements into the algorithm. Today's computer algebra systems routinely use Berlekamp's procedure even for large finite fields, perhaps without most of the users knowing about the probabilistic origin of the algorithm. The method is presented in another chapter of the book. We note that there are numerous algorithms for the factorisation of polynomials over finite fields.
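To make the flavour of such finite-field algorithms concrete, the following sketch (a minimal Python illustration of our own; it is not Berlekamp's algorithm itself, and all function names are ours) shows the kind of modular polynomial arithmetic they are built on: the gcd of f with x^q − x isolates the product of the distinct linear factors of f over GF(q), the first splitting step in many factorisation procedures.

```python
# Minimal polynomial arithmetic over GF(q); coefficient lists, lowest degree first.

def pmod(a, b, q):
    """Remainder of a divided by b over GF(q)."""
    a, inv = a[:], pow(b[-1], -1, q)
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        c, shift = a[-1] * inv % q, len(a) - len(b)
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bc) % q
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def pgcd(a, b, q):
    """Monic gcd over GF(q), by the Euclidean algorithm."""
    while b:
        a, b = b, pmod(a, b, q)
    inv = pow(a[-1], -1, q)
    return [c * inv % q for c in a]

def pmulmod(a, b, f, q):
    """Product a * b reduced modulo f over GF(q)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    return pmod(res, f, q)

def linear_part(f, q):
    """gcd(f, x^q - x): the product of the distinct linear factors of f over GF(q)."""
    xq, base, e = [1], [0, 1], q          # compute x^q mod f by square and multiply
    while e:
        if e & 1:
            xq = pmulmod(xq, base, f, q)
        base = pmulmod(base, base, f, q)
        e >>= 1
    xq = xq + [0] * (2 - len(xq))         # pad so the x term exists
    xq[1] = (xq[1] - 1) % q               # subtract x
    while xq and xq[-1] == 0:
        xq.pop()
    return pgcd(f, xq, q)
```

For instance, for f = (x−1)(x−2)(x²+2) over GF(5), `linear_part` returns the monic product (x−1)(x−2) = x² + 2x + 2, while for the irreducible x² + 2 it returns the constant 1.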
Not long after polynomial factorisation over finite fields was solved, Zassenhaus, taking van der Waerden's 1936 book Moderne Algebra as a basis, used Hensel's lemma for the arithmetic of p-adic numbers to extend factorisation. “Hensel lifting” – as his procedure is now called – is a general approach for the reconstruction of factors from their modular images. Unlike interpolation, which needs multiple points from the image, Hensel lifting needs only one. The Berlekamp–Zassenhaus algorithm for the factorisation of polynomials with integer coefficients is of fundamental importance, but it has two hidden pitfalls. First, for a certain type of polynomial the running time is exponential; unfortunately, many “bad” polynomials appear in the case of factorisation over algebraic number fields. Second, a representation problem appears for multivariate polynomials, similar to what we encountered in the Gaussian elimination of sparse matrices. The first problem was solved by a Diophantine optimisation based on the geometry of numbers, the so-called lattice reduction algorithm of Lenstra, Lenstra and Lovász [161]; it is used together with Berlekamp's method. This polynomial-time algorithm is completed by a procedure which ensures that the Hensel lifting starts from a “good” modular image and that it ends “in time”. Solutions have also been found for the mentioned representation problem of the factorisation of multivariate polynomials. This is the second area where randomisation plays a crucial role in the design of effective algorithms. We note that in practice, the Berlekamp–Zassenhaus–Hensel algorithm proves more effective than the Lenstra–Lenstra–Lovász procedure. By contrast, while the problem of polynomial factorisation can be solved in polynomial time, the best proved algorithmic bound for the factorisation of an integer is (Pollard and Strassen) in the deterministic case and (Lenstra and Pomerance) in the probabilistic case, where .
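The simplest instance of Hensel's idea lifts a root rather than a full factorisation. The sketch below (our own minimal Python illustration, not the Berlekamp–Zassenhaus factor lifting; the function name is ours) refines a single modular image step by step, which is exactly the reconstruction principle described above.

```python
def hensel_lift_root(f, df, r, p, k):
    """Lift a simple root r of f modulo p to a root modulo p**k,
    one factor of p at a time (linear Hensel lifting, a Newton step)."""
    m = p
    for _ in range(k - 1):
        m *= p
        # f'(r) must be a unit modulo p, i.e. r must be a simple root
        r = (r - f(r) * pow(df(r), -1, m)) % m
    return r
```

For example, the root 2 of x² + 1 modulo 5 lifts to 7 modulo 25 and to 57 modulo 125; the single modular image determines the lifted root at every level.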
In fact, a new theory of heuristic or probabilistic methods in computer algebra is being born to avoid computational or memory explosion and to make algorithms with deterministically large running times more effective. In the case of probabilistic algorithms, the probability of inappropriate operations can be positive, which may result in an incorrect answer (Monte Carlo algorithms), or we may not get anything in polynomial time even though the answer, when we get one, is always correct (Las Vegas algorithms). Besides the above examples, nice results have been achieved in polynomial identity testing, irreducibility testing of polynomials, the determination of matrix normal forms (Frobenius, Hilbert, Smith), etc. Their role is likely to increase in the future.
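As an illustration of the Monte Carlo style, polynomial identity testing can be sketched as follows (an assumed Python illustration based on the Schwartz-Zippel lemma; the names and parameters are our own choices, not from the text): two polynomials are evaluated at random points modulo a large prime, so a "no" answer is always correct, while a "yes" answer is wrong only with negligible probability.

```python
import random

def probably_identical(p1, p2, n_vars, prime=2**61 - 1, trials=20):
    """Monte Carlo polynomial identity test: distinct polynomials of
    total degree d agree at a uniformly random point modulo a prime
    with probability at most d/prime (Schwartz-Zippel lemma)."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n_vars)]
        if p1(*point) % prime != p2(*point) % prime:
            return False          # a differing point is a certain "no"
    return True                   # "probably yes"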
So far in the chapter we gave an overview of the most important symbolic algorithms. We mentioned in the introduction that most computer algebra systems are also able to perform numeric computations: unlike in traditional systems, the precision can be set by the user. In many cases, it is useful to combine symbolic and numeric computations. Let us consider, for example, the symbolically computed power series solution of a differential equation. After truncation, evaluating the power series with the usual floating-point arithmetic at certain points, we get a numerical approximation of the solution. When the problem is an approximation of a physical problem, the attractiveness of symbolic computation is often lost, simply because it is too complicated or too slow, and it is not necessary or useful, since we are looking for a numerical solution. In other cases, when the problem cannot be dealt with using symbolic computation, the only way is numerical approximation. This may be the case when the existing symbolic algorithm does not find a closed solution (e.g. the integral of non-elementary functions, etc.), or when a symbolic algorithm for the specified problem does not exist. Although more and more numerical algorithms have symbolic equivalents, numerical procedures play an important role in computer algebra. Let us think of differentiation and integration: sometimes traditional algorithms—integral transformation, power series approximation, perturbation methods—can be the most effective.
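As a toy example of combining the two worlds (a Python sketch of our own; the equation y' = y and all names are chosen purely for illustration): the coefficient recurrence a_{k+1} = a_k/(k+1) is derived symbolically from the differential equation, and the truncated power series is then evaluated in ordinary floating-point arithmetic.

```python
def ode_series_value(x, terms=20):
    """Evaluate the truncated power-series solution of y' = y, y(0) = 1.
    The recurrence a_{k+1} = a_k / (k + 1) comes from substituting the
    series into the equation (the symbolic step); the partial sum is
    then accumulated numerically (the numeric step)."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)
    return total
```

At x = 1 the truncated series with 20 terms already agrees with e to machine precision, which is the typical use of such hybrid symbolic-numeric solutions.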
In the design of computer algebra algorithms, parallel architectures will play an increasing role in the future. Although many existing algorithms can be parallelised easily, it is not obvious that good sequential algorithms will perform optimally on parallel architectures as well: the optimal performance might be achieved by a completely different method.
The development of computer algebra systems is linked with the development of computer science and algorithmic mathematics. In the early period of computers, researchers of different fields began the development of the first computer algebra systems to facilitate and accelerate their symbolic computations; these systems, reconstructed and continuously updated, are present in their umpteenth versions today. General purpose computer algebra systems appeared in the seventies; they provide a wide range of built-in data structures, mathematical functions and algorithms, and try to serve a wide range of users. Because of their high demand for computational resources, their expansion became explosive at the beginning of the eighties, when microprocessor-based workstations appeared. Better hardware environments, more effective resource management, the use of system-independent high-level languages and, last but not least, socio-economic demands gradually transformed general purpose computer algebra systems into market products, which also resulted in better user interfaces and document preparation.
Below we list the most widely known general and special purpose computer algebra systems and libraries.
General purpose computer algebra systems:
AXIOM, DERIVE, FORM, GNU-CALC, JACAL, MACSYMA, MAXIMA, MAPLE, DISTRIBUTED MAPLE, MATHCAD, MATLAB SYMBOLIC MATH TOOLBOX, SCILAB, MAS, MATHEMATICA, MATHVIEW, MOCKMMA, MUPAD, REDUCE, RISA.
Algebra and number theory:
BERGMAN, COCOA, FELIX, FERMAT, GRB, KAN, MACAULAY, MAGMA, NUMBERS, PARI, SIMATH, SINGULAR.
Algebraic geometry:
CASA, GANITH.
Group theory:
GAP, LIE, MAGMA, SCHUR.
Tensor analysis:
CARTAN, FEYNCALC, GRG, GRTENSOR, MATHTENSOR, REDTEN, RICCI, TTC.
Computer algebra libraries:
APFLOAT, BIGNUM, GNU MP, KANT, LIDIA, NTL, SACLIB, UBASIC, WEYL, ZEN.
Most general purpose computer algebra systems are characterised by
interactivity,
knowledge of mathematical facts,
a high-level, declarative programming language with the possibility of functional programming and the knowledge of mathematical objects,
expansibility towards the operating system and other programs,
integration of symbolic and numeric computations,
automatic (optimised) C and Fortran code generation,
a graphical user interface,
two- and three-dimensional graphics and animation,
the possibility of editing text and automatic LaTeX conversion,
online help.
Computer algebra systems are also called mathematical expert systems. Today we can see an astonishing development of general purpose computer algebra systems, mainly because of their broad knowledge and wide application area. But it would be a mistake to underestimate special purpose systems, which play a very important role in many research fields and, in many cases, are easier to use and more effective due to their system of notation and the low-level programming language implementation of their algorithms. It is essential to choose the most appropriate computer algebra system for a specific problem.
Footnote. Declarative programming languages specify the desired result, unlike imperative languages, which describe how to obtain the result.
PROBLEMS

6-1
The length of coefficients in the series of remainders in a Euclidean division
Generate two pseudorandom polynomials of degree in with coefficients of decimal digits. Perform a single Euclidean division in (in ) and compute the ratio of the maximal coefficients of the remainder and the original polynomial (determined by the function ). Repeat the computation times and compute the average. What is the result? Repeat the experiment with .
6-2
Simulation of the Modular-Gcd-Smallprimes algorithm
Using simulation, give an estimation of the optimal value of the variable in the Modular-Gcd-Smallprimes algorithm. Use random polynomials of different degrees and coefficient magnitudes.
6-3
Modified pseudo-euclidean division
Let . Modify the pseudo-euclidean division in such a way that in the equation
instead of the exponent put the smallest value such that . Replace the procedures and in the Primitive-Euclidean algorithm by the obtained procedures and . Compare the amount of memory space required by the algorithms.
6-4
Construction of a reduced Gröbner basis
Design an algorithm that computes a reduced Gröbner basis from a given Gröbner basis .
6-5
Implementation of Hermite reduction
Implement Hermite reduction in a chosen computer algebra language.
6-6
Integration of rational functions
Write a program for the integration of rational functions.
CHAPTER NOTES

The algorithms Classical-Euclidean and Extended-Euclidean for nonnegative integers are described in [54]. A natural continuation of the theory of resultants leads to subresultants, which can help in reducing the growth of the coefficients in the Extended-Euclidean algorithm (see e.g. [82], [88]).
Gröbner bases were introduced by B. Buchberger in 1965 [35]. Several authors examined polynomial ideals before this. The most well-known is perhaps Hironaka, who used bases of ideals of power series to resolve singularities over . He was awarded a Fields Medal for his work. His method, however, was not constructive. Gröbner bases have been generalised to many algebraic structures in the last two decades.
The bases of differential algebra were laid by J. F. Ritt in 1948 [213]. The squarefree factorisation algorithm used in symbolic integration can be found, for example, in the books [82], [88]. The importance of choosing the smallest possible extension degree in Hermite reduction is illustrated by Example 11.11 in [88], where the splitting field has very large degree but the integral can be expressed in an extension of degree 2. The proof of the Rothstein–Trager integration algorithm can be found in [82] (Theorem 22.8). We note that the algorithm was found independently by Rothstein and Trager. The proof of the correctness of the Lazard–Rioboo–Trager formula, the analysis of the running time of the Integrate-Logarithmic-Part algorithm, an overview of the procedures that deal with the difficulties of algebraic extension steps, the determination of the hyperexponential integral (if one exists) of a hyperexponential element over , the proof of Liouville's principle and the proofs of the statements connected to the Risch algorithm can be found in the book [82].
There are many books and publications available on computer algebra and related topics. The interested reader will find mathematical description in the following general works: Caviness [42], Davenport et al. [60], von zur Gathen et al. [82], Geddes et al. [88], Knuth [144], [145], [146], Mignotte [181], Mishra [184], Pavelle et al. [201], Winkler [276].
The computer-oriented reader will find further information on computer algebra in Christensen [44], Gonnet and Gruntz [96], Harper et al. [107], and on the World Wide Web.
A wide range of books and articles deal with applications, e.g. Akritas [10], Cohen et al. (eds.) [50], [49], Grossman (ed.) [101], Hearn (ed.) [110], Kovács [148] and Odlyzko [195].
For the role of computer algebra systems in education see, for example, the works of Karian [138] and Uhl [260].
Conference proceedings: AAECC, DISCO, EUROCAL, EUROSAM, ISSAC and SYMSAC.
Computer algebra journals: Journal of Symbolic Computation (Academic Press); Applicable Algebra in Engineering, Communication and Computing (Springer-Verlag); SIGSAM Bulletin (ACM Press).
The Department of Computer Algebra of the Eötvös Loránd University, Budapest, uses the works [82], [88], [181], [276] as a basis in education.