We have plenty of tools that support the efficient coding and implementation of numerical algorithms. One aim of such developments is to free programmers from rewriting programs for frequently occurring problems. This is usually done by providing safe, reliable and standardised routines that can be downloaded from (public) program libraries. We mention the LINPACK, EISPACK, LAPACK, VISUAL NUMERICS (formerly IMSL) and NAG libraries. Another line of development is software that works as a programming language and makes programming very easy. Such software systems are MATLAB and SciLab.
The main purpose of the BLAS (Basic Linear Algebra Subprograms) programs is the standardisation and efficient implementation of the most frequent matrix-vector operations. Although the BLAS routines were published in FORTRAN, they can be accessed in optimised machine code form as well. The BLAS routines have three levels:
– BLAS 1 (1979),
– BLAS 2 (1988),
– BLAS 3 (1989).
These levels correspond to the computational cost of the implemented matrix operations. The BLAS routines are considered the best implementations of the given matrix operations. The selection of the levels and of the individual BLAS routines strongly influences the efficiency of a program. A sparse version of BLAS also exists.
We note that the BLAS 3 routines were developed mainly for block parallel algorithms. The standard linear algebra packages LINPACK, EISPACK and LAPACK are built from BLAS routines. The parallel versions can be found in the SCALAPACK package. These programs can be found in the public NETLIB library (http://www.netlib.org/).
Let $\alpha \in \mathbb{R}$ and $x, y \in \mathbb{R}^n$. The BLAS 1 routines are the programs of the most important vector operations (e.g. $x + y$, $\alpha x$, $x^T y$), the computation of $\|x\|_2$, the swapping of variables, rotations and the saxpy operation, which is defined by
$$z = \alpha x + y.$$
The word saxpy means "scalar alpha $x$ plus $y$". The saxpy operation is implemented in the following way.
Saxpy($\alpha, x, y$)
 1  $n \leftarrow$ elements$[x]$
 2  FOR $i \leftarrow 1$ TO $n$
 3    DO $z_i \leftarrow \alpha x_i + y_i$
 4  RETURN $z$
The saxpy is a software driven operation. The cost of the BLAS 1 routines is $O(n)$ flops.
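As an illustration, the saxpy operation can be sketched in plain Python (the function name and the list representation of vectors are our own choices; real BLAS routines operate on typed arrays, often in place):

```python
def saxpy(alpha, x, y):
    """Toy BLAS 1 saxpy: return z with z[i] = alpha*x[i] + y[i].

    Costs O(n) flops, matching the level-1 routines.
    """
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    return [alpha * xi + yi for xi, yi in zip(x, y)]
```

For example, saxpy(2.0, [1.0, 2.0], [10.0, 20.0]) yields [12.0, 24.0].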
The matrix-vector operations of BLAS 2 require $O(n^2)$ flops. Typical operations are $y = \alpha Ax + \beta y$, $y = \alpha A^T x + \beta y$, $A = A + xy^T$, $x = T^{-1}x$ ($T$ triangular) and their variants. Certain operations work only with triangular matrices. We analyse two operations in detail. The "outer or dyadic product" update
$$A = A + xy^T \quad (A \in \mathbb{R}^{m \times n},\ x \in \mathbb{R}^m,\ y \in \mathbb{R}^n)$$
can be implemented in two ways.
The rowwise or "$ij$" variant:
Outer-Product-Update-Version-"ij"($A, x, y$)
 1  $m \leftarrow$ rows$[A]$
 2  FOR $i \leftarrow 1$ TO $m$
 3    DO $A[i,:] \leftarrow A[i,:] + x_i y^T$
 4  RETURN $A$
The notation ":" denotes all allowed indices. In our case this means the indices $1, 2, \ldots, n$. Thus $A[i,:]$ denotes the $i$th row of matrix $A$.
The columnwise or "$ji$" variant:
Outer-Product-Update-Version-"ji"($A, x, y$)
 1  $n \leftarrow$ columns$[A]$
 2  FOR $j \leftarrow 1$ TO $n$
 3    DO $A[:,j] \leftarrow A[:,j] + y_j x$
 4  RETURN $A$
Here $A[:,j]$ denotes the $j$th column of matrix $A$. Observe that both variants are based on the saxpy operation.
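For concreteness, both variants of the outer product update can be sketched in Python (a toy illustration with list-of-lists matrices; the function names are ours):

```python
def outer_update_rowwise(A, x, y):
    """A <- A + x*y^T, updating one row of A per step (the "ij" variant)."""
    m, n = len(A), len(A[0])
    for i in range(m):
        # each row update A[i,:] <- A[i,:] + x[i]*y^T is a saxpy
        for j in range(n):
            A[i][j] += x[i] * y[j]
    return A

def outer_update_columnwise(A, x, y):
    """A <- A + x*y^T, updating one column of A per step (the "ji" variant)."""
    m, n = len(A), len(A[0])
    for j in range(n):
        # each column update A[:,j] <- A[:,j] + y[j]*x is a saxpy
        for i in range(m):
            A[i][j] += y[j] * x[i]
    return A
```

Both variants produce the same matrix; they differ only in the memory access pattern, which is what matters for efficiency.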
The gaxpy operation is defined by
$$z = y + Ax \quad (A \in \mathbb{R}^{m \times n},\ x \in \mathbb{R}^n,\ y \in \mathbb{R}^m).$$
The word gaxpy means "general $A$ $x$ plus $y$". The gaxpy operation is also software driven and implemented in the following way:
Gaxpy($A, x, y$)
 1  $n \leftarrow$ columns$[A]$
 2  $z \leftarrow y$
 3  FOR $j \leftarrow 1$ TO $n$
 4    DO $z \leftarrow z + x_j A[:,j]$
 5  RETURN $z$
Observe that the computation is done columnwise and the gaxpy operation is essentially a generalised saxpy.
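A minimal Python sketch of the gaxpy operation, emphasising the columnwise accumulation (the name and the list representation are ours):

```python
def gaxpy(A, x, y):
    """Return z = y + A*x, accumulated column by column.

    Each pass of the outer loop performs a saxpy:
    z <- z + x[j] * A[:, j], so gaxpy is a generalised saxpy.
    """
    m, n = len(A), len(A[0])
    z = list(y)
    for j in range(n):
        for i in range(m):
            z[i] += x[j] * A[i][j]
    return z
```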
These routines are the implementations of matrix-matrix and matrix-vector operations such as $C = \alpha AB + \beta C$, $C = \alpha AB^T + \beta C$, $B = \alpha T^{-1}B$ ($T$ is upper triangular) and their variants; their cost is $O(n^3)$ flops. BLAS 3 operations can be implemented in several forms. For example, the matrix product $C = AB$ can be implemented in at least three ways. Let $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{r \times n}$.
Matrix-Product-Dot-Version($A, B$)
 1  $m \leftarrow$ rows$[A]$
 2  $r \leftarrow$ columns$[A]$
 3  $n \leftarrow$ columns$[B]$
 4  $C[1:m, 1:n] \leftarrow 0$
 5  FOR $i \leftarrow 1$ TO $m$
 6    DO FOR $j \leftarrow 1$ TO $n$
 7      DO FOR $k \leftarrow 1$ TO $r$
 8        DO $C[i,j] \leftarrow C[i,j] + A[i,k]B[k,j]$
 9  RETURN $C$
This algorithm computes the entry $C[i,j]$ as the dot (inner) product of the $i$th row of $A$ and the $j$th column of $B$. This corresponds to the original definition of the matrix product.
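A direct Python transcription of the dot-product ("ijk") version may look as follows (a toy sketch with list-of-lists matrices; optimised BLAS 3 codes use blocking for cache efficiency):

```python
def matmul_dot(A, B):
    """C = A*B, with C[i][j] formed as the dot product of
    row i of A and column j of B (the "ijk" loop ordering)."""
    m, r, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for k in range(r):
                C[i][j] += A[i][k] * B[k][j]
    return C
```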
Now let $A$ and $B$ be partitioned columnwise as follows:
$$A = [a_1, \ldots, a_r] \quad (a_i \in \mathbb{R}^m), \qquad B = [b_1, \ldots, b_n] \quad (b_j \in \mathbb{R}^r).$$
Then we can write the $j$th column of $C = AB$ as a linear combination of the columns of $A$, that is
$$C[:,j] = \sum_{k=1}^{r} b_{kj} a_k \quad (j = 1, \ldots, n).$$
Hence the product can be implemented with saxpy operations.
Matrix-Product-Gaxpy-Variant($A, B$)
 1  $m \leftarrow$ rows$[A]$
 2  $r \leftarrow$ columns$[A]$
 3  $n \leftarrow$ columns$[B]$
 4  $C[1:m, 1:n] \leftarrow 0$
 5  FOR $j \leftarrow 1$ TO $n$
 6    DO FOR $k \leftarrow 1$ TO $r$
 7      DO FOR $i \leftarrow 1$ TO $m$
 8        DO $C[i,j] \leftarrow C[i,j] + A[i,k]B[k,j]$
 9  RETURN $C$
The following equivalent form of the "$jki$" algorithm shows that it is indeed a gaxpy-based process.
Matrix-Product-with-Gaxpy-Call($A, B$)
 1  $m \leftarrow$ rows$[A]$
 2  $n \leftarrow$ columns$[B]$
 3  $C[1:m, 1:n] \leftarrow 0$
 4  FOR $j \leftarrow 1$ TO $n$
 5    DO $C[:,j] \leftarrow$ Gaxpy$(A, B[:,j], C[:,j])$
 6  RETURN $C$
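The column-by-column construction can be sketched in Python by literally calling a toy gaxpy for each column of the product (both function names are ours):

```python
def gaxpy(A, x, y):
    """Return y + A*x, accumulated column by column (toy version)."""
    z = list(y)
    for j in range(len(A[0])):
        for i in range(len(A)):
            z[i] += x[j] * A[i][j]
    return z

def matmul_gaxpy(A, B):
    """C = A*B built one column at a time: C[:, j] = gaxpy(A, B[:, j], 0)."""
    m, n = len(A), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = gaxpy(A, [row[j] for row in B], [0.0] * m)
        for i in range(m):
            C[i][j] = col[i]
    return C
```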
Consider now the partitions $A = [a_1, \ldots, a_r]$ ($a_k \in \mathbb{R}^m$) and
$$B = \begin{bmatrix} b_1^T \\ \vdots \\ b_r^T \end{bmatrix} \quad (b_k \in \mathbb{R}^n).$$
Then $C = AB = \sum_{k=1}^{r} a_k b_k^T$.
Matrix-Product-Outer-Product-Variant($A, B$)
 1  $m \leftarrow$ rows$[A]$
 2  $r \leftarrow$ columns$[A]$
 3  $n \leftarrow$ columns$[B]$
 4  $C[1:m, 1:n] \leftarrow 0$
 5  FOR $k \leftarrow 1$ TO $r$
 6    DO FOR $j \leftarrow 1$ TO $n$
 7      DO FOR $i \leftarrow 1$ TO $m$
 8        DO $C[i,j] \leftarrow C[i,j] + A[i,k]B[k,j]$
 9  RETURN $C$
The innermost loop realizes a saxpy operation: it adds a multiple of $a_k$ to the $j$th column of matrix $C$.
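Finally, the outer product ("kji") variant in the same toy Python style (the function name is ours):

```python
def matmul_outer(A, B):
    """C = A*B accumulated as a sum of r rank-one (outer product)
    updates: C += A[:, k] * B[k, :] (the "kji" loop ordering)."""
    m, r, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for k in range(r):
        for j in range(n):
            for i in range(m):
                # innermost loop is a saxpy: add B[k][j] * A[:, k] to C[:, j]
                C[i][j] += A[i][k] * B[k][j]
    return C
```

All three orderings perform the same $2mrn$ flops; they differ only in how they traverse memory.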
These are programming tools that support easy programming in a concise (possibly mathematical) form within an integrated program development system. Such systems were developed primarily for solving mathematical problems. By now they have been extended so that they can be applied in many other fields. For example, Nokia uses MATLAB in the testing and quality control of mobile phones. We give a short review of MATLAB in the next section. We also mention the widely used MAPLE, DERIVE and MATHEMATICA systems.
The MATLAB software was named after the expression MATrix LABoratory. The name indicates that matrix operations are very easy to carry out. The initial versions of MATLAB had only one data type: the complex matrix. In later versions higher-dimensional arrays, cells, records and objects also appeared. MATLAB can be learned quite easily and even a beginner can write programs for relatively complicated problems.
The coding of matrix operations is similar to their standard mathematical form. For example, if A and B are two matrices of the same size, then their sum is given by the command C = A + B. As a programming language, MATLAB contains only four control structures known from other programming languages:
– the simple statement variable = expression,
– the if statement of the form
  if expression, commands {else commands} end,
– the for loop of the form
  for the values of the loop variable, commands end,
– the while loop of the form
  while expression, commands end.
MATLAB has an extremely large number of built-in functions that help efficient programming. We mention the following ones as a sample.
– max(A) selects the maximum element in every column of A,
– [V,D] = eig(A) returns the approximate eigenvalues and eigenvectors of A,
– the command x = A\b returns the numerical solution of the linear system Ax = b.
The entrywise operations and the partitioning of matrices can be done very efficiently in MATLAB. For example, the statement
A([2,3],:) = 1./A([3,2],:)
exchanges the second and third rows of A while taking the reciprocal of each of their elements.
The above examples only illustrate the possibilities and easy programming of MATLAB. These examples would require much more programming effort in other languages, say e.g. in PASCAL. The built-in functions of MATLAB can easily be supplemented with other programs.
The newer versions of MATLAB include more and more functions and special libraries (toolboxes) to solve special problems such as optimisation, statistics and so on.
There is a built-in automatic technique to store and handle sparse matrices, which makes MATLAB competitive in solving large computational problems. The recent versions of MATLAB offer very rich graphics capabilities as well. There is also an extra interval arithmetic package that can be downloaded from the Web.
There is a possibility to incorporate certain C and FORTRAN programs into MATLAB. Finally, we mention that the system has an extremely well-written help system.
PROBLEMS
12-1
Without overflow
Write a MATLAB program that computes the norm $\|x\|_2 = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}$ without overflow in all cases when the result itself does not overflow. It is also required that the error of the final result may not be greater than the error of the original formula.
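A sketch of one standard technique that meets these requirements, shown in Python rather than MATLAB (the function name is ours; the scaling trick is the classical one used in overflow-safe Euclidean norm routines such as BLAS's dnrm2):

```python
def norm2_no_overflow(x):
    """Compute the Euclidean norm of x while avoiding overflow.

    Scaling by the largest absolute entry keeps every squared
    term in [0, 1], so the intermediate sum cannot overflow
    whenever the final result is representable.
    """
    s = max((abs(xi) for xi in x), default=0.0)
    if s == 0.0:
        return 0.0
    return s * sum((xi / s) ** 2 for xi in x) ** 0.5
```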
12-2
Estimate
Equation has the solution . The perturbed equation has the solutions . Give an estimate for the perturbation .
12-3
Double word length
Consider an arithmetic system that has double word length, such that every number represented with $2t$ digits is stored in two words of $t$ digits each. Assume that the computer can only add numbers with $t$ digits. Furthermore, assume that the machine can recognise overflow.
(i) Find an algorithm that adds two positive numbers of $2t$-digit length.
(ii) If the representation of numbers requires a sign digit for all numbers, then modify algorithm (i) so that it can add two numbers of the same sign (both positive or both negative). We may assume that the sum does not overflow.
12-4
Auchmuty theorem
Write a MATLAB program for the Auchmuty error estimate (see Theorem 12.22) and perform the following numerical testing.
(i) Solve the linear systems , where is a given matrix, , () are random vectors such that . Compare the true errors (), and the estimated errors , where is the approximate solution of . What are the minimum, maximum and average of the numbers ? Use graphics for the presentation of the results. Suggested values are , and .
(ii) Analyse the effect of condition number and size.
(iii) Repeat problems (i) and (ii) using LINPACK and BLAS.
12-5
Hilbert matrix
Consider the linear system $H_4 x = b$, where $b$ is given and $H_4$ is the fourth order Hilbert matrix, that is $h_{ij} = 1/(i+j-1)$. $H_4$ is ill-conditioned. The inverse of $H_4$ is approximated by
Thus an approximation of the true solution is given by . Although the true solution is also an integer vector, this approximation is not acceptable. Apply iterative refinement with the approximate inverse to find an acceptable integer solution.
12-6
Consistent norm
Let $\|\cdot\|$ be a consistent norm and consider the linear system $Ax = b$.
(i) Prove that if $A + E$ is singular, then $\|E\| \ge 1/\|A^{-1}\|$.
(ii) Show that for the 2norm equality holds in (i), if and .
(iii) Using the result of (i) give a lower bound to , if
12-7
Cholesky method
Use the Cholesky method to solve the linear system , where
Also give the exact Cholesky decomposition and the true solution of . The approximate Cholesky factor satisfies the relation . It can be proved that in a floating point arithmetic with $t$-digit mantissa and base $\beta$ the entries of satisfy the inequality , where
Give a bound for the relative error of the approximate solution , if and (IBM 3033).
12-8
Bauer-Fike theorem
Let
(i) Analyze the perturbation of the eigenvalues for .
(ii) Compare the estimate of the Bauer-Fike theorem to the matrix .
12-9
Eigenvalues
Using the MATLAB eig routine, compute the eigenvalues of various (random) matrices of order . Also compute the eigenvalues of the perturbed matrices , where are random matrices with entries from the interval (). What is the maximum perturbation of the eigenvalues? How precise is the Bauer-Fike estimate? Suggested values are and . How do the results depend on the condition number and the order ? Display the maximum perturbations and the Bauer-Fike estimates graphically.
CHAPTER NOTES
The a posteriori error estimates of linear algebraic systems are not completely reliable. Demmel, Diament and Malajovich [61] showed that for condition number estimators there are always cases when the estimate is unreliable (the error of the estimate exceeds a given order). The first appearance of iterative refinement is due to Fox, Goodwin, Turing and Wilkinson (1946). Experience shows that the decrease of the residual error is not monotone.
Young [279] and Hageman and Young [103] give an excellent survey of the theory and application of iterative methods. Barrett, Berry et al. [23] give a software-oriented survey of the subject. Frommer [78] concentrates on parallel computations.
The convergence of the QR method is a delicate matter. It is analyzed in great depth, and much better results than Theorem 12.34 exist in the literature. There are QR-like methods that involve double shifting. Batterson [24] showed that there exists a Hessenberg matrix with complex eigenvalues such that convergence cannot be achieved even with multiple shifting.
Several other methods are known for solving eigenvalue problems (see, e.g., [269], [273]). The Cholesky iteration is one of the best known ones. It is very effective on positive definite Hermitian matrices: the method computes the Cholesky decomposition $A_k = L_k L_k^H$ and sets $A_{k+1} = L_k^H L_k$.