Slide 1: Lecture on Numerical Analysis, Dr.-Ing. Michael Dumbser, 01/12/2008
Slide 2: Linear Algebra
A standard task in linear algebra is finding the solution of a system of linear equations of the form A X = B. The solution to this task is given by the so-called Gauß algorithm: define a new augmented matrix C = (A | B). The matrix C is then modified by a sequence of operations on its rows to transform its left part into the unit matrix. The admissible row operations are:
(1) addition / subtraction of multiples of rows
(2) multiplication / division of a row by a constant factor
(3) exchange of rows
When the left part of the matrix C has been transformed into the unit matrix, the right part contains the solution X of the equation system.
Slide 3: The Gauß Algorithm I
In detail, the Gauß algorithm for the N x N matrix A and the N x M matrix B proceeds as follows. Part 1 is a forward loop, which transforms the matrix A into a normalized upper triangular form.
Loop over all rows; the loop index i runs from 1 to N:
(1) Check the coefficient C(i,i) on the diagonal of the row.
(2) IF it is zero, look for the next row j with row index j > i that has a nonzero element in column i. IF there is no such row, the matrix is singular: EXIT. ELSE exchange rows i and j in the matrix C.
(3) Divide the whole row i by C(i,i). Its first non-zero entry is now equal to 1.
(4) Subtract a suitable multiple of row i from all rows j > i in order to eliminate all entries in column i of those rows.
end loop
Slide 4: The Gauß Algorithm II
Part 2 is a very simple backward loop, which transforms the matrix A into the unit matrix.
Loop over all rows; the loop index i runs from N down to 1:
Subtract a suitable multiple of row i from all rows j < i in order to eliminate all entries in column i of those rows.
end loop
All operations have been performed on the augmented matrix C. The type of row operation is completely determined by the left part of matrix C, i.e. by the original matrix A; the right part, i.e. matrix B, is transformed with the same operations in a passive way. At the end of the Gauß algorithm, the right part of the matrix C, i.e. where B was located initially, contains the solution X of the equation system. To compute the inverse of a matrix using the Gauß algorithm, the right-hand side B must simply be set to the N x N unit matrix, since A A^(-1) = I.
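The forward and backward loops above map almost directly onto code. Below is a minimal sketch in Python with plain lists (the course itself uses MATLAB; the function name `gauss` and the exact zero-pivot test are illustrative choices, not part of the slides):

```python
def gauss(A, B):
    """Solve A X = B by Gauss-Jordan elimination on the augmented matrix C = (A | B)."""
    n = len(A)
    # Build the augmented matrix C = (A | B), working on a copy.
    C = [list(A[i]) + list(B[i]) for i in range(n)]

    # Part 1: forward loop -> normalized upper triangular form.
    for i in range(n):
        # If the pivot C[i][i] is zero, look for a row j > i with a nonzero entry in column i.
        if C[i][i] == 0.0:
            for j in range(i + 1, n):
                if C[j][i] != 0.0:
                    C[i], C[j] = C[j], C[i]   # row exchange (admissible operation 3)
                    break
            else:
                raise ValueError("matrix is singular")
        # Divide row i by its pivot so the leading entry becomes 1 (operation 2).
        piv = C[i][i]
        C[i] = [c / piv for c in C[i]]
        # Eliminate column i in all rows below (operation 1).
        for j in range(i + 1, n):
            f = C[j][i]
            C[j] = [cj - f * ci for cj, ci in zip(C[j], C[i])]

    # Part 2: backward loop -> unit matrix in the left part of C.
    for i in range(n - 1, -1, -1):
        for j in range(i - 1, -1, -1):
            f = C[j][i]
            C[j] = [cj - f * ci for cj, ci in zip(C[j], C[i])]

    # The right part of C now holds the solution X.
    return [row[n:] for row in C]
```

Passing the N x N unit matrix as B returns the inverse of A, exactly as described on the slide.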
Slide 5: Exercise 1
Write a MATLAB function gauss.m that accepts a general N x N matrix A and a general N x M matrix B as input arguments and that returns the N x M matrix X that solves the matrix equation A X = B.
(1) Use the Gauß algorithm to solve the given equation system.
(2) Use the Gauß algorithm to compute the inverse of a random 5 x 5 matrix A generated with the MATLAB command rand as follows: A = rand(5).
Hint: to generate a 5 x 5 unit matrix, use the following MATLAB command: B = eye(5).
Slide 6: A Special Case of Gauß Elimination: The Thomas Algorithm
In the case where the matrix A is tridiagonal, i.e. has nonzero entries only on the main diagonal and on the two diagonals directly above and below it, there is a very fast and efficient special case of the Gauß algorithm, called the Thomas algorithm. Since it is a variation of the Gauß algorithm, the Thomas algorithm is a direct method to solve general linear tridiagonal systems. Like the original Gauß algorithm, it proceeds in two stages: one forward elimination and one back substitution.
Slide 7: The Thomas Algorithm
Part I: Forward elimination.
Part II: Back substitution.
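Written out, the two stages can be sketched as follows (Python with plain lists rather than the course's MATLAB; the vector naming a, b, c, d follows Exercise 2 below, with a[0] and c[N-1] unused):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.
    a = lower diagonal, b = main diagonal, c = upper diagonal, d = right-hand side,
    all four vectors of equal length N (a[0] and c[-1] are ignored). O(N) operations."""
    n = len(d)
    bp = list(b)
    dp = list(d)
    # Part I: forward elimination - remove the lower diagonal.
    for i in range(1, n):
        w = a[i] / bp[i - 1]
        bp[i] = bp[i] - w * c[i - 1]
        dp[i] = dp[i] - w * dp[i - 1]
    # Part II: back substitution on the remaining upper bidiagonal system.
    x = [0.0] * n
    x[n - 1] = dp[n - 1] / bp[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x
```

In contrast to the full Gauß algorithm, only the three diagonals are stored and touched, which is what makes the method so cheap.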
Slide 8: Exercise 2
Write a MATLAB function Thomas.m that realizes the Thomas algorithm and that accepts as input four vectors of equal length a, b, c and d, where a contains the lower diagonal elements, b contains the diagonal elements, c contains the upper diagonal elements and the vector d contains the right-hand side of the linear equation system Ax = d. The output of Thomas.m is the solution vector x that satisfies the above-mentioned system.
To test your code, proceed in the following steps:
(1) Generate the vectors a, b, c and d randomly using the MATLAB command rand.
(2) Assemble the full matrix A using the vectors a, b and c.
(3) Solve the system Ax = d directly using MATLAB.
(4) Solve the system Ax = d using the function call x = thomas(a,b,c,d).
Slide 9: The Conjugate Gradient Method
For large matrices A, the number of operations of the Gauß algorithm grows with N^3. For large but sparse matrices (many entries are zero), it may be more efficient to use an iterative scheme for the solution of the system. For symmetric, positive definite matrices A, a very efficient algorithm can be constructed: the so-called conjugate gradient (CG) method, which we have already seen for nonlinear, multivariate optimization.
We start with some definitions:
(1) A matrix A is symmetric positive definite if A^T = A and x^T A x > 0 for all x != 0.
(2) Two vectors u and v are called conjugate with respect to the matrix A if u^T A v = 0.
(3) If we knew N conjugate directions p_j (j = 1 .. N), we could write the solution x of A x = b as x = sum_j alpha_j p_j. The coefficients alpha_j are given by alpha_j = (p_j^T b) / (p_j^T A p_j).
(4) The term r = b - A x is called the residual.
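Definition (3) can be checked on a small hand-picked example: for a 2 x 2 symmetric positive definite matrix and two directions that are conjugate with respect to it, summing alpha_j p_j reproduces the solution of A x = b. The numbers below are chosen purely for illustration and do not come from the slides:

```python
# Definition (3) in action: with A-conjugate directions p_j, the solution of A x = b
# is x = sum_j alpha_j p_j with alpha_j = (p_j . b) / (p_j . A p_j).
A = [[2.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [3.0, 5.0]
p = [[1.0, 0.0], [1.0, -2.0]]     # p[0]^T A p[1] = 0, so they are conjugate w.r.t. A

def matvec(M, v):
    return [sum(Mi[k] * v[k] for k in range(len(v))) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Conjugacy check (definition 2), then the expansion of definition (3).
assert dot(p[0], matvec(A, p[1])) == 0.0
x = [0.0, 0.0]
for pj in p:
    alpha = dot(pj, b) / dot(pj, matvec(A, pj))
    x = [xi + alpha * pji for xi, pji in zip(x, pj)]
# x now solves A x = b, so the residual b - A x vanishes.
```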
Slide 10: The Conjugate Gradient Method
The idea of the CG method is to minimize the functional g(x) = 1/2 x^T A x - b^T x. Its gradient is grad g(x) = A x - b = -r. Using the steepest descent method in direction p, we obtain the minimum of g starting from x_k as follows: x_(k+1) = x_k + alpha p with alpha = (p^T r_k) / (p^T A p), where r_k = b - A x_k.
Slide 11: The Conjugate Gradient Method
A preliminary version of the conjugate gradient method now reads as follows. From definition (3) we know that the algorithm will terminate with the exact solution after at most N iterations.
DO k = 0 ... N-1
  1D minimization along direction p_k.
  Move the point x into the minimum along direction p_k.
  Compute the new residual (= negative gradient = steepest descent direction).
  Do not move in the steepest descent direction, but compute a new conjugate direction p_(k+1) that takes into account all the previous search directions. As in nonlinear minimization, this guarantees faster convergence.
ENDDO
Slide 12: The Conjugate Gradient Method
After some algebra, the final conjugate gradient method is given as follows:
r_0 = b - A x_0,  p_0 = r_0
DO k = 0 ... N-1
  alpha_k = (r_k^T r_k) / (p_k^T A p_k)
  x_(k+1) = x_k + alpha_k p_k
  r_(k+1) = r_k - alpha_k A p_k
  beta_k  = (r_(k+1)^T r_(k+1)) / (r_k^T r_k)
  p_(k+1) = r_(k+1) + beta_k p_k
ENDDO
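The final algorithm translates almost line by line into code. A minimal sketch in Python with plain lists (the course material itself works in MATLAB; the early-exit tolerance is an illustrative addition for finite-precision arithmetic):

```python
def cg(A, b, tol=1e-12):
    """Conjugate gradient method for a symmetric positive definite matrix A.
    Terminates after at most N iterations, or earlier once the residual is small."""
    n = len(b)
    matvec = lambda v: [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = list(b)          # r_0 = b - A x_0 with x_0 = 0
    p = list(r)          # first search direction = steepest descent direction
    rs = dot(r, r)
    for _ in range(n):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)                       # 1D minimization along p_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        beta = rs_new / rs                            # new conjugate direction
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Note that each iteration needs only one matrix-vector product, which is why the method pays off for large sparse systems.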
Slide 13: The GMRES Method of Saad and Schultz (1986)
The GMRES method of Saad and Schultz minimizes the residual norm ||b - A x_j|| over the j-th Krylov subspace. The outer loop has the following structure:
for j = [1:N]
  compute w = A v_j and orthogonalize it against the basis vectors v_i, i = 1...j, storing the coefficients h(i,j); normalize the remainder to obtain v_(j+1)
  for i = [1:j-1]
    apply the previously stored Givens rotations to the new column h(:,j)
  end
  compute and apply a new Givens rotation that eliminates h(j+1,j), updating the right-hand side g as well
  if( g_(j+1) > tol ) continue (see next page)
end
Slide 14: The GMRES Method of Saad and Schultz (1986)
...continuation of the previous page:
for j = [1:N]
  if( g_(j+1) > tol )
    continue the iteration
  else
    for i = [j:-1:1]
      compute y_i by back substitution in the triangular system
    end
    assemble the solution x from the basis vectors and the coefficients y, then stop
  end
end
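Putting both pages together, a compact GMRES sketch with Arnoldi orthogonalization, Givens rotations and a final back substitution might read as follows (Python with plain lists, dense matrices only; the names and the breakdown threshold are illustrative, not from the slides):

```python
import math

def gmres(A, b, tol=1e-10):
    """GMRES (Saad & Schultz 1986) for a small dense N x N system A x = b, x_0 = 0."""
    n = len(b)
    matvec = lambda v: [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    nrm = lambda v: math.sqrt(dot(v, v))

    beta = nrm(b)
    V = [[bi / beta for bi in b]]   # Krylov basis vectors v_1, v_2, ...
    H = []                          # columns of the upper Hessenberg matrix
    cs, sn = [], []                 # stored Givens rotations
    g = [beta]                      # right-hand side of the least-squares problem
    jmax = 0
    for j in range(n):
        # Arnoldi step: orthogonalize A v_j against all previous basis vectors.
        w = matvec(V[j])
        h = []
        for i in range(j + 1):
            hij = dot(w, V[i])
            h.append(hij)
            w = [wk - hij * vk for wk, vk in zip(w, V[i])]
        h.append(nrm(w))
        if h[j + 1] > 1e-14:
            V.append([wk / h[j + 1] for wk in w])
        # Apply the previously stored Givens rotations to the new column.
        for i in range(j):
            t = cs[i] * h[i] + sn[i] * h[i + 1]
            h[i + 1] = -sn[i] * h[i] + cs[i] * h[i + 1]
            h[i] = t
        # New rotation that annihilates h(j+1,j); update the right-hand side g.
        r = math.hypot(h[j], h[j + 1])
        cs.append(h[j] / r)
        sn.append(h[j + 1] / r)
        h[j], h[j + 1] = r, 0.0
        H.append(h)
        g.append(-sn[j] * g[j])
        g[j] = cs[j] * g[j]
        jmax = j + 1
        if abs(g[j + 1]) < tol:     # |g_(j+1)| is the current residual norm
            break
    # Back substitution in the triangular least-squares system.
    y = [0.0] * jmax
    for i in range(jmax - 1, -1, -1):
        s = g[i] - sum(H[k][i] * y[k] for k in range(i + 1, jmax))
        y[i] = s / H[i][i]
    # Assemble the solution x = V y.
    return [sum(y[k] * V[k][i] for k in range(jmax)) for i in range(n)]
```

Unlike CG, GMRES does not require A to be symmetric, at the price of storing the whole Krylov basis V.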
Slide 15: Exercise 3
Write a MATLAB function CG.m that solves the system Ax = b. The function should check whether A satisfies the necessary symmetry property for the CG method. Solve the system with the matrix and right-hand side given on the slide.
Slide 16: The Linear Least-Squares Method
In the case where we have more equations than unknowns in the linear system Ax = b, we call the equation system an overdetermined system. Typical applications for such overdetermined systems can be found in data analysis (linear regression), where so-called best-fit curves have to be computed, e.g. from observations or experimental data. In this case, the matrix A is no longer an N x N square matrix, but an M x N rectangular matrix with M > N. In general, such systems do not have a solution in the original sense of the equation. However, according to ideas of Adrien-Marie Legendre and Carl Friedrich Gauß, we can try to find the best possible solution of the system by minimizing the following goal function: g(x) = (Ax - b)^T (Ax - b).
Slide 17: The Linear Least-Squares Method
The minimum of g(x) can be computed analytically using the differential criterion for a minimum, grad g(x) = 2 A^T (Ax - b) = 0. The least-squares solution of the overdetermined equation system can therefore be found by solving the so-called normal equation A^T A x = A^T b, i.e. x = (A^T A)^(-1) A^T b. The matrix (A^T A)^(-1) A^T is called the pseudo-inverse of the M x N matrix A.
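A direct translation of the normal-equation approach: assemble A^T A and A^T b, then solve the small N x N system with Gaussian elimination (a Python sketch with plain lists; the exercises use MATLAB, and the partial pivoting here is an illustrative robustness choice):

```python
def lsq(A, b):
    """Least-squares solution of the overdetermined M x N system A x = b (M > N)
    via the normal equation A^T A x = A^T b."""
    m, n = len(A), len(A[0])
    # Assemble A^T A (n x n) and A^T b (n).
    ATA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    ATb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the small normal-equation system.
    C = [row + [rhs] for row, rhs in zip(ATA, ATb)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(C[r][i]))
        C[i], C[piv] = C[piv], C[i]
        for j in range(i + 1, n):
            f = C[j][i] / C[i][i]
            C[j] = [cj - f * ci for cj, ci in zip(C[j], C[i])]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (C[i][n] - sum(C[i][j] * x[j] for j in range(i + 1, n))) / C[i][i]
    return x
```

For a best-fit line through the points (0,1), (1,2), (2,3), each row of A is (1, x_i) and b holds the y_i, which is exactly the linear-regression setting mentioned on slide 16.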
Slide 19: Exercise 4
Write a MATLAB script LSQ.m that computes the coefficients of the parabola for an oblique throw from position (x0, y0) with horizontal and vertical speed (vx, vy). The gravitational constant is g = 9.81.
The "measurements" are available in the form of points (xi, yi) that are generated from the physical parabola of the throw plus a random (positive and negative) perturbation of amplitude A.
Use the linear least-squares method to solve the problem, solving the normal equation of the problem directly.
Plot the "measured data" as well as the parabola obtained from the least-squares method in the same figure.
Compare the exact coefficients of the parabola obtained from the physics of an oblique throw with the coefficients obtained by the least-squares method as a function of the number of points in the measurements.
Slide 20: Carl Friedrich Gauß
German mathematician, astronomer and physicist, 30/04/1777 – 23/02/1855.
Significant contributions to:
linear algebra (Gauß algorithm, least-squares method),
statistics (Gaussian normal probability distribution),
potential theory and differential calculus (Gauß divergence theorem),
numerical analysis (Gaussian quadrature rules).
Already during his life, he was called the "prince of mathematics". He produced more than 250 scientific contributions to his research fields.