Engineering Analysis ENG 3420 Fall 2009 Dan C. Marinescu Office: HEC 439 B Office hours: Tu-Th 11:00-12:00.

Lecture 13
Last time:
- Problem solving in preparation for the quiz
- Linear algebra concepts: vector spaces, linear independence, orthogonal vectors, bases, matrices
Today:
- Solving systems of linear equations (Chapter 9)
- Graphical methods
Next time:
- Gauss elimination

Solving systems of linear equations
Matrices provide a concise notation for representing and solving simultaneous linear equations: a set of n linear equations in n unknowns can be written compactly as [A]{x} = {b}, where [A] is the matrix of coefficients, {x} the vector of unknowns, and {b} the vector of constants.
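As a small worked illustration (the numbers here are ours, not taken from the original slide), the pair of equations

   3x1 + 2x2 = 18
  -1x1 + 2x2 =  2

takes the form [A]{x} = {b} with, in MATLAB-style row notation,

  A = [3 2; -1 2],   x = [x1; x2],   b = [18; 2]

and has the solution x1 = 4, x2 = 3.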

Solving systems of linear equations in MATLAB
Two ways to solve systems of linear algebraic equations [A]{x} = {b}:
- Left division: x = A\b
- Matrix inversion: x = inv(A)*b
Matrix inversion only works for square, non-singular systems, and it is less efficient than left division.
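As a minimal sketch, both approaches applied to the illustrative 2x2 system introduced above (our numbers, not a system from the slides) return the same answer:

A = [3 2; -1 2];     % coefficient matrix
b = [18; 2];         % right-hand-side vector
x = A\b              % left division: returns [4; 3]
x = inv(A)*b         % matrix inversion: also [4; 3], but less efficient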

Solving systems of linear equations graphically
For small sets of simultaneous equations, a solution can be obtained by graphing the straight line representing each equation and locating the intersection of those lines. There is no guarantee, however, that a usable solution can be found:
a) no solution exists (the lines are parallel);
b) an infinite number of solutions exist (the lines coincide);
c) the system is ill-conditioned (the lines are nearly parallel, so the intersection point is difficult to determine precisely).
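A plotting sketch for the same illustrative 2x2 system used above (our numbers, not the slide's figure):

x1  = linspace(0, 8, 100);
x2a = (18 - 3*x1)/2;      %  3*x1 + 2*x2 = 18, solved for x2
x2b = (2 + x1)/2;         % -x1 + 2*x2 = 2, solved for x2
plot(x1, x2a, x1, x2b), grid on
xlabel('x_1'), ylabel('x_2')
legend('3x_1 + 2x_2 = 18', '-x_1 + 2x_2 = 2')
% the lines intersect at x_1 = 4, x_2 = 3, the solution of the system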

Determinant of the square matrix A = [a_ij]
The determinant can be computed by expanding along any row i:
  det(A) = a_i1*A_i1 + a_i2*A_i2 + ... + a_in*A_in
Here the coefficient A_ij of a_ij is called the cofactor of a_ij. A cofactor is a polynomial in the entries of the remaining rows of A and can be viewed as the partial derivative of det(A) with respect to a_ij. The cofactor contains only entries from an (n-1) x (n-1) matrix M_ij, called a "minor," obtained from A by eliminating row i and column j:
  A_ij = (-1)^(i+j) * det(M_ij)

Determinants of several matrices
Determinants for 1x1, 2x2, and 3x3 matrices are:
  1x1:  det([a11]) = a11
  2x2:  det(A) = a11*a22 - a12*a21
  3x3:  det(A) = a11*(a22*a33 - a23*a32) - a12*(a21*a33 - a23*a31) + a13*(a21*a32 - a22*a31)
Determinants for square matrices larger than 3x3 are more complicated to evaluate by hand.
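A quick MATLAB check of the 2x2 and 3x3 formulas against the built-in det function (the matrix values are arbitrary, chosen only for illustration):

A2 = [1 2; 3 4];
d2 = A2(1,1)*A2(2,2) - A2(1,2)*A2(2,1);                  % hand formula: -2
fprintf('2x2: formula = %g, det() = %g\n', d2, det(A2));

A3 = [2 1 0; 1 3 1; 0 1 2];
d3 = A3(1,1)*(A3(2,2)*A3(3,3) - A3(2,3)*A3(3,2)) ...
   - A3(1,2)*(A3(2,1)*A3(3,3) - A3(2,3)*A3(3,1)) ...
   + A3(1,3)*(A3(2,1)*A3(3,2) - A3(2,2)*A3(3,1));        % hand formula: 8
fprintf('3x3: formula = %g, det() = %g\n', d3, det(A3));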

Properties of determinants
- If we permute two rows of the square matrix A, the sign of the determinant det(A) changes.
- The determinant of the transpose of a matrix A is equal to the determinant of the original matrix: det(A') = det(A).
- If two rows of A are identical, then det(A) = 0.

Cramer's Rule
Consider the system of linear equations [A]{x} = {b}. Each unknown x_i can be expressed as the ratio of two determinants: the denominator is D = det(A), and the numerator D_i is obtained from D by replacing the column of coefficients of x_i with the vector of constants {b} = (b1, b2, ..., bn):
  x_i = D_i / D

Example of Cramer's Rule
Find x2 in the following system of equations:
- Compute the determinant D of the coefficient matrix.
- Compute D2 by replacing the second column of D with {b}.
- Divide: x2 = D2/D.
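The numeric system shown on the slide is not reproduced in this transcript; as a stand-in, here are the same steps applied in MATLAB to the illustrative 2x2 system used earlier (our numbers):

A = [3 2; -1 2];  b = [18; 2];
D  = det(A);              % determinant of the coefficient matrix: 8
A2 = A;  A2(:,2) = b;     % replace the 2nd column with b
D2 = det(A2);             % determinant of the modified matrix: 24
x2 = D2/D                 % Cramer's rule: x2 = 24/8 = 3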

Gauss Elimination
Gauss elimination: a sequential process of removing unknowns from the equations using forward elimination followed by back substitution.
"Naïve" Gauss elimination: the process does not check for potential problems resulting from division by zero.

Naïve Gauss Elimination (cont.)
Forward elimination:
- Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond.
- Continue this process with the second row to remove the second coefficient from the third row and beyond.
- Stop when an upper triangular matrix remains.
Back substitution:
- Starting with the last row, solve for the unknown, then substitute that value into the next highest row.
- Because of the upper-triangular form of the matrix, each row contains only one more unknown.

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination (no pivoting)
%   x = GaussNaive(A,b) solves A*x = b by forward elimination
%   followed by back substitution.
ExA = [A b];                     % augmented matrix
[m,n] = size(A);
q = length(b);
if (m ~= n)
    fprintf('Error: input matrix is not square; n = %3.0f, m = %3.0f\n', n, m);
end
if (n ~= q)
    fprintf('Error: vector b has a different dimension than n; q = %2.0f\n', q);
end
n1 = n+1;
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = ExA(i,k)/ExA(k,k);
        ExA(i,k:n1) = ExA(i,k:n1) - factor*ExA(k,k:n1);
    end
end
% back substitution
x = zeros(n,1);
x(n) = ExA(n,n1)/ExA(n,n);
for i = n-1:-1:1
    x(i) = (ExA(i,n1) - ExA(i,i+1:n)*x(i+1:n))/ExA(i,i);
end

>> C = [ ; ; ]
C =
>> d = [588.6; 686.7; 784.8]
d =
>> x = GaussNaive(C,d)
x =

>> A = [ ; ; ; ; ; ]
A =
>> b = b'
b =
>> x = GaussNaive(A,b)
x =
   NaN
(The NaN output indicates that naïve elimination divided by a zero pivot element for this system; this is the problem addressed by pivoting below.)

>> x = A\b
x =
>> x = inv(A)*b
x =
(The built-in solvers handle the same system because they reorder the rows, i.e., pivot, as needed.)

Complexity of Gauss elimination
To solve an n x n system of linear equations by Gauss elimination we carry out on the order of 2n^3/3 floating-point operations for the forward elimination plus on the order of n^2 operations for the back substitution.
Flops: floating-point operations.
Mflops/sec: the number of floating-point operations a processor executes per second.
Conclusions:
- As the system gets larger, the computation time increases greatly (roughly as n^3).
- Most of the effort is incurred in the elimination step.
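A small sketch that evaluates these leading-order counts for increasing n (the 2n^3/3 and n^2 figures are the standard operation-count estimates, not measurements from this course):

n = [10 100 1000 10000];
flops_elim = 2*n.^3/3;            % leading-order cost of forward elimination
flops_back = n.^2;                % leading-order cost of back substitution
ratio = flops_elim./flops_back;   % elimination dominates as n grows
disp([n' flops_elim' flops_back' ratio'])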

Pivoting
If a coefficient along the diagonal is 0 (problem: division by zero) or close to 0 (problem: round-off error), then Gauss elimination runs into trouble.
Partial pivoting: determine the coefficient with the largest absolute value in the column below the pivot element; the rows are then switched so that this largest element becomes the pivot element.
Complete pivoting: the columns to the right of the pivot element are also searched, and both rows and columns may be switched.

Partial Pivoting Program
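The program itself appeared as an image on the original slide. As a stand-in, here is a minimal sketch of Gauss elimination with partial pivoting, modeled on the GaussNaive function above (the function name GaussPivot and the details of the code are our assumptions, not the slide's listing):

function x = GaussPivot(A,b)
% GaussPivot: Gauss elimination with partial pivoting (sketch)
[m,n] = size(A);
if (m ~= n), error('input matrix is not square'); end
ExA = [A b(:)];                        % augmented matrix
n1 = n+1;
% forward elimination with partial pivoting
for k = 1:n-1
    % find the largest |candidate pivot| in column k, rows k..n
    [big,p] = max(abs(ExA(k:n,k)));
    p = p + k - 1;
    if p ~= k
        ExA([k p],:) = ExA([p k],:);   % swap rows k and p
    end
    for i = k+1:n
        factor = ExA(i,k)/ExA(k,k);
        ExA(i,k:n1) = ExA(i,k:n1) - factor*ExA(k,k:n1);
    end
end
% back substitution
x = zeros(n,1);
x(n) = ExA(n,n1)/ExA(n,n);
for i = n-1:-1:1
    x(i) = (ExA(i,n1) - ExA(i,i+1:n)*x(i+1:n))/ExA(i,i);
end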

Tridiagonal systems of linear equations
A tridiagonal system of linear equations is a banded system with a bandwidth of 3: only the main diagonal and the diagonals immediately above and below it contain nonzero elements. Such systems can be solved using the same steps as Gauss elimination, but with much less effort because most of the matrix elements are already 0.

Tridiagonal system solver
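The solver was shown as an image on the original slide. The usual approach is the Thomas algorithm; the sketch below assumes the three diagonals are passed as vectors e (sub-diagonal), f (main diagonal), g (super-diagonal) together with the right-hand side r — the function name and argument convention are our assumptions:

function x = Tridiag(e,f,g,r)
% Tridiag: solve a tridiagonal system with the Thomas algorithm (sketch)
%   e - sub-diagonal (e(1) unused), f - main diagonal,
%   g - super-diagonal (g(n) unused), r - right-hand side
n = length(f);
% forward elimination: eliminate the sub-diagonal
for k = 2:n
    factor = e(k)/f(k-1);
    f(k) = f(k) - factor*g(k-1);
    r(k) = r(k) - factor*r(k-1);
end
% back substitution
x = zeros(n,1);
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end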