
Finite Element Method

History

Application

Consider the two-point boundary value problem (D): find u such that

−u″(x) = f(x), 0 < x < 1, u(0) = u(1) = 0,

where f is a given continuous function on [0, 1].

Procedure in Solving the Problem Numerically 1. Obtain the variational formulation (V): find u ∈ V such that

∫₀¹ u′v′ dx = ∫₀¹ f v dx for all v ∈ V,

where V = {v : v is continuous on [0,1], v′ is piecewise continuous and bounded on [0,1], and v(0) = v(1) = 0}.

Procedure in Solving the Problem Numerically 2. Discretize the variational formulation. This means that for a chosen M ∈ ℕ, we subdivide [0,1] into M+1 subintervals, each of length h = 1/(M+1), and get the formulation (V_M): find u_M ∈ V_M such that

∫₀¹ u_M′ v′ dx = ∫₀¹ f v dx for all v ∈ V_M,

where V_M is the span of the set of hat functions {φ_1, φ_2, …, φ_M}.

Procedure in Solving the Problem Numerically 3. From the discrete variational formulation obtained previously, obtain the matrix equation Aξ = b, where

a_ij = ∫₀¹ φ_j′ φ_i′ dx and b_i = ∫₀¹ f φ_i dx.

Procedure in Solving the Problem Numerically 4. Solve the matrix equation Aξ = b. If ξ = (ξ_1, …, ξ_M)ᵀ, then the approximate solution is

u_M(x) = ξ_1 φ_1(x) + ξ_2 φ_2(x) + … + ξ_M φ_M(x).
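The four steps can be sketched in a short script. This is a minimal illustration rather than part of the original slides: it assumes the model problem −u″ = f on (0,1) with u(0) = u(1) = 0, a uniform mesh, and the simple one-point approximation b_i ≈ h·f(x_i) for the load integrals.

```python
import numpy as np

def solve_fem_1d(f, M):
    """Approximate -u'' = f on (0,1), u(0) = u(1) = 0, with M hat functions."""
    h = 1.0 / (M + 1)                      # uniform mesh width
    x = np.linspace(0.0, 1.0, M + 2)       # nodes x_0, ..., x_{M+1}
    # Step 3: stiffness matrix, tridiagonal with 2/h on the diagonal
    # and -1/h on the sub-/superdiagonals (closed form for hat functions).
    A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h
    # Load vector: b_i = int_0^1 f(x) phi_i(x) dx, here approximated by h * f(x_i).
    b = h * f(x[1:-1])
    # Step 4: solve A xi = b; the xi_j are the nodal values of u_M.
    xi = np.linalg.solve(A, b)
    return x, np.concatenate(([0.0], xi, [0.0]))
```

For instance, with the (hypothetical) data f(x) = π² sin(πx), whose exact solution is u(x) = sin(πx), the returned nodal values track the exact solution closely even for modest M.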

Steps Variational Formulation Uniqueness of Solution Hat Functions Discretization of the Variational Formulation Existence of A⁻¹ Convergence of the Approximate Solution u_M to the Exact Solution u

Variational Formulation Suppose u is a solution of (D). Then −u″ = f on (0,1). Take any v ∈ V, multiply both sides by v, and integrate over [0,1]:

−∫₀¹ u″v dx = ∫₀¹ f v dx.

Variational Formulation Integrating the left-hand side by parts, we get

−∫₀¹ u″v dx = −[u′v]₀¹ + ∫₀¹ u′v′ dx.

Variational Formulation The given boundary conditions v(0) = v(1) = 0 make the boundary term vanish, which leads to

∫₀¹ u′v′ dx = ∫₀¹ f v dx for all v ∈ V.

Variational Formulation Since v is an arbitrary element of V, we conclude that any solution u of (D) is also a solution of (V).

Variational Formulation The equation can be written as (u′, v′) = (f, v), where (w, z) = ∫₀¹ w z dx denotes the inner product of two functions on (0,1).

Variational Formulation Let us prove the reverse. Suppose that u is a solution of (V). Then

∫₀¹ u′v′ dx = ∫₀¹ f v dx for all v ∈ V.

Variational Formulation So, assuming additionally that u″ exists, integrating by parts in the opposite direction (and again using v(0) = v(1) = 0) gives

∫₀¹ (−u″ − f) v dx = 0 for all v ∈ V.

Variational Formulation Assume that u″ is continuous and bounded in the open interval (0,1).

Variational Formulation Since the integral ∫₀¹ (−u″ − f) v dx vanishes for every v ∈ V, the function −u″ − f must vanish at every point of (0,1); otherwise a suitable choice of v would make the integral nonzero.

Variational Formulation If u″ and f are continuous in (0,1), then −u″ − f is also continuous in (0,1). So, −u″ = f in (0,1), and u is also a solution of (D).

Uniqueness of Solution If u₁ and u₂ are two solutions of (V), then for any v ∈ V,

∫₀¹ u₁′v′ dx = ∫₀¹ f v dx and ∫₀¹ u₂′v′ dx = ∫₀¹ f v dx.

Uniqueness of Solution Subtracting the two equations, we get

∫₀¹ (u₁′ − u₂′) v′ dx = 0 for all v ∈ V.

Uniqueness of Solution Since it is true for any v ∈ V, it is true for v = u₁ − u₂. So,

∫₀¹ (u₁′ − u₂′)² dx = 0.

Uniqueness of Solution So u₁′ − u₂′ = 0 in (0,1), and u₁ − u₂ is constant there. Moreover, u₁″ = u₂″ = −f in (0,1), where f is continuous on [0,1].

Uniqueness of Solution But u₁(0) = u₂(0) = 0. So u₁ = u₂ on [0,1], and the solution of (V) is unique.

The Hat Functions Consider the interval [0,1]. For a chosen M ∈ ℕ, we subdivide [0,1] into M+1 subintervals. Choose the subintervals to be of length h = 1/(M+1).

The Hat Functions Including the end points 0 and 1, we consider the node points x_j = jh for j = 0, 1, …, M+1, where x_0 = 0 and x_{M+1} = 1.

The Hat Functions For j = 1, …, M, we define the hat function φ_j to be linear in the intervals [x_{j−1}, x_j] and [x_j, x_{j+1}], with φ_j(x_j) = 1 but φ_j(x_i) = 0 for i ≠ j.

The Hat Functions The hat function φ_j is also defined to be zero outside the open interval (x_{j−1}, x_{j+1}).
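As a quick concrete check, φ_j on the uniform mesh can be evaluated directly from this definition (a sketch, not from the slides; the formula 1 − |x − x_j|/h is just the two linear pieces written at once):

```python
def hat(j, x, M):
    """Value of the hat function phi_j at x, uniform mesh with M interior nodes."""
    h = 1.0 / (M + 1)       # subinterval length
    xj = j * h              # node x_j
    # Rises linearly on [x_{j-1}, x_j], falls on [x_j, x_{j+1}],
    # and vanishes outside (x_{j-1}, x_{j+1}).
    return max(0.0, 1.0 - abs(x - xj) / h)
```

As required, hat(j, x, M) equals 1 at x_j and 0 at every other node.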

The Subspace V_M of V Define the subset V_M of V to be the collection of all functions v in V such that v is linear on each subinterval [x_j, x_{j+1}], j = 0, 1, …, M.

The Subspace V_M of V Consider the nodes x_1, …, x_M. Let η_j = v(x_j) be the value of v ∈ V_M at node x_j.

The Subspace V_M of V So, any function v ∈ V_M is uniquely determined by its values at the nodes x_1, …, x_M. Similarly, any v ∈ V_M is a unique linear combination of the hat functions:

v(x) = η_1 φ_1(x) + … + η_M φ_M(x), where η_j = v(x_j).

The Subspace V_M of V Consider the hat functions H_M = {φ_1, …, φ_M}. Recall the span of H_M to be the set of all possible linear combinations of hat functions in H_M.

The Subspace V_M of V But the span of H_M is also contained in the vector space V_M. So V_M = span H_M, and H_M is a basis for V_M.
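The interpolation property φ_j(x_i) = δ_ij means the coefficients in the hat-function expansion are exactly the nodal values. A small numerical illustration (the nodal values below are arbitrary, chosen only for the check):

```python
def hat(j, x, h):
    """Hat function phi_j on the uniform mesh with spacing h."""
    return max(0.0, 1.0 - abs(x - j * h) / h)

M = 4
h = 1.0 / (M + 1)
eta = [0.3, -1.2, 0.7, 2.0]          # arbitrary nodal values eta_1..eta_M

def v(x):
    """The piecewise-linear function v = sum_j eta_j * phi_j in V_M."""
    return sum(e * hat(j + 1, x, h) for j, e in enumerate(eta))
```

Evaluating v at node x_i picks out η_i, since every other hat function vanishes there; so v is both recovered from and determined by its nodal values, and v(0) = v(1) = 0.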

Discretization of the Variational Formulation To solve the variational problem numerically is to solve its discretized form (V_M): find u_M ∈ V_M such that

∫₀¹ u_M′ v′ dx = ∫₀¹ f v dx for all v ∈ V_M.

Now, we have shown earlier that u_M = ξ_1 φ_1 + … + ξ_M φ_M for some vector ξ = (ξ_1, …, ξ_M)ᵀ.

Discretization of the Variational Formulation The equation holds if v is the hat function φ_i, so for i = 1, …, M we have

∫₀¹ u_M′ φ_i′ dx = ∫₀¹ f φ_i dx.

Then, substituting u_M = Σ_j ξ_j φ_j,

∫₀¹ (Σ_j ξ_j φ_j′) φ_i′ dx = ∫₀¹ f φ_i dx,

Discretization of the Variational Formulation which can be written as

Σ_{j=1}^{M} ξ_j ∫₀¹ φ_j′ φ_i′ dx = ∫₀¹ f φ_i dx, i = 1, …, M.

This yields a system of M linear equations with M unknowns ξ_1, …, ξ_M. The ξ_j are precisely the values of u_M at the nodes x_j.

Discretization of the Variational Formulation which can also be written as

a_i1 ξ_1 + a_i2 ξ_2 + … + a_iM ξ_M = b_i, i = 1, …, M.

In matrix form, we write Aξ = b.

Discretization of the Variational Formulation The stiffness matrix A = (a_ij) has entries

a_ij = ∫₀¹ φ_j′ φ_i′ dx,

and the load vector b = (b_i) has components

b_i = ∫₀¹ f φ_i dx.
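On this uniform mesh the entries can be computed in closed form: φ_j′ equals 1/h on (x_{j−1}, x_j), −1/h on (x_j, x_{j+1}), and 0 elsewhere, so a_jj = 2/h, a_{j,j±1} = −1/h, and all other entries vanish; A is tridiagonal. A quick midpoint-rule check of these values (a sketch, with an arbitrary mesh size):

```python
def dphi(j, x, h):
    """Derivative of the hat function phi_j: piecewise constant, +-1/h."""
    if (j - 1) * h < x < j * h:
        return 1.0 / h
    if j * h < x < (j + 1) * h:
        return -1.0 / h
    return 0.0

def a_entry(i, j, h, n=20000):
    """Midpoint-rule approximation of a_ij = int_0^1 phi_j' phi_i' dx."""
    return sum(dphi(i, (k + 0.5) / n, h) * dphi(j, (k + 0.5) / n, h)
               for k in range(n)) / n
```

With M = 4 (so h = 0.2), this reproduces a_22 = 2/h = 10, a_23 = −1/h = −5, and a_13 = 0.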

The Existence of A⁻¹ Note that A is a symmetric matrix since a_ij = ∫₀¹ φ_j′ φ_i′ dx = ∫₀¹ φ_i′ φ_j′ dx = a_ji. To show that A is nonsingular, we will show that A is positive definite. In other words, we will show that ηᵀAη > 0 for every nonzero vector η in ℝ^M.

The Existence of A⁻¹ Let η ≠ 0, where 0 is the zero vector in ℝ^M. It is possible for η to have some components that are zero, but not all. Then,

ηᵀAη = Σ_{i=1}^{M} Σ_{j=1}^{M} η_i a_ij η_j = ∫₀¹ (Σ_{j=1}^{M} η_j φ_j′)² dx = ∫₀¹ (v′)² dx ≥ 0,

where v = Σ_{j=1}^{M} η_j φ_j ∈ V_M.


Thus, for any nonzero vector η, we have to show that ηᵀAη = ∫₀¹ (v′)² dx is strictly positive to prove that A is positive definite. So we proceed further by noticing that some component η_k of η is nonzero. Then v(x_k) = η_k ≠ 0 while v(0) = 0, so v′ cannot be identically zero on (0,1), and therefore ∫₀¹ (v′)² dx > 0. We have shown that A is positive definite; hence A is nonsingular.
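The identity ηᵀAη = ∫₀¹ (v′)² dx that drives this argument can be verified numerically: v′ is constant on each subinterval with slope (η_j − η_{j−1})/h, so the integral reduces to a finite sum. A sketch (mesh size and test vector are arbitrary choices, not from the slides):

```python
import numpy as np

M = 8
h = 1.0 / (M + 1)
# Stiffness matrix for hat functions on a uniform mesh (tridiagonal).
A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h

rng = np.random.default_rng(0)
eta = rng.standard_normal(M)        # an arbitrary nonzero vector
quad = eta @ A @ eta                # the quadratic form eta^T A eta

# v = sum_j eta_j phi_j satisfies v(0) = v(1) = 0; its slope on each of
# the M+1 subintervals is (eta_j - eta_{j-1}) / h, so
# int_0^1 (v')^2 dx = sum_j (eta_j - eta_{j-1})^2 / h.
ext = np.concatenate(([0.0], eta, [0.0]))
integral = np.sum(np.diff(ext) ** 2) / h
```

Both numbers agree and are strictly positive; the eigenvalues of A are likewise all positive, consistent with A being positive definite.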

Convergence of the approximate solution to the exact solution Theorem: If u_M is the approximate solution of (V_M), then for every x ∈ [0,1] we have

|u(x) − u_M(x)| ≤ h · K,

where K is the maximum value of |u″| over the whole closed interval [0,1].

Convergence of the approximate solution to the exact solution Note that this maximum exists since u″ is continuous on [0,1], so that the Extreme-Value Theorem applies. So, as M grows larger and h = 1/(M+1) shrinks, we can expect the error to shrink to zero.
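The predicted decay of the error can be observed by solving for increasing M and comparing with a known exact solution. This sketch assumes the (hypothetical) data f(x) = π² sin(πx), for which u(x) = sin(πx), and approximates the load integrals by b_i ≈ h·f(x_i):

```python
import numpy as np

def nodal_solution(f, M):
    """Nodal values of the FEM solution of -u'' = f, u(0) = u(1) = 0."""
    h = 1.0 / (M + 1)
    x = np.linspace(0.0, 1.0, M + 2)
    A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h
    xi = np.linalg.solve(A, h * f(x[1:-1]))
    return x, np.concatenate(([0.0], xi, [0.0]))

f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: u = sin(pi x)
errors = []
for M in (4, 9, 19, 39):
    x, uM = nodal_solution(f, M)
    errors.append(np.max(np.abs(uM - np.sin(np.pi * x))))
```

The maximum nodal error shrinks steadily as h = 1/(M+1) is halved, consistent with the theorem.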

Example Problem Consider the following problem: