Numerical Methods and Analysis


An Introduction to Numerical Methods and Analysis

Solution of Nonlinear Equations

Introduction

In this chapter we study one of the fundamental problems of numerical analysis, namely the numerical solution of nonlinear equations. Most equations arising in practice are nonlinear and are rarely of a form which allows the roots to be determined exactly.

Consequently, numerical methods are used to solve nonlinear algebraic equations when the equations prove intractable to ordinary mathematical techniques. These numerical methods are all iterative, and they may be used for equations that contain one or several variables.

Important Points

I. A nonlinear equation in this chapter may be of any one of the following types: 1. An algebraic equation (a polynomial equation of degree n) expressible in the form a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0, where a_0, a_1, ..., a_n are constants and a_n ≠ 0. For example, any polynomial equation of degree two or higher, such as x³ − 2x − 1 = 0 (the equation of Example 1 below), is a nonlinear algebraic equation.

2. An equation in which the unknown variable appears with a power that is not a positive integer (a fractional or negative power). Such non-polynomial equations are nonlinear; for instance, equations involving √x or 1/x are of this type.

3. A transcendental equation, that is, an equation involving trigonometric, exponential or logarithmic functions. For example, transcendental equations such as cos x − x = 0, e^x − 3x = 0 and ln x + x = 0 are nonlinear.

II. The given nonlinear equation must be written in the form f(x) = 0, where f(x) is a nonlinear function.

III. The given nonlinear equation may have many roots, but we will seek an approximation to only one of its real roots, α, lying in the given interval [a, b]; that is, f(α) = 0 with α ∈ [a, b].

IV. If f(x) is a continuous function on an interval [a, b] and f(x) has opposite signs at the endpoints of the interval, i.e. f(a)f(b) < 0, then the nonlinear equation f(x) = 0 must have a root in [a, b].
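The sign condition in (IV) can be checked directly in code. The following is a minimal C sketch (not part of the original slides); the function f(x) = x³ − 2x − 1 and the interval [1.5, 2] are borrowed from Example 1 later in this chapter.

#include <stdio.h>
#include <math.h>

/* f(x) = x^3 - 2x - 1, the function of Example 1 below */
double f(double x)
{
    return pow(x, 3) - 2.0 * x - 1.0;
}

int main(void)
{
    double a = 1.5, b = 2.0;
    /* f(a)*f(b) < 0 means f changes sign on [a, b]; since f is continuous,
       the Intermediate Value Theorem guarantees a root in [a, b]. */
    if (f(a) * f(b) < 0.0)
        printf("f changes sign on [%g, %g]: a root lies in this interval.\n", a, b);
    else
        printf("No sign change on [%g, %g]: the test is inconclusive.\n", a, b);
    return 0;
}

Here f(1.5) = −0.625 and f(2) = 3, so the product is negative and a root is guaranteed in [1.5, 2].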

V. A root of a nonlinear equation may be simple (not repeated) or multiple (repeated). A simple root α is one for which f(α) = 0 but f'(α) ≠ 0. For instance, x = 1 and x = −1 are simple roots of the nonlinear equation x² − 1 = 0.

By a multiple root we mean a root α for which f(α) = 0 and f'(α) = 0. For instance, x = 1 is a multiple (double) root of the nonlinear equation x² − 2x + 1 = 0, since x² − 2x + 1 = (x − 1)².

VI. The methods we will consider in this chapter are iterative: the bisection method, the fixed-point method, Newton's method (also called the Newton-Raphson method) and the secant method, each of which gives an approximation of a single (simple) root of the nonlinear equation.

For multiple roots of a nonlinear equation we will use other iterative methods: the first modified Newton's method (also called Schroeder's method) and the second modified Newton's method. The iterative methods for approximating simple roots can also be used to approximate multiple roots, but they converge very slowly in that case.

All the numerical methods described in this chapter are applicable to general nonlinear functions. The iterative methods we will discuss in this chapter are basically of two types: one in which the convergence is guaranteed and the other in which the convergence depends on the initial approximation.

VII. Remember that the best method for approximating a simple root of a nonlinear equation is Newton's method (a quadratically convergent method), and for a multiple root it is the modified Newton's method (also quadratically convergent). Plain Newton's method applied to a multiple root of a nonlinear equation converges only linearly.
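For reference, the Newton iteration referred to in (VII) is x_(n+1) = x_n − f(x_n)/f'(x_n), and the first modified (Schroeder's) iteration multiplies the correction by the multiplicity m of the root. The sketch below is illustrative only and is not taken from the slides; the test function x³ − 2x − 1 and the starting value x = 1.5 are assumptions chosen to match Example 1.

#include <stdio.h>
#include <math.h>

double f(double x)  { return pow(x, 3) - 2.0 * x - 1.0; }  /* f(x)  */
double df(double x) { return 3.0 * x * x - 2.0; }           /* f'(x) */

int main(void)
{
    double x = 1.5;   /* initial approximation (assumed for illustration) */
    double m = 1.0;   /* multiplicity: m = 1 gives Newton's method,
                         m > 1 gives the first modified (Schroeder's) method */
    for (int n = 1; n <= 6; n++) {
        x = x - m * f(x) / df(x);   /* Newton / modified Newton step */
        printf("x_%d = %.7f   f(x_%d) = %.7f\n", n, x, n, f(x));
    }
    return 0;
}

Because the root of this test function is simple, plain Newton (m = 1) converges quadratically, roughly doubling the number of correct digits at each step.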

Definition (Root of a Nonlinear Equation). Assume that f(x) is a continuous function. A number α for which f(α) = 0 is called a root of the equation f(x) = 0 or a zero of the function f(x).

First, we shall discuss numerical iterative methods for a simple root of a nonlinear equation in a single variable. The problem here can be written down simply as: find x such that f(x) = 0.

Method of Bisection

This is one of the simplest iterative techniques for determining roots of f(x) = 0, and it needs two initial approximations to start. It is based on the Intermediate Value Theorem.

This method is also called the interval-halving method because the strategy is to bisect, or halve, the interval from one endpoint to the other and then retain the half-interval whose endpoints still bracket the root.

It is also referred to as a bracketing method, and is sometimes called Bolzano's method. The fact that the function is required to change sign only once gives us a way to determine which half-interval to retain; we keep the half on which f(x) changes sign or becomes zero.

The basis for this method can be easily illustrated by considering a function y = f(x). Our object is to find an x value for which y is zero.

Using this method, we begin by supposing that f(x) is a continuous function defined on the interval [a, b], and then by evaluating the function at the two x values a and b, chosen such that f(a)f(b) < 0.

The implication is that one of the values is negative and the other is positive. These conditions can easily be verified by sketching the function:

the function is negative at one endpoint a of the interval, positive at the other endpoint b, and continuous on a ≤ x ≤ b. Therefore the root must lie between a and b (by the Intermediate Value Theorem), and a new approximation to the root α can be calculated as

c = (a + b)/2. This iterative formula, applied repeatedly to the half-interval that still brackets the root, is known as the bisection method.

If f(c) ≈ 0, then c ≈ α is the desired root; if not, there are two possibilities. First, if f(a)f(c) < 0, then f(x) has a zero between a and c, and the process can be repeated on the new interval [a, c].

Second, if f(a)f(c) > 0, it follows that f(b)f(c) < 0, since f(a) and f(b) are known to have opposite signs. Hence f(x) has a zero between c and b, and the process can be repeated with [c, b].

We see that after one step of the process, we have found either a zero or a new bracketing interval which is precisely half the length of the original one. The process continues until the desired accuracy is achieved. A general sketch of the halving loop is shown next, and we then use the bisection process in the following example.
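Written as a loop that halts once the bracketing interval is shorter than a prescribed tolerance, the halving process looks as follows. This is only a sketch (the variable tol and the length-based stopping rule are assumptions, not taken from the slides); the 6-iteration program in Example 1 follows the same pattern with a fixed number of steps.

#include <stdio.h>
#include <math.h>

double f(double x) { return pow(x, 3) - 2.0 * x - 1.0; }  /* function of Example 1 */

int main(void)
{
    double a = 1.5, b = 2.0;   /* endpoints with f(a)*f(b) < 0 */
    double tol = 1e-6;         /* assumed stopping tolerance   */
    double c = a;

    while ((b - a) / 2.0 > tol) {
        c = (a + b) / 2.0;     /* bisect the current interval */
        if (f(c) == 0.0)
            break;             /* c happens to be an exact root */
        if (f(a) * f(c) < 0.0)
            b = c;             /* the root lies in [a, c] */
        else
            a = c;             /* the root lies in [c, b] */
    }
    printf("approximate root: %.7f   f(root) = %.7f\n", c, f(c));
    return 0;
}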

To download Turbo C++: http://www.4shared.com/rar/uBzQ2hqoce/Borland_Turbo_C_45.html?
To download the program code: http://faculty.ksu.edu.sa/dr.abhari/DocLib1/%D8%B1%D9%8A%D8%B6%20254%20%D8%A7%D9%84%D8%B7%D8%B1%D8%A7%D8%A6%D9%82%20%D8%A7%D9%84%D8%B9%D8%AF%D8%AF%D9%8A%D8%A9/%D8%A7%D9%84%D9%88%D8%AD%D8%AF%D8%A9%20%D8%A7%D9%84%D8%A3%D9%88%D9%84%D9%89/bisectio.cpp
To download the function-plotting program: http://www.4shared.com/rar/r6farDfLce/SetupGraph-442.html

Example 1

Example: Use the bisection method to find the approximation to the root of the equation x³ − 2x − 1 = 0 that is located in the interval [1.5, 2], accurate to within the given tolerance.

To download the function-plotting program: http://www.4shared.com/rar/r6farDfLce/SetupGraph-442.html

#include <stdio.h>
#include <math.h>

float a, b, c;
int i;

/* f(x) = x^3 - 2x - 1 */
float f(float x)
{
    float y;
    y = pow(x, 3) - 2 * x - 1;
    return y;
}

int main()
{
    a = 1.5;                      /* left endpoint  */
    b = 2;                        /* right endpoint */
    for (i = 1; i <= 6; i++) {
        c = (a + b) / 2;          /* midpoint of the current interval    */
        printf("%.6f\t", c);      /* print the current approximation     */
        printf("%.7f\n", f(c));   /* print the function value at c       */
        if (f(a) * f(c) < 0)
            b = c;                /* root lies in [a, c] */
        else
            a = c;                /* root lies in [c, b] */
    }
    return 0;
}
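As a check on the program above, note that x³ − 2x − 1 = (x + 1)(x² − x − 1), so the root lying in [1.5, 2] is exactly (1 + √5)/2 ≈ 1.6180340. After the six halvings performed by the loop, the final midpoint is within (2 − 1.5)/2⁶ ≈ 0.0078 of this value, and the printed function values shrink toward zero accordingly.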

Example 2

Example 3