
Solving Systems of Equations

Rule of Thumb: More equations than unknowns ⇒ the system is unlikely to have a solution. The same number of equations as unknowns ⇒ the system is likely to have a unique solution. Fewer equations than unknowns ⇒ the system is likely to have infinitely many solutions. Solving a System with Several Equations in Several Unknowns: we are interested in this last situation! We expect to have as many "free variables" as the difference between the number of unknowns and the number of equations.

Systems of Equations. A linear system can be solved analytically, using matrix methods. A non-linear system is more difficult!

Linear Systems of Equations. We can write such a system as a matrix/vector equation, and we can solve the system using "elementary row operations."

Row Reduce the Matrix

Solving the System Now we can reinterpret this reduced matrix as a solution to the system of equations.

When can we do it? We were able to solve for the variables y₁, y₂, and y₃ in terms of x₁ and x₂ because the corresponding sub-matrix is invertible.

The Solution Function We started with three equations in five unknowns. As expected, our first three unknowns depend on two “free variables,” two being the difference between the number of unknowns and the number of equations.

F:  5 →  3 g :  2 →  3

Non-linear Equations. Though we completely understand how to solve systems of linear equations, and we "luck out" with a small number of non-linear systems, most are impossible to solve analytically!

So How do we Proceed? This equation is already too hard…. We’ll start with something a bit simpler. What would you do with this one?

Iterating real functions. Suppose M: ℝ → ℝ is a continuous function, and that we iterate M beginning with x = x₀ to obtain the sequence x₀, M(x₀), M(M(x₀)), … This sequence is called the iterated map on M based at x₀. Fact: If the iterated map converges, then it converges to a fixed point of M. That is, it converges to a real number x such that M(x) = x.
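A minimal sketch of an iterated map. The choice of M below is ours, not the slide's: M(x) = (x + 2/x)/2 has the fixed point √2 (the Babylonian square-root iteration).

```python
def iterate_map(M, x0, steps=20):
    """Return the sequence x0, M(x0), M(M(x0)), ... (the iterated map on M)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(M(xs[-1]))
    return xs

# Illustrative M with fixed point sqrt(2); any fixed point satisfies M(x) = x.
M = lambda x: (x + 2 / x) / 2
seq = iterate_map(M, x0=1.0)
x = seq[-1]
assert abs(M(x) - x) < 1e-12   # the limit is a fixed point of M
```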

Why a Fixed Point? Suppose the sequence x₀, M(x₀), M(M(x₀)), … converges to x. Applying M to every term gives the sequence M(x₀), M(M(x₀)), …, and the continuity of M tells us that this sequence converges to M(x). But this is just the "tail" of the sequence that converged to x! Removing the first term won't affect convergence, so M(x) = x.

Graphical Illustration. Notice that a fixed point of a function M is an intersection of the graph of M with the line y = x.

Solving equations. Suppose we want to find a root of the equation f(x) = 0. We can rewrite the equation in the form x = g(x) and reinterpret the problem as a fixed point problem.

Root finding via iteration. When we iterate the function g starting at x₀ = 0, we get a sequence whose limit is, indeed, a solution to the equation.
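The slide's specific equation was an image and is not preserved; as an illustration of the rewriting trick, assume the equation is cos(x) − x = 0, rewritten as the fixed point problem x = cos(x):

```python
import math

# Assumed example: f(x) = cos(x) - x = 0, rewritten as x = g(x) with g = cos.
def g(x):
    return math.cos(x)

x = 0.0                  # start at x0 = 0, as on the slide
for _ in range(100):
    x = g(x)

# The limit solves the original equation: cos(x) - x = 0.
assert abs(math.cos(x) - x) < 1e-10
```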

Problem... This works nicely when the iterated map converges, but it may not converge. For instance, if we try the same trick with a different rewriting x = g(x) of the equation, the iterated map goes to infinity. What's the difference? What's going on?

Attracting and Repelling Fixed Points. Sometimes a fixed point "attracts" the iteration and sometimes it "repels" the iteration. The key difference is the derivative of the function at the fixed point.

The Derivative at the Fixed Point. Theorem: Suppose that M is continuously differentiable on [a,b] and that p is a fixed point of M lying in [a,b]. If |M′(p)| < 1, then there exists some subinterval I of [a,b] containing p such that if x₀ is in I, then the iterated map based at x₀ converges to p. Conversely, if |M′(p)| > 1, then there is some interval I about p such that every iterated map based at a point of I (other than p, of course) will eventually leave I and will thus not converge to p.
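A hedged sketch of the dichotomy, with a function of our own choosing: M(x) = x² has fixed points p = 0 (M′(0) = 0, attracting) and p = 1 (M′(1) = 2, repelling).

```python
# M(x) = x**2 is an illustrative choice, not from the slides.
def M(x):
    return x * x

x = 0.9              # start near the repelling fixed point p = 1
for _ in range(50):
    x = M(x)

# The iterates leave every neighborhood of p = 1 (where |M'| > 1)
# and are drawn to p = 0 instead (where |M'| < 1).
assert abs(x - 1.0) > 0.5
assert abs(x - 0.0) < 1e-6
```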

Rethinking the scheme. Let x be a real number and K a non-zero real number. Note that x is a solution to the equation f(x) = 0 if and only if it is also a solution to the equation x = x − K f(x). So our fixed point root-finding scheme can be applied to any iteration function of the form M(x) = x − K f(x). Choose the value of K so that the derivative M′(x) = 1 − K f′(x) is "favorable."

The best of all possible worlds. It is not hard to show that the rate at which the iterated maps on M converge to the fixed point p is governed by the size of |M′(p)|: the closer |M′(p)| is to zero, the faster the convergence rate. While we are making choices, we might as well be ambitious and choose K so that we not only get convergence to the root, but really fast convergence! For M′(p) = 1 − K f′(p) = 0 we want K = 1/f′(p). This works, provided that f′(p) ≠ 0.

So in the best of all possible worlds, we want to iterate the function M(x) = x − f(x)/f′(p), where p is a root of f. Notice that this looks remarkably like the iteration function for Newton's method, N(x) = x − f(x)/f′(x). In M, the denominator f′(p) is constant (given p!) and ideally chosen for convergence near p; in Newton's method, f′(x) is evaluated at each step, but is not far from f′(p) when x is close to p.

Quasi-Newton methods: iterative ways to find p with f(p) = 0. Newton's method iterates M(x) = x − f(x)/f′(x): we must evaluate a new f′(x) at each step, but it requires no "magic knowledge" and converges in a neighborhood of p. The "pretty good" method iterates Q(x) = x − f(x)/D: if we choose D near enough to f′(p) so that |Q′(p)| < 1/2, iterated maps will converge in a neighborhood of p. The "Leibnitz" method, the "best of all possible worlds," iterates M(x) = x − f(x)/f′(p): since M′(p) = 0, iterated maps converge very quickly in a neighborhood of p.
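A sketch comparing the three iteration functions. The test function f(x) = x² − 2 (root p = √2, f′(p) = 2√2 ≈ 2.83) and the constant D = 3 are illustrative choices, not from the slides:

```python
import math

f = lambda x: x * x - 2          # assumed test function; root p = sqrt(2)
fp = lambda x: 2 * x             # its derivative

def run(step, x0=1.0, n=40):
    """Iterate a step function n times from x0."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

newton = run(lambda x: x - f(x) / fp(x))            # new f'(x) each step
pretty_good = run(lambda x: x - f(x) / 3.0)         # constant D = 3 near f'(p)
leibnitz = run(lambda x: x - f(x) / fp(math.sqrt(2)))  # D = f'(p) exactly

# All three converge to the root in a neighborhood of p.
for x in (newton, pretty_good, leibnitz):
    assert abs(f(x)) < 1e-9
```

For the "pretty good" method, Q′(p) = 1 − f′(p)/3 ≈ 0.057, comfortably under the 1/2 threshold from the slide, so linear convergence is guaranteed; Newton and "Leibnitz" converge much faster near p.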

One Equation in Two Unknowns. Can we use this set of ideas to tackle an equation f(x, y) = 0 in two unknowns? Yes! First we recall that the solutions to this equation are the zeros of the 2-variable function f.

0-Level Curves The pairs (x,y) that satisfy the equation f(x,y)=0 lie on the intersection of the graph of f and the horizontal plane z = 0. That is, they lie on the 0-level curves of the function f.

Taking a "piece." Though the points on the 0-level curves of f do not form a function, portions of them do.

Consider the contour line f(x, y) = 0 in the xy-plane. Idea: At least in small regions, this curve might be described by a function y = g(x). Our goal: find such a function!

Start with a point (a, b) on the contour line where the partial derivative with respect to y is not 0; call this value D = f_y(a, b) ≠ 0. In a small box around (a, b), we can hope to find y = g(x). Make sure all of the y-partials in this box are close to D.

How to construct g(x): Fix x. Construct the function φ_x(y) = y − f(x, y)/D. Iterate φ_x(y) to find its fixed point, where f(x, y) = 0. Let the fixed point be g(x). What's going on here? For our fixed value of x, we find g(x) so that f(x, g(x)) = 0, and we are doing this by iteration.

How do we know this works? For x = a, this is just the ideal "Leibnitz" method for finding g(a) = b. For small enough boxes, it is a "pretty good" method elsewhere. Non-trivial issues: make sure the iterated map converges, and make sure you get the right fixed point. (Don't leave the box!)
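A hedged numerical sketch of the construction. The function f(x, y) = x² + y² − 1 (the unit circle), the point (a, b) = (0, 1), and hence D = f_y(0, 1) = 2 are our example choices; near (0, 1) the contour is the graph of g(x) = √(1 − x²).

```python
import math

# Assumed example: f(x, y) = x^2 + y^2 - 1, with (a, b) = (0, 1).
def f(x, y):
    return x * x + y * y - 1

D = 2.0   # D = f_y(a, b) = 2*b at (0, 1)

def g(x, y0=1.0, n=200):
    """For fixed x, iterate phi_x(y) = y - f(x, y)/D from y0 = b."""
    y = y0
    for _ in range(n):
        y = y - f(x, y) / D
    return y

# The fixed point satisfies f(x, g(x)) = 0, tracing the upper unit circle.
assert abs(f(0.3, g(0.3))) < 1e-10
```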

Staying in the Box?