Functions of Several Variables: Local Linear Approximation

Presentation transcript:

Functions of Several Variables: Local Linear Approximation

Real Variables In our studies we have looked in depth at functions f : X → Y where X and Y are arbitrary metric spaces, real-valued functions f : X → ℝ where X is an arbitrary metric space, and functions f : ℝ → ℝ. Now we want to look at functions f : ℝⁿ → ℝᵐ.

Scalar Fields A function F: ℝⁿ → ℝ is called a scalar field because it assigns to each vector in ℝⁿ a scalar in ℝ. Think of the domain as a "field" in which each point is "tagged" with a number. Example: each point in a room can be associated with a temperature in degrees Celsius.

Vector Fields A function F: ℝⁿ → ℝᵐ (m > 1) is called a vector field because it assigns to each vector in ℝⁿ a vector in ℝᵐ. Think of the domain as a "field" in which each point is "tagged" with a vector. Example: if the domain is the surface of a river, we can associate each point with a current, which has both magnitude and direction and is therefore a vector.
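These two kinds of fields translate directly into code. A minimal sketch (the temperature formula and the rotational current field are made-up illustrations, not from the slides):

```python
import math

def temperature(x, y):
    """Scalar field F: R^2 -> R: each point is tagged with a number."""
    return 20.0 + 5.0 * math.exp(-(x**2 + y**2))

def current(x, y):
    """Vector field F: R^2 -> R^2: each point is tagged with a vector."""
    return (-y, x)

print(temperature(0.0, 0.0))   # a single scalar
print(current(1.0, 0.0))       # a pair: magnitude and direction
```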

Vector and Scalar Fields Let F: ℝⁿ → ℝᵐ (m > 1) be a vector field. Then there are scalar fields F_1, F_2, …, F_m from ℝⁿ → ℝ such that F(x) = (F_1(x), F_2(x), …, F_m(x)). The functions F_1, F_2, …, F_m are called the coordinate functions of F. For example, F(x, y) = (xy, x + y) has coordinate functions F_1(x, y) = xy and F_2(x, y) = x + y.

The Space ℝⁿ: Linear Algebra meets Analysis ℝⁿ is a linear space (or vector space): each element of ℝⁿ is a vector. Vectors can be added together, and any vector multiplied by a scalar (number) is also a vector. ℝⁿ is normed: every element x in ℝⁿ has a norm ||x||, which is a non-negative real number and which you can think of as the "magnitude" of the vector.

The Space ℝⁿ: Linear Algebra meets Analysis The norm ||x|| is defined to be the (usual) distance in ℝⁿ from x to 0: ||x|| = √(x_1² + x_2² + … + x_n²). The norm in ℝⁿ is analogous to the absolute value in ℝ: |x| is the distance from x to 0 on the real line.
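The definition translates into a few lines of code. A small sketch (the function name `norm` is mine):

```python
import math

def norm(x):
    """Euclidean norm: the distance in R^n from x to 0."""
    return math.sqrt(sum(xi * xi for xi in x))

# In R^1 the norm reduces to the absolute value.
print(norm((3.0, 4.0)))            # the 3-4-5 triangle
print(norm((-3.0,)) == abs(-3.0))
```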

Differentiability: One Variable When we zoom in on a "sufficiently nice" function of one variable, we see a straight line.

Zooming and Differentiability We expressed this view of differentiability by saying that f is differentiable at p if there exists a real number f′(p) such that f(x) ≈ f(p) + f′(p)(x − p), provided that x is "close" to p. More precisely, the error E(x) = f(x) − f(p) − f′(p)(x − p) satisfies E(x)/(x − p) → 0 as x → p. In other words, f is locally linear at p.
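This limit can be checked numerically. A quick sketch for the made-up example f(x) = x² at p = 1, where f′(p) = 2: the ratio E(x)/(x − p) shrinks as x approaches p.

```python
def f(x):
    return x * x

p, fprime_p = 1.0, 2.0                       # f'(1) = 2 for f(x) = x^2

ratios = []
for h in (0.1, 0.01, 0.001):
    x = p + h
    E = f(x) - (f(p) + fprime_p * (x - p))   # error of the linear approximation
    ratios.append(E / (x - p))               # shrinks toward 0 as h -> 0
print(ratios)
```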

Functions of Two Variables

When we zoom in on a “sufficiently nice” function of two variables, we see a plane.

Describing the Tangent Plane To describe a tangent line we need a single number: the slope. What information do we need to describe this plane? Besides the point (a, b), we need two numbers: the partials of f in the x- and y-directions. Equation? z = f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b).

Describing the Tangent Plane We can also write this equation in vector form. Write x = (x, y), p = (a, b), and ∇f(p) = (f_x(a, b), f_y(a, b)). Then z = f(p) + ∇f(p) · (x − p). Dot product! Gradient vector!
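The tangent-plane approximation is easy to test numerically. A sketch using the made-up example f(x, y) = x²y at (a, b) = (1, 2), where ∇f = (2xy, x²):

```python
def f(x, y):
    return x * x * y

a, b = 1.0, 2.0
grad = (2 * a * b, a * a)          # (f_x, f_y) at (a, b): here (4, 1)

def L(x, y):
    """Tangent plane: f(p) + grad f(p) . (x - p)."""
    return f(a, b) + grad[0] * (x - a) + grad[1] * (y - b)

print(f(1.1, 2.1), L(1.1, 2.1))    # nearly equal for points near (1, 2)
```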

General Linear Approximations For a general function F: ℝⁿ → ℝᵐ and a point p in ℝⁿ, we want to find a linear function A_p: ℝⁿ → ℝᵐ such that F(x) ≈ F(p) + A_p(x − p). The function A_p is linear in the linear-algebraic sense. Note that the expression A_p(x − p) is not a product: it is the function A_p acting on the vector (x − p). In the tangent-plane expression f(p) + ∇f(p) · (x − p), we can think of v ↦ ∇f(p) · v as a linear function on ℝ².

Differentiability To understand differentiability, we need to understand linear functions.

A function A is said to be linear provided that A(x + y) = A(x) + A(y) and A(cx) = cA(x) for all vectors x, y and all scalars c. Note that A(0) = 0, since A(x) = A(x + 0) = A(x) + A(0). For a function A: ℝⁿ → ℝᵐ, these requirements are very prescriptive.

Linear Functions It is not difficult to show that if A: ℝⁿ → ℝᵐ is linear, then A is of the form A(x_1, x_2, …, x_n) = (a_11 x_1 + … + a_1n x_n, a_21 x_1 + … + a_2n x_n, …, a_m1 x_1 + … + a_mn x_n), where the a_ij's are real numbers for i = 1, 2, …, m and j = 1, 2, …, n.

Linear Functions Or, to write this another way: A(x) = Ax, where A is the m × n matrix (a_ij) and x is written as a column vector. In other words, every linear function A acts just like left-multiplication by a matrix. Thus we cheerfully confuse the function A with the matrix that represents it!
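The matrix picture makes linearity mechanical to verify. A sketch with an arbitrary 2 × 3 matrix (the helper `apply` is mine):

```python
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]              # a 2 x 3 matrix: a map from R^3 to R^2

def apply(A, x):
    """Left-multiplication by A: row-by-row dot products."""
    return tuple(sum(aij * xj for aij, xj in zip(row, x)) for row in A)

x, y = (1.0, 0.0, -1.0), (2.0, 1.0, 0.0)
lhs = apply(A, tuple(xi + yi for xi, yi in zip(x, y)))        # A(x + y)
rhs = tuple(u + v for u, v in zip(apply(A, x), apply(A, y)))  # A(x) + A(y)
print(lhs, rhs)                    # equal, by linearity
```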

Linear Algebra and Analysis The requirement that A: ℝⁿ → ℝᵐ be linear is very prescriptive in other ways, too. Let A be an m × n matrix and consider the associated linear function. Then A is Lipschitz: ||Ax − Ay|| ≤ K ||x − y||, where K = √(Σ a_ij²). In particular, when ||x|| ≤ 1, ||Ax|| ≤ K. That is, A is bounded on the closed unit ball of ℝⁿ.

Norm of a Linear Function We can thus define the norm of A: ℝⁿ → ℝᵐ by ||A|| = sup{ ||Ax|| : ||x|| ≤ 1 }. Properties of this norm: ||Ax|| ≤ ||A|| ||x|| for all x ∈ ℝⁿ. ||aA|| = |a| ||A||, where a is a real number. ||BA|| ≤ ||B|| ||A|| (where B is a k × m matrix and A is an m × n matrix; BA represents the composition of B: ℝᵐ → ℝᵏ and A: ℝⁿ → ℝᵐ).
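One concrete consequence: the operator norm is bounded above by the quantity √(Σ a_ij²) from the previous slide (by Cauchy-Schwarz), so the inequality ||Ax|| ≤ ||A|| ||x|| can be checked numerically with that upper bound. A sketch (the matrix and vector are chosen arbitrarily):

```python
import math

A = [[1.0, -2.0],
     [3.0,  0.5]]

def apply(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# sqrt(sum of squared entries): an upper bound for the operator norm ||A||.
bound = math.sqrt(sum(a * a for row in A for a in row))
x = (0.6, -0.8)                            # a unit vector
print(norm(apply(A, x)), bound * norm(x))  # first value never exceeds second
```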

Norm of a Linear Function Further properties of this norm: ||A + B|| ≤ ||A|| + ||B|| (where A and B are m × n matrices). ||aA|| = |a| ||A||, where a is a real number. These two facts imply that ||A − B|| is a distance function that measures the distance between linear functions from ℝⁿ to ℝᵐ. In other words, this norm on linear functions from ℝⁿ to ℝᵐ acts pretty much like the norm on ℝⁿ. To "first order," the properties that we associate with absolute values hold for this norm.

Local Linear Approximation For all x, we have F(x) = A_p(x − p) + F(p) + E(x), where E(x) is the error committed by L_p(x) = A_p(x − p) + F(p) in approximating F(x). For F to be differentiable at p, we require ||E(x)||/||x − p|| → 0 as x → p.

Local Linear Approximation What can we say about the relationship between the matrix A_p and the coordinate functions F_1, F_2, …, F_m? Quite a lot, actually. Fact: Suppose that F: ℝⁿ → ℝᵐ is given by coordinate functions F = (F_1, F_2, …, F_m), and all the partial derivatives of F exist "near" p ∈ ℝⁿ and are continuous at p. Then there is some matrix A_p such that F can be approximated locally near p by L_p(x) = A_p(x − p) + F(p).

We Just Compute First, I ask you to believe that if A_p = (A_1, A_2, …, A_m), then a_ji = ∂F_j/∂x_i(p) for all i and j with 1 ≤ i ≤ n and 1 ≤ j ≤ m. This should not be too hard. Why? Think about tangent lines; think about tangent planes. Considering now the matrix formulation, what is the partial of A_j with respect to x_i? (Note: A_j(x) = a_j1 x_1 + a_j2 x_2 + … + a_jn x_n, so ∂A_j/∂x_i = a_ji.)

The Derivative of F at p (sometimes called the Jacobian matrix of F at p): F′(p) is the m × n matrix whose (i, j) entry is ∂F_i/∂x_j(p).
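The Jacobian matrix can be approximated by finite differences and compared with the analytic partials. A sketch for the made-up vector field F(x, y) = (xy, x + y²), whose Jacobian is [[y, x], [1, 2y]]:

```python
def F(x, y):
    return (x * y, x + y * y)

def jacobian_fd(F, p, h=1e-6):
    """Forward-difference approximation to the Jacobian matrix at p."""
    base = F(*p)
    J = []
    for i in range(len(base)):          # one row per coordinate function
        row = []
        for j in range(len(p)):         # one column per input variable
            bumped = list(p)
            bumped[j] += h
            row.append((F(*bumped)[i] - base[i]) / h)
        J.append(row)
    return J

J = jacobian_fd(F, (2.0, 3.0))
print(J)    # approximately [[3, 2], [1, 6]]
```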

Some Useful Derivatives Identify the derivative of each vector field at a point p. Guess, then verify! The constant function F(x) = v. The identity function F(x) = x. The linear function A(x) = Ax. A ∘ F (where A is a linear function and F is differentiable). F + G (assuming both F and G are differentiable). aF (where a ∈ ℝ and F is differentiable).

Continuity of the Derivative? Theorem: Suppose that all of the partial derivatives of F: ℝⁿ → ℝᵐ exist in a neighborhood of the point p and that they are all continuous at p. Then for every ε > 0 there exists δ > 0 such that if d(z, p) < δ, then ||F′(z) − F′(p)|| < ε. In other words, if the partials exist and are continuous near p, then the Jacobian matrix at p is "close" to the Jacobian matrix at any "nearby" point.

Mean Value Theorem for Vector Fields? Theorem: Let E be an open subset of ℝⁿ and let F: E → ℝᵐ. Suppose that a, b, and the entire line segment joining them are in E. If F is differentiable at every point on the line segment between a and b (including the endpoints), then there exists c on the segment between a and b such that F(b) − F(a) = F′(c)(b − a). Note: this does not hold if m > 1! Even if n = 1 and m = 2.

Example? The standard way to interpret F: ℝ → ℝ² is to picture a (parametric) curve in the plane. Picture a fly flying around the curve. Its velocity (a vector!) at any point is the derivative of the parametric curve at that point. What would F(b) − F(a) = F′(c)(b − a) mean for a closed curve, where F(a) = F(b)?
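The fly's flight around a circle makes the failure concrete. A sketch (the parametrization and sample count are my choices): F(2π) − F(0) = (0, 0), yet the speed never drops to zero, so no c can satisfy F(b) − F(a) = F′(c)(b − a).

```python
import math

def F(t):
    return (math.cos(t), math.sin(t))      # a closed circular curve

def Fprime(t):
    return (-math.sin(t), math.cos(t))     # velocity: never the zero vector

a, b = 0.0, 2 * math.pi
diff = tuple(F(b)[i] - F(a)[i] for i in range(2))   # displacement: ~ (0, 0)
speeds = [math.hypot(*Fprime(a + k * (b - a) / 100)) for k in range(101)]
print(diff, min(speeds))   # displacement vanishes, but the speed is always 1
```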

The "Take Home Message" The set of linear functions on ℝⁿ is normed, and that norm behaves pretty much like the absolute value function. There is a multi-variable version of the Mean Value Theorem that involves inequalities in the norms. If the partials exist and are continuous, the Jacobian matrices corresponding to nearby points are "close" under the norm. I will remind you of these results when we need them.