**Calculus Concepts 2/e LaTorre, Kenelly, Fetta, Harris, and Carpenter**

Chapter 10: Analyzing Multivariable Change: Optimization

Copyright © by Houghton Mifflin Company. All rights reserved.

**Chapter 10 Key Concepts**

- Multivariable Critical Points
- Multivariable Optimization
- Constrained Optimization
- The Method of Least Squares
**Multivariable Critical Points**

- Relative minimum: an output value smaller than any of those near it
- Relative maximum: an output value larger than any of those near it
- Absolute minimum: an output value smaller than all of those in the region
- Absolute maximum: an output value larger than all of those in the region
- Saddle point: an output that is a maximum of one cross section and a minimum of another

**Multivariable Critical Points: Example**

Contour curves near a saddle point curve away from the point.

**Multivariable Critical Points: Example**

The percentage of sugar converted to olestra when the temperature is 150°C is shown below as a function of processing time and the ratio of peanut oil to sugar. The saddle point occurs when the time is 11 hours and the ratio is 11:1.

**Critical Points: Exercise 10.1 #5**

Make a dot on the contour plot at the approximate location of each critical point. Label each point as a relative maximum, relative minimum, or saddle point.

**Multivariable Optimization**

Determinant Test: Let f be a continuous multivariable function with two input variables x and y, and let (a, b) be a point at which both first partial derivatives of f are 0. The determinant of the second partials matrix evaluated at the point (a, b) is

D(a, b) = fxx(a, b) · fyy(a, b) - [fxy(a, b)]²

- If D(a, b) > 0 and fxx < 0 at (a, b), f(a, b) is a maximum.
- If D(a, b) > 0 and fxx > 0 at (a, b), f(a, b) is a minimum.
- If D(a, b) < 0, f has a saddle point at (a, b).
- If D(a, b) = 0, the test fails.
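Once the three second partials at a critical point are known, the determinant test is just a few comparisons. A minimal sketch (the function name and the x² - y² example are illustrative, not from the text):

```python
def classify_critical_point(fxx, fyy, fxy):
    """Apply the determinant test, given the three second partial
    derivatives evaluated at a critical point (a, b)."""
    d = fxx * fyy - fxy ** 2  # determinant of the second partials matrix
    if d > 0:
        return "relative maximum" if fxx < 0 else "relative minimum"
    if d < 0:
        return "saddle point"
    return "test fails"

# f(x, y) = x**2 - y**2 has a saddle point at the origin:
# fxx = 2, fyy = -2, fxy = 0, so D = -4 < 0.
print(classify_critical_point(2, -2, 0))  # saddle point
```

Note that when D(a, b) = 0 the test genuinely gives no information; the point must be examined another way, for example with cross sections.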

**Multivariable Optimization: Example**

The volume index for cake batter is given by V(L, t) = -3.1L² + 22.4L - 0.1t² + 5.3t when L grams of leavening is used and the cake is baked at 177°C for t minutes. Find the maximum volume possible and the conditions needed to achieve that volume.

VL = -6.2L + 22.4 = 0 gives L ≈ 3.6 grams of leavening.
Vt = -0.2t + 5.3 = 0 gives t = 26.5 minutes of baking time.
V(3.6, 26.5) ≈ 110.7.

Note: V = 100 is the uncooked batter volume.

**Multivariable Optimization: Example**

The cross sections of V at L = 3.6 and t = 26.5 are both concave down, which suggests the critical point is a maximum.

**Multivariable Optimization: Example**

We must verify that the volume is a maximum. At (3.6, 26.5, 110.7):

VLL = -6.2, Vtt = -0.2, VLt = 0, VtL = 0

D(3.6, 26.5) = (-6.2)(-0.2) - 0² = 1.24 > 0 and VLL < 0, so (3.6, 26.5, 110.7) is a maximum.

**Optimization: Exercise 10.2 #9**

A restaurant mixes ground beef that costs $b per pound with pork sausage that costs $p per pound to make a meat mixture used on its pizza. The quarterly revenue from its pizza sales is given by R(b, p) = 14b - 3b² - bp - 2p² + 12p thousand dollars. Find the prices at which the restaurant should try to purchase the beef and sausage in order to maximize quarterly revenue.

Rb = 14 - 6b - p = 0
Rp = -b - 4p + 12 = 0

Solving the system of equations gives b ≈ $1.91 and p ≈ $2.52. The critical point is (1.91, 2.52, 28.52).

**Optimization: Exercise 10.2 #9**

Rbb = -6, Rbp = -1, Rpb = -1, Rpp = -4

D(1.91, 2.52) = (-6)(-4) - (-1)² = 23 > 0 and Rbb < 0, so (1.91, 2.52, 28.52) is a maximum.
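Because both first partials are linear in b and p, the critical point comes from a 2×2 linear system. A quick check by Cramer's rule (a sketch, not from the slides):

```python
# First-order conditions for R(b, p) = 14b - 3b^2 - bp - 2p^2 + 12p:
#   R_b = 14 - 6b - p  = 0   ->   6b +  p = 14
#   R_p = 12 -  b - 4p = 0   ->    b + 4p = 12
det = 6 * 4 - 1 * 1           # 23
b = (14 * 4 - 1 * 12) / det   # 44/23, about 1.91
p = (6 * 12 - 14 * 1) / det   # 58/23, about 2.52

def R(b, p):
    return 14*b - 3*b**2 - b*p - 2*p**2 + 12*p

print(round(b, 2), round(p, 2), round(R(b, p), 2))  # 1.91 2.52 28.52
```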

**Constrained Optimization**

The minimum (or maximum) output of f subject to the constraint g(x, y) = c can be found by determining the smallest (or largest) value M for which the constraint curve g(x, y) = c and the contour curve f(x, y) = M touch, and then finding the point (x0, y0) where the two curves meet.

**Constrained Optimization**

The Lagrange Multiplier: To find the optimal solution of f(x, y) subject to the constraint g(x, y) = c, we must solve the system of equations

fx = λgx
fy = λgy
g(x, y) = c

**Constrained Optimization: Example**

A mattress manufacturing process has the Cobb-Douglas production function f(L, K) = 48.1L^0.6 K^0.4 mattresses, where L represents the number of worker hours (in thousands) and K represents the amount invested in capital (in thousands of dollars). A plant manager has $98,000 to be invested in capital or spent on labor, and the current average wage of an employee is $8 per hour. Write the budget constraint equation and find the capital and labor expenditures that will result in the greatest number of mattresses being produced subject to the budget constraint.

**Constrained Optimization: Example**

The budget constraint equation is g(L, K) = 8L + K = 98 thousand dollars, where L and K are as previously described. The following equations must be satisfied at the point at which maximum production occurs:

28.86L^-0.4 K^0.4 = 8λ
19.24L^0.6 K^-0.6 = λ
8L + K = 98

**Constrained Optimization: Example**

Solving the system yields K = 39.2 and L = 7.35. Therefore, labor costs are (7.35 thousand worker hours)($8 per hour) = $58,800, and the capital investment is $39,200. The production level is f(7.35, 39.2) ≈ 690 mattresses. To verify that a maximum occurred, we consider nearby points on the budget line: f(7.3, 39.6) and f(7.4, 38.8) are both less than the extreme value, so a maximum occurred.
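For a Cobb-Douglas function with a linear budget, the Lagrange conditions reduce to spending each exponent's share of the budget on the corresponding input; this closed-form shortcut (my derivation, not stated in the slides) reproduces the solution and the nearby-point check:

```python
# f(L, K) = 48.1 * L**0.6 * K**0.4 subject to 8L + K = 98.
# The Cobb-Douglas rule: spend fraction 0.6 of the budget on labor
# and 0.4 on capital.
budget = 98.0   # thousand dollars
wage = 8.0      # $8/hour = 8 thousand dollars per thousand worker hours
L = 0.6 * budget / wage   # 7.35 thousand worker hours
K = 0.4 * budget          # 39.2 thousand dollars of capital

def f(L, K):
    return 48.1 * L**0.6 * K**0.4

# Nearby points on the budget line (8L + K = 98) yield fewer mattresses:
print(f(L, K) > f(7.3, 39.6) and f(L, K) > f(7.4, 38.8))  # True
```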

**Constrained Optimization: Exercise 10.3 #5**

Find the optimal point of f(r, p) = 2r² + rp - p² + p under the constraint g(r, p) = 2r + 3p = 1. Identify the output as a maximum or minimum using close points.

Solving the system of equations yields p = 0.375 and r = -0.0625. We must determine whether this is a maximum or a minimum.

**Constrained Optimization: Exercise 10.3 #5**

f(-0.0625, 0.375) = 0.21875. We select values of r close to -0.0625 and use the constraint g(r, p) = 1 to find the associated p values, giving the nearby points (-0.062, 0.37467) and (-0.063, 0.37533). Evaluating f at these points, we observe that both outputs are slightly larger than 0.21875, so f has a minimum (subject to the constraint) at r = -0.0625 and p = 0.375.
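The close-point check can be carried out directly: pick r values near the critical point, solve the constraint for p, and compare outputs. A minimal sketch:

```python
# Check the constrained extremum of f(r, p) = 2r^2 + rp - p^2 + p
# on the line 2r + 3p = 1 by comparing nearby points on the constraint.
def f(r, p):
    return 2*r**2 + r*p - p**2 + p

def p_on_line(r):
    # Solve the constraint 2r + 3p = 1 for p.
    return (1 - 2*r) / 3

r_star = -0.0625
f_star = f(r_star, p_on_line(r_star))  # 0.21875

# Both neighbors on the constraint give slightly larger outputs,
# so the critical point is a constrained minimum.
print(f_star < f(-0.062, p_on_line(-0.062)))  # True
print(f_star < f(-0.063, p_on_line(-0.063)))  # True
```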

**Constrained Optimization: Exercise 10.3 #5**

The contour plot confirms the result: the constrained critical point at (-0.0625, 0.375) is a minimum.

**Method of Least Squares**

The method of least squares is the procedure used to find the best-fitting line based on the criterion that the sum of squared errors is as small as possible. The linear model obtained by this method is called the least-squares line.

**Method of Least Squares: Example**

The number of investment choices for people who invest in 401(k) plans at their place of employment is given by the data. A scatter plot and linear model are also shown.

| Year | 401(k) Plans |
|------|--------------|
| 1990 | 3.5 |
| 1991 | 4.0 |
| 1992 | 4.2 |
| 1993 | 4.8 |

**Method of Least Squares: Example**

The deviation values (d1, d2, d3, d4) measure the error between the model and the data points. We need to minimize the sum of the squared deviations, f(a, b) = d1² + d2² + d3² + d4².

**Method of Least Squares: Example**

Finding fa and fb, setting them equal to zero, and solving the resulting system of equations yields a = 0.41 and b = 3.51. Since faa(0.41, 3.51) = 28 > 0 and the determinant of the second partials matrix is D = (28)(8) - 12² = 80 > 0, the error is a minimum.
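Setting fa = 0 and fb = 0 produces the normal equations, which can be solved directly. A sketch, assuming x is coded as years since 1990 (consistent with the slope 0.41):

```python
# Fit y = ax + b to the 401(k) data by solving the normal equations.
xs = [0, 1, 2, 3]           # years since 1990 (assumed coding)
ys = [3.5, 4.0, 4.2, 4.8]

n = len(xs)
sx = sum(xs); sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# f_a = 0 and f_b = 0 give:  sxx*a + sx*b = sxy  and  sx*a + n*b = sy.
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept
print(round(a, 2), round(b, 2))  # 0.41 3.51
```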

**Method of Least Squares: Exercise 10.4 #3**

Find the linear function y = ax + b that best fits the data.

| x | 0 | 10 | 20 |
|---|---|----|----|
| y | 3 | 2  | 1  |

fa = 1000a + 60b - 80 = 0
fb = 60a + 6b - 12 = 0

Solving the system yields a = -0.1 and b = 3.
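The system above is linear in a and b, so it can be solved by Cramer's rule and the fit checked against the data. A sketch (the data coding x = 0, 10, 20 matches the coefficients in fa and fb):

```python
# Solve the normal equations for this exercise:
#   1000a + 60b = 80
#     60a +  6b = 12
det = 1000 * 6 - 60 * 60          # 2400
a = (80 * 6 - 60 * 12) / det      # -240/2400 = -0.1
b = (1000 * 12 - 80 * 60) / det   # 7200/2400 = 3.0

# The line y = -0.1x + 3 passes through every data point,
# so the sum of squared errors is 0 (a perfect fit).
sse = sum((a * x + b - y)**2 for x, y in [(0, 3), (10, 2), (20, 1)])
print(a, b, sse)
```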

**Method of Least Squares: Exercise 10.4 #3**

So y = -0.1x + 3. Since f(-0.1, 3) = 0, the linear model is a perfect fit.
