1 Chapter 7: Applications to Marketing
State equation: sales expressed in terms of advertising (the control variable)
Objective: profit maximization
Constraints: advertising levels must be nonnegative
2 The Nerlove-Arrow Advertising Model
Let G(t) ≥ 0 denote the stock of goodwill at time t:
dG/dt = u − δG, G(0) = G0, (7.1)
where u = u(t) ≥ 0 is the advertising effort at time t, measured in dollars per unit time, and δ > 0 is the rate at which goodwill depreciates. Sales S are given by
S = S(p, G, Z), (7.2)
where p is the price and Z denotes other exogenous variables. Assuming the rate of total production costs is c(S), we can write the total revenue net of production costs as
R(p, G, Z) = pS(p, G, Z) − c(S); (7.3)
the revenue net of advertising expenditure is therefore R − u.
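As a quick numerical sketch of the state equation (7.1), the goodwill dynamics can be integrated with a simple Euler scheme; all parameter values below are hypothetical, chosen only for illustration.

```python
# Goodwill dynamics (7.1), dG/dt = u - delta*G, integrated with a simple
# Euler scheme; all parameter values are hypothetical, for illustration.
def simulate_goodwill(u, G0, delta, dt=0.01, T=10.0):
    """Integrate dG/dt = u(t) - delta*G from G(0) = G0; u is a function of t."""
    G, t, path = G0, 0.0, [G0]
    while t < T - 1e-12:
        G += dt * (u(t) - delta * G)
        t += dt
        path.append(G)
    return path

# A constant advertising rate drives goodwill toward the level u/delta.
path = simulate_goodwill(u=lambda t: 2.0, G0=0.0, delta=0.5)
print(path[-1])  # approaches 2.0/0.5 = 4.0
```

Note that with constant u the goodwill stock converges to the steady state u/δ, which foreshadows the turnpike analysis below.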
3 The firm wants to maximize the present value of net revenue streams discounted at a fixed rate ρ > 0, i.e.,
max_{p, u ≥ 0} { J = ∫₀^∞ e^{−ρt} [R(p, G, Z) − u] dt } (7.4)
subject to (7.1). Since the only place that p occurs is in the integrand, we can maximize J by first maximizing R with respect to price p holding G fixed, and then maximizing the result with respect to u. Thus,
∂R/∂p = S + p ∂S/∂p − c′(S) ∂S/∂p = 0, (7.5)
which implicitly gives the optimal price p(G, Z).
4 Defining η = −(p/S) ∂S/∂p as the elasticity of demand with respect to price, we can rewrite condition (7.5) as
p = η c′(S) / (η − 1),
which is the usual price formula for a monopolist, known sometimes as the Amoroso-Robinson relation. In words, the formula means that the marginal revenue p(1 − 1/η) must equal the marginal cost c′(S); see, e.g., Cohen and Cyert (1965, p. 189). Defining π(G, Z) = max_p R(p, G, Z), the objective function in (7.4) can be rewritten as
max_{u ≥ 0} { J = ∫₀^∞ e^{−ρt} [π(G, Z) − u] dt }.
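The Amoroso-Robinson markup can be checked with a small numerical example (the cost and elasticity figures are hypothetical):

```python
# Amoroso-Robinson relation: with price elasticity eta > 1 and marginal
# cost c'(S), the optimal monopoly price is p = eta * c'(S) / (eta - 1).
# The numbers below are hypothetical, for illustration only.
def amoroso_robinson_price(marginal_cost, eta):
    if eta <= 1.0:
        raise ValueError("formula requires elastic demand, eta > 1")
    return eta * marginal_cost / (eta - 1.0)

p = amoroso_robinson_price(marginal_cost=3.0, eta=4.0)
print(p)                      # 4.0: a markup of eta/(eta-1) = 4/3 over cost
print(p * (1.0 - 1.0 / 4.0))  # 3.0: marginal revenue equals marginal cost
```

The second line verifies the interpretation in the text: marginal revenue p(1 − 1/η) coincides with marginal cost.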
5 For convenience, we assume Z to be a given constant and restate the optimal control problem which we have just formulated:
max_{u ≥ 0} { J = ∫₀^∞ e^{−ρt} [π(G) − u] dt }
subject to
dG/dt = u − δG, G(0) = G0.
6 Solution by the Maximum Principle
The adjoint variable λ(t) is the shadow price associated with the goodwill at time t. Thus, the Hamiltonian in (7.8),
H = π(G) − u + λ(u − δG),
can be interpreted as the dynamic profit rate, which consists of two terms: (i) the current net profit rate π(G) − u and (ii) the value λ dG/dt = λ(u − δG) of the new goodwill created by advertising at rate u.
7 Equation (7.9) corresponds to the usual equilibrium relation for investment in capital goods; see Arrow and Kurz (1970) and Jacquemin (1973). It states that the marginal opportunity cost ρλ of investment in goodwill should equal the sum of the marginal profit ∂π/∂G from increased goodwill and the capital gain dλ/dt − δλ:
ρλ = ∂π/∂G + dλ/dt − δλ.
8 Defining β = (G/S) ∂S/∂G as the elasticity of demand with respect to goodwill and using (7.3), (7.5), and (7.9), we can derive (see Exercise 7.3)
∂π/∂G = β p S / (η G). (7.11)
We use (3.74) to obtain the optimal long-run stationary equilibrium or turnpike {Ḡ, ū, λ̄}. That is, we obtain λ̄ = 1 from (7.8) by using ∂H/∂u = 0. We then set dλ/dt = 0 and λ = 1 in (7.9). Finally, from (7.11) and (7.9), the singular level Ḡ can be obtained as
Ḡ = β p S / [η(ρ + δ)].
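The turnpike formula is a one-line computation; the parameter values below are hypothetical and serve only to show the units working out.

```python
# Turnpike goodwill level in the Nerlove-Arrow model:
# G_bar = beta * p * S / (eta * (rho + delta)).
# All parameter values below are hypothetical, for illustration only.
def goodwill_turnpike(beta, p, S, eta, rho, delta):
    return beta * p * S / (eta * (rho + delta))

G_bar = goodwill_turnpike(beta=0.2, p=10.0, S=100.0, eta=2.0, rho=0.1, delta=0.15)
print(G_bar)  # 0.2*10*100 / (2*0.25) = 400.0
```

Note how Ḡ rises with the goodwill elasticity β and falls with the discount rate ρ and the decay rate δ, as the formula suggests.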
9 The property of Ḡ is that the optimal policy is to go to Ḡ as fast as possible. If G0 < Ḡ, it is optimal to jump instantaneously to Ḡ by applying an appropriate impulse at t = 0 and then set u*(t) = ū = δḠ for t > 0. If G0 > Ḡ, the optimal control is u*(t) = 0 until the stock of goodwill depreciates to the level Ḡ, at which time the control switches to ū = δḠ and stays at this level to maintain the stock of goodwill at Ḡ. See Figure 7.1.
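The G0 > Ḡ branch of this policy can be simulated directly (a rough sketch with hypothetical parameters; the impulse case G0 < Ḡ is omitted since it requires an instantaneous jump):

```python
# Sketch of the optimal Nerlove-Arrow policy for G0 > G_bar, with
# hypothetical parameters: u* = 0 until goodwill decays to the turnpike
# level G_bar, then u* = u_bar = delta*G_bar keeps G at G_bar.
def optimal_u(G, G_bar, delta, eps=1e-9):
    return delta * G_bar if G <= G_bar + eps else 0.0

def simulate(G0, G_bar, delta, dt=0.001, T=20.0):
    G = G0
    for _ in range(int(T / dt)):
        G += dt * (optimal_u(G, G_bar, delta) - delta * G)
    return G

print(simulate(G0=8.0, G_bar=4.0, delta=0.5))  # settles at the turnpike level 4.0
```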
10 Figure 7.1: Optimal Policies in the Nerlove-Arrow Model
11 For a time-dependent Z, however, Ḡ will be a function of time. To maintain this level Ḡ(t) of goodwill, the required control is
ū(t) = dḠ/dt + δḠ(t).
If Ḡ(t) is decreasing sufficiently fast, then ū(t) may become negative and thus infeasible. If ū(t) ≥ 0 for all t, then the optimal policy is as before. However, suppose ū(t) is infeasible in the interval [t1, t2] shown in Figure 7.2. In such a case, it is feasible to set u(t) = ū(t) for t < t1; at t = t1 (which is point A in Figure 7.2) we can no longer stay on the turnpike and must set u(t) = 0 until we hit the turnpike again (at point B in Figure 7.2). However, such a policy is not necessarily optimal.
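The feasibility check is easy to carry out numerically. The target path Ḡ(t) below is entirely hypothetical (a seasonal cosine), chosen so that ū(t) does turn negative on part of the period:

```python
import math

# A hypothetical time-dependent turnpike G_bar(t): maintaining it requires
# u_bar(t) = G_bar'(t) + delta*G_bar(t), which turns negative (infeasible)
# wherever G_bar falls quickly.
delta = 0.5
G_bar = lambda t: 4.0 + 3.0 * math.cos(t)
G_bar_dot = lambda t: -3.0 * math.sin(t)
u_bar = lambda t: G_bar_dot(t) + delta * G_bar(t)

grid = [k / 10.0 for k in range(63)]           # roughly one period [0, 2*pi]
infeasible = [t for t in grid if u_bar(t) < 0.0]
print(len(infeasible) > 0)  # True: u_bar < 0 on part of the period
```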
12 Figure 7.2: A Case of a Time-Dependent Turnpike and the Nature of Optimal Control
13 For instance, suppose we leave the turnpike at point C, anticipating the infeasibility at point A. The new path CDEB may be better than the old path CAB. Roughly, the reason this may happen is that path CDEB is "closer" to the turnpike than CAB. The picture in Figure 7.2 illustrates such a case. The optimal policy is the one that is "closer" to the turnpike. This discussion will become clearer in Section 7.2.2, when a similar situation arises in connection with the Vidale-Wolfe model. For further details, see Sethi (1977b) and Breakwell (1968).
14 A Nonlinear Extension
Since c″(u) > 0, we can invert c′ to solve (7.16), c′(u) = λ, for u as a function of λ. Thus,
u = f1(λ) := (c′)^{−1}(λ). (7.17)
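As a concrete (hypothetical) instance of the inversion in (7.16)-(7.17), take the convex advertising cost c(u) = u²/2, so that c′(u) = u and the inverse is the identity:

```python
# Illustration of (7.16)-(7.17) with a hypothetical convex advertising cost
# c(u) = u**2/2: then c'(u) = u, and inverting c'(u) = lam gives
# u = f1(lam) = lam.
def f1(lam):
    """(c')^{-1} for the hypothetical cost c(u) = u**2 / 2."""
    return lam

# If lambda(t) increases along a path, u(t) = f1(lambda(t)) increases too.
us = [f1(lam) for lam in (0.5, 1.0, 1.5)]
print(us == sorted(us))  # True
```

The monotonicity of f1, which holds for any strictly convex c, is what lets the text read off the behavior of u(t) from the behavior of λ(t) along the saddle path.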
16 Figure 7.3: Phase Diagram of System (7.18) for Problem (7.13)
17 Because of these conditions, it is clear that for a given G0, a choice of λ0 such that (λ0, G0) is in Region II or III will not lead to a path converging to the turnpike point (λ̄, Ḡ). On the other hand, the choice of (λ0, G0) in Region I when G0 > Ḡ, or of (λ0, G0) in Region IV when G0 < Ḡ, can give a path that converges to (λ̄, Ḡ). From a result in Coddington and Levinson (1955), it can be shown that, at least in the neighborhood of (λ̄, Ḡ), there exists a locus of optimum starting points (λ0, G0). Given G0 > Ḡ, we choose λ0 on the saddle point path in Region I of Figure 7.3. Clearly, the initial control is u*(0) = f1(λ0). Furthermore, λ(t) is increasing and, by (7.17), u(t) is increasing, so that in this case the optimal policy is to advertise at a low rate initially
18 and gradually increase advertising to the turnpike level ū = f1(λ̄). If G0 < Ḡ, it can be shown similarly that the optimal policy is to advertise most heavily in the beginning and gradually decrease advertising to the turnpike level as G approaches Ḡ.
Note that the approach to the equilibrium is no longer via bang-bang control as in the Nerlove-Arrow model. This, of course, is what one would expect when a model is made nonlinear with respect to the control variable u.
19 The Vidale-Wolfe Advertising Model
Now we can rewrite (7.19) in terms of the (normalized) sales rate x as
dx/dt = r u(1 − x) − δx,
where r is the advertising response constant and δ is the sales decay constant.
21 Solution Using Green's Theorem when Q is Large
To make use of Green's theorem, it is convenient to consider times t1 and t2, where 0 ≤ t1 < t2 ≤ T, and the problem:
max { J = ∫_{t1}^{t2} e^{−ρt} (πx − u) dt } (7.24)
subject to
dx/dt = r u(1 − x) − δx, x(t1) = x1, x(t2) = x2, 0 ≤ u ≤ Q.
To change the objective function in (7.24) into a line integral along any feasible arc Γ1 from (t1, x1) to (t2, x2) in (t, x)-space, as shown in Figure 7.4, we multiply the state equation by dt and obtain the formal relation:
u dt = (dx + δx dt) / [r(1 − x)].
22 which we substitute into the objective function (7.24). Thus,
J_{Γ1} = ∫_{Γ1} e^{−ρt} { [πx − δx/(r(1 − x))] dt − dx/[r(1 − x)] }.
Consider another feasible arc Γ2 from (t1, x1) to (t2, x2) lying above Γ1, as shown in Figure 7.4. Let Γ = Γ1 − Γ2, where Γ is a simple closed curve traversed in the counterclockwise direction. That is, Γ goes along Γ1 in the direction of its arrow and along Γ2 in the direction opposite its arrow. We now have
J_Γ = J_{Γ1} − J_{Γ2}. (7.27)
24 Since Γ is a simple closed curve, we can use Green's theorem to express J_Γ as an area integral over the region R enclosed by Γ. Thus, treating x and t as independent variables, we can write
J_Γ = ∬_R e^{−ρt} [ ρ/(r(1 − x)) + δ/(r(1 − x)²) − π ] dt dx. (7.28)
Denote the term in brackets of the integrand of (7.28) by
I(x) = ρ/(r(1 − x)) + δ/(r(1 − x)²) − π. (7.29)
25 Note that the sign of the integrand is the same as the sign of I(x).
Lemma 7.1 (Comparison Lemma). Let Γ1 and Γ2 be the lower and upper feasible arcs as shown in Figure 7.4. If I(x) ≥ 0 for all (x, t) ∈ R, then the lower arc Γ1 is at least as profitable as the upper arc Γ2. Analogously, if I(x) ≤ 0 for all (x, t) ∈ R, then Γ2 is at least as profitable as Γ1.
Proof. If I(x) ≥ 0 for all (x, t) ∈ R, then J_Γ ≥ 0 from (7.28) and (7.29). Hence from (7.27), J_{Γ1} ≥ J_{Γ2}. The proof of the other statement is similar.
26 To make use of this lemma to find the optimal control for the problem stated in (7.23), we need to find regions where I(x) is positive and where it is negative. For this, note first that I(x) is an increasing function of x in [0, 1). Solving I(x) = 0 gives the value of x above which I(x) is positive and below which I(x) is negative. Since I(x) is quadratic in 1/(1 − x), we can use the quadratic formula (see Exercise 7.15) to get
x̄ = 1 − w, where w = [ρ ± sqrt(ρ² + 4rπδ)] / (2rπ) solves rπw² − ρw − δ = 0.
To keep x in the interval [0, 1], we must choose the positive sign before the radical, so that
x̄ = 1 − [ρ + sqrt(ρ² + 4rπδ)] / (2rπ).
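The root can be checked numerically (parameter values hypothetical): with w = 1 − x, I(x) = 0 reduces to the quadratic rπw² − ρw − δ = 0, and the positive root should make I vanish.

```python
import math

# Solving I(x) = 0 with hypothetical parameter values: with w = 1 - x,
# I(x) = 0 is the quadratic r*pi*w**2 - rho*w - delta = 0, whose positive
# root gives x_bar = 1 - (rho + sqrt(rho**2 + 4*r*pi*delta)) / (2*r*pi).
def x_bar(r, pi, rho, delta):
    return 1.0 - (rho + math.sqrt(rho**2 + 4.0 * r * pi * delta)) / (2.0 * r * pi)

r, pi_, rho, delta = 1.0, 2.0, 0.1, 0.2
xb = x_bar(r, pi_, rho, delta)
I = lambda x: rho / (r * (1.0 - x)) + delta / (r * (1.0 - x) ** 2) - pi_

print(0.0 < xb < 1.0)     # True, since r*pi > rho + delta here
print(abs(I(xb)) < 1e-9)  # True: x_bar is a root of I
```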
27 The optimal x must be nonnegative, so we have
x^s = max{x̄, 0},
where the superscript s is used because this will turn out to be a singular trajectory. Since x^s is nonnegative, the control that maintains x at the level x^s is
u^s = δx^s / [r(1 − x^s)].
Note that x^s < 1, and x^s > 0 if, and only if, rπ > ρ + δ. Furthermore, the firm is better off with larger π and r, and smaller δ and ρ. Thus rπ/(ρ + δ) represents a measure of favorable circumstances.
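The maintenance control follows from setting dx/dt = 0 in the state equation; a small check with hypothetical numbers:

```python
# Singular control (hypothetical numbers): setting dx/dt = r*u*(1-x) - delta*x
# to zero at x = x_s gives the maintenance level u_s = delta*x_s/(r*(1-x_s));
# the ratio r*pi/(rho + delta) > 1 is exactly the condition for x_s > 0.
def u_singular(x_s, r, delta):
    return delta * x_s / (r * (1.0 - x_s))

r, pi_, rho, delta = 1.0, 2.0, 0.1, 0.2
print(r * pi_ / (rho + delta) > 1.0)  # True: favorable circumstances
print(u_singular(0.5, r, delta))      # 0.2 maintains x at 0.5
```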
28 Figure 7.5: Optimal Trajectory for Case 1: x0 < x^s and x^s ≤ xT
29 Figure 7.6: Optimal Trajectory for Case 2: x0 ≥ x^s and x^s ≤ xT
30 Figure 7.7: Optimal Trajectory for Case 3: x0 ≤ x^s and x^s ≥ xT
31 Figure 7.8: Optimal Trajectory for Case 4: x0 ≥ x^s and x^s ≥ xT
33 Theorem 7.1. Let T be large and let xT be reachable from x0. For the Cases 1-4 of inequalities relating x0 and xT to x^s, the optimal trajectories are given in Figures 7.5-7.8, respectively.
Proof. We give details for Case 1 only. The proofs for the other cases are similar. Figure 7.9 shows the optimal trajectory of Figure 7.5 together with an arbitrarily chosen feasible trajectory, shown dotted. It should be clear that the dotted trajectory cannot cross the arc from x0 to C, since u = Q on that arc. Similarly, the dotted trajectory cannot cross the arc from G to xT, because u = 0 on that arc.
34 We subdivide the interval [0, T] into subintervals over which the dotted arc is either above, below, or identical to the solid arc. In Figure 7.9 these subintervals are [0, d], [d, e], [e, f], and [f, T]. Because I(x) is positive for x > x^s and negative for x < x^s, the regions enclosed by the two trajectories have been marked with a + or − sign depending on whether I(x) is positive or negative on the region. By Lemma 7.1, the solid arc is better than the dotted arc in the subintervals [0, d], [d, e], and [f, T]; in the interval [e, f], they have identical values. Hence the dotted trajectory is inferior to the solid trajectory. This proof can be extended to any (countable) number of crossings of the trajectories; see Sethi (1977b).
35 Theorem 7.2. Let T be small, i.e., T < t1 + t2, and let xT be reachable from x0. For the two possible Cases 1 and 2 of inequalities relating x0 and xT to x^s, the optimal trajectories are given in Figures 7.10 and 7.11, respectively.
Proof. The requirement of feasibility when T is small rules out cases where x0 and xT are on the same side of, or equal to, x^s. The proofs of optimality of the trajectories shown in Figures 7.10 and 7.11 are similar to the proofs of the corresponding parts of Theorem 7.1, and are left as an exercise. In Figures 7.10 and 7.11, it is possible to have either t1 ≥ T or t2 ≥ T. Try sketching some of these special cases.
36 Figure 7.10: Optimal Trajectory When T is Small in Case 1: x0 < x^s and xT > x^s
37 Figure 7.11: Optimal Trajectory When T is Small in Case 2: x0 > x^s and xT < x^s
38 Figure 7.12: Optimal Trajectory for Case 2 of Theorem 7.1 for Q = ∞
39 Solution When Q is Small
The adjoint equation is
dλ/dt = λ(ρ + δ + ru) − π, (7.35)
where λ(T) is a constant, as in Row 2 of Table 3.1, that must be determined. Furthermore, the Lagrange multiplier μ in (7.34) must satisfy the usual nonnegativity and complementary slackness conditions.
40 From (7.33) we notice that the Hamiltonian is linear in the control u. So the optimal control is
u*(t) = bang[0, Q; W(t)], (7.37)
where the switching function is
W = rλ(1 − x) − 1. (7.38)
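The bang-bang rule can be sketched in a few lines (all numbers hypothetical): the Hamiltonian πx − u + λ[ru(1 − x) − δx] has coefficient W = rλ(1 − x) − 1 on u, so u* sits at the upper bound when W > 0 and at zero when W < 0.

```python
# The bang-bang rule (7.37), sketched with hypothetical numbers: the
# Hamiltonian pi*x - u + lam*(r*u*(1-x) - delta*x) is linear in u with
# coefficient W = r*lam*(1-x) - 1, so u* = Q when W > 0 and u* = 0 when W < 0.
def switching_W(lam, x, r):
    return r * lam * (1.0 - x) - 1.0

def bang(u_min, u_max, W):
    return u_max if W > 0.0 else u_min

Q, r = 1.0, 1.0
print(bang(0.0, Q, switching_W(lam=2.0, x=0.2, r=r)))  # W = 0.6 > 0, so u* = 1.0
print(bang(0.0, Q, switching_W(lam=0.5, x=0.5, r=r)))  # W = -0.75 < 0, so u* = 0.0
```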
41 Solution When T is Infinite
We now formulate the infinite horizon version of (7.23). When Q is small, i.e., Q < u^s, it is not possible to follow the turnpike x = x^s, because that would require u = u^s, which is not a feasible control. Intuitively, it seems clear that the "closest" stationary path which we can follow is the path x = x̄ obtained by setting dx/dt = 0 and u = Q, the largest possible control, in the state equation of (7.39).
42 This gives
x̄ = rQ / (rQ + δ).
The corresponding value λ̄ is obtained by setting u = Q and dλ/dt = 0 in (7.35) and solving for λ:
λ̄ = π / (ρ + δ + rQ).
More specifically, we state the following theorems, which give the turnpike and the optimal control when Q is small. To prove these theorems, we need to define two more quantities.
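Both stationary values are simple ratios; a numerical check with hypothetical parameters:

```python
# Q-small turnpike (hypothetical values): dx/dt = 0 with u = Q gives
# x_bar = r*Q/(r*Q + delta), and dlam/dt = 0 with u = Q in (7.35) gives
# lam_bar = pi/(rho + delta + r*Q).
def q_small_turnpike(r, Q, delta, rho, pi):
    x_bar = r * Q / (r * Q + delta)
    lam_bar = pi / (rho + delta + r * Q)
    return x_bar, lam_bar

x_bar, lam_bar = q_small_turnpike(r=1.0, Q=0.25, delta=0.25, rho=0.1, pi=2.0)
print(x_bar)    # 0.5
print(lam_bar)  # 2.0/0.6 = 3.333...
```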
43 Theorem 7.3. When Q is small, the quantities {x̄, Q, λ̄} form a turnpike. (7.44)
Proof. We show that the conditions in (3.73) hold for (7.44). The first two are obvious. By Exercise 7.28 we know that x̄ < x^s, which, from definitions (7.42) and (7.43), implies that the multiplier condition (7.36) holds, so the third condition of (3.73) also holds. Finally, because it follows from (7.38) and (7.43) that W̄ ≥ 0, the Hamiltonian maximizing condition of (3.73) holds with u = Q, and the proof is complete.
46 Theorem 7.4. When Q is small, the optimal control at time t is given by:
u*(t) = Q if x(t) ≤ x^s, and u*(t) = 0 if x(t) > x^s.
Proof. (a) Assume x0 ≤ x^s. We set u*(t) = Q for all t and note that λ(t) = λ̄ satisfies the adjoint equation (7.35) and the transversality condition (3.70). By Exercise 7.28 and the assumption that x0 ≤ x^s, we know that x(t) ≤ x^s for all t. The proof that (7.36) and (7.37) hold for all t relies on this fact and on an argument similar to the proof of the previous theorem.
47 Figure 7.13 shows the optimal trajectories for Case (a) and two different starting values x(0), one above and the other below x̄. Note that in this figure we are always in Case (a), since x(t) ≤ x^s for all t.
(b) Assume x0 > x^s. For this case we will show that the optimal trajectory is as shown in Figure 7.14, which is obtained by applying u = 0 until x = x^s and u = Q thereafter. Using this policy, we can find the time t1 at which x(t1) = x^s by solving the state equation in (7.39) with u = 0. This gives
t1 = (1/δ) ln(x0 / x^s).
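The switching time follows from the exponential decay of x under u = 0, and it can be verified numerically (the values of x0, x^s, and δ below are hypothetical):

```python
import math

# Switching time in Case (b), hypothetical numbers: with u = 0 the state
# decays as x(t) = x0*exp(-delta*t), so x(t1) = x_s gives
# t1 = (1/delta)*log(x0/x_s).
x0, x_s, delta = 0.8, 0.5, 0.25
t1 = math.log(x0 / x_s) / delta
print(abs(x0 * math.exp(-delta * t1) - x_s) < 1e-12)  # True: x(t1) = x_s
```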
48 Clearly for t ≥ t1, the policy u = Q is optimal because Case (a) applies. We now consider the interval [0, t1). Let τ be any time in this interval, as shown in Figure 7.14, and let x(τ) be the corresponding value of the state variable. Consider the following two-point boundary value problem in the interval [τ, t1]. In Exercise 7.31 you are asked to show that the switching function W(t) defined in (7.38) is negative in the interval [τ, t1) and that W(t1) = 0.
49 Therefore, by (7.37), the policy u = 0 used in deriving (7.46) satisfies the maximum principle. This policy "joins" the optimal policy after t1 because x(t1) = x^s.
In this book, the sufficiency of the transversality condition (3.70) was stated under the hypothesis that the derived Hamiltonian was concave; see Theorem 2.1. In the present example, this hypothesis does not hold. However, as mentioned in Section 7.2.3, for this simple bilinear problem it can be shown that (3.70) is sufficient for optimality. Because of the technical nature of this issue, we omit the details.