
1 DSGE Models and Optimal Monetary Policy
Andrew P. Blake

2 A framework of analysis
Typified by Woodford’s Interest and Prices
– Sometimes called DSGE models
– Also known as NNS (New Neoclassical Synthesis) models
Strongly micro-founded models
Prominent role for monetary policy
Optimising agents and policymakers

3 What do we assume?
The model is stochastic, linear and time invariant
The objective function can be approximated very well by a quadratic
The solutions are certainty equivalent
– Not always clear that they are
Agents (when they form them) have rational expectations or fixed-coefficient extrapolative expectations

4 Linear stochastic model
We consider a model in state space form:
u is a vector of control instruments, s a vector of endogenous variables, ε a shock vector
The model coefficients are in A, B and C
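The slide’s equation is an image that did not survive the transcript; a standard state-space form consistent with this description would be:

    s_{t+1} = A s_t + B u_t + C \varepsilon_{t+1}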

5 Quadratic objective function
Assume the following objective function:
Q and R are positive (semi-)definite symmetric matrices of weights
0 < ρ ≤ 1 is the discount factor
We take the initial time to be 0
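Again the equation itself is missing; a discounted quadratic loss matching the stated ingredients would be:

    \min_{\{u_t\}} \; E_0 \sum_{t=0}^{\infty} \rho^t \left( s_t' Q s_t + u_t' R u_t \right)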

6 How do we solve for the optimal policy?
We have two options:
– Dynamic programming
– Pontryagin’s minimum principle
Both are equivalent with non-anticipatory behaviour
Very different with rational expectations
We will require both to analyse optimal policy

7 Dynamic programming
Approach due to Bellman (1957)
Formulated the value function:
Recognised that it must have the structure:
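A reconstruction of the two missing equations, using the model and loss above: the Bellman equation and the conjectured quadratic structure of its solution,

    V(s_t) = \min_{u_t} \, E_t \left[ s_t' Q s_t + u_t' R u_t + \rho V(s_{t+1}) \right]

    V(s_t) = s_t' S s_t + z

where S is a symmetric matrix and z a constant that absorbs the shock variance.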

8 Optimal policy rule
First order condition (FOC) for u:
Use it to solve for the policy rule:
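In the reconstructed notation above, the FOC and the implied linear feedback rule would read:

    R u_t + \rho B' S \, E_t s_{t+1} = 0

    u_t = -F s_t, \qquad F = (R + \rho B' S B)^{-1} \rho B' S A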

9 The Riccati equation
This leaves us with an unknown matrix S
Collect terms from the value function:
Drop the constant z:
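Collecting terms and dropping z leaves the matrix Riccati equation; in this notation it is:

    S = Q + F' R F + \rho (A - B F)' S (A - B F)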

10 Riccati equation (cont.)
If we substitute in for F we can obtain:
A complicated matrix quadratic in S
Solved ‘backwards’ by iteration, perhaps by:
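Written out, the iteration is presumably the standard backward recursion (a reconstruction, starting from S_0 = Q):

    S_{k+1} = Q + \rho A' S_k A - \rho^2 A' S_k B (R + \rho B' S_k B)^{-1} B' S_k A

A minimal numerical sketch in Python/NumPy under the same assumptions (the function name and tolerances are illustrative, not from the slides):

    import numpy as np

    def solve_riccati(A, B, Q, R, rho, tol=1e-10, max_iter=10000):
        """Iterate the discounted Riccati equation until S converges."""
        S = Q.copy()
        for _ in range(max_iter):
            # F_k = (R + rho B'S B)^{-1} rho B'S A at the current iterate
            F = np.linalg.solve(R + rho * B.T @ S @ B, rho * B.T @ S @ A)
            S_new = Q + rho * A.T @ S @ A - rho * A.T @ S @ B @ F
            if np.max(np.abs(S_new - S)) < tol:
                return S_new, F  # value matrix S and feedback rule u = -F s
            S = S_new
        raise RuntimeError("Riccati iteration did not converge")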

11 Properties of the solution
‘Principle of optimality’
The optimal policy depends on the unknown S
S must satisfy the Riccati equation
Once you solve for S you can define the policy rule and evaluate the welfare loss
S does not depend on s or u, only on the model and the objective function
The initial values do not affect the optimal control

12 Lagrange multipliers
Approach due to Pontryagin (1957)
Formulated a system using the constraints as follows:
λ is a vector of Lagrange multipliers
The constrained objective function is:
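One common way to write the missing Lagrangian, with multipliers dated to match the constraints they price (a reconstruction; timing conventions vary):

    \mathcal{L} = E_0 \sum_{t=0}^{\infty} \rho^t \left[ s_t' Q s_t + u_t' R u_t + 2\rho \lambda_{t+1}' (A s_t + B u_t + C \varepsilon_{t+1} - s_{t+1}) \right]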

13 FOCs
Differentiate with respect to the three sets of variables:
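Under the timing convention assumed above, the three sets of FOCs would be:

    u_t: \quad R u_t + \rho B' E_t \lambda_{t+1} = 0

    s_t: \quad \lambda_t = Q s_t + \rho A' E_t \lambda_{t+1}

    \lambda_{t+1}: \quad s_{t+1} = A s_t + B u_t + C \varepsilon_{t+1}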

14 Hamiltonian system
Use the FOCs to yield the Hamiltonian system:
This system is saddlepath stable
Need to eliminate the co-states to determine the solution
NB: now in the form of a (singular) rational expectations model (discussed later)
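Eliminating u_t = -\rho R^{-1} B' E_t \lambda_{t+1} (assuming R is invertible) stacks the FOCs into a system of the form, one way of writing the missing equation:

    \begin{pmatrix} I & \rho B R^{-1} B' \\ 0 & \rho A' \end{pmatrix}
    \begin{pmatrix} s_{t+1} \\ E_t \lambda_{t+1} \end{pmatrix}
    =
    \begin{pmatrix} A & 0 \\ -Q & I \end{pmatrix}
    \begin{pmatrix} s_t \\ \lambda_t \end{pmatrix}
    +
    \begin{pmatrix} C \varepsilon_{t+1} \\ 0 \end{pmatrix}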

15 Solutions are equivalent
Assume that the solution to the saddlepath problem is:
Substitute into the FOCs to give:
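The natural guess for the missing saddlepath solution is that the co-states are linear in the states:

    \lambda_t = S s_t

Substituting into the FOC for u then gives u_t = -\rho R^{-1} B' S E_t s_{t+1}.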

16 Equivalence (cont.)
We can combine these with the model and eliminate s to give:
The same solution for S that we had before
Pontryagin and Bellman give the same answer
Norman (1974, IER) showed them to be stochastically equivalent
Kalman (1961) developed certainty equivalence

17 What happens with RE?
Modify the model to:
Now we have z as predetermined variables and x as jump variables
The model has a saddlepath structure on its own
Solved using Blanchard-Kahn etc.
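Partitioned between the two types of variable, the modified model would read (z_0 given, x_t free to jump; again a reconstruction):

    \begin{pmatrix} z_{t+1} \\ E_t x_{t+1} \end{pmatrix}
    = A \begin{pmatrix} z_t \\ x_t \end{pmatrix} + B u_t + C \varepsilon_{t+1}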

18 Bellman’s dedication
At the beginning of his book Dynamic Programming, Bellman dedicates it thus:
To Betty-Jo
Whose decision processes defy analysis

19 Control with RE
How do rational expectations affect the optimal policy?
– Somewhat unbelievably, no change
– The best policy is characterised by the same algebra
However, we need to be careful about the jump variables (and Betty-Jo)
We now obtain pre-determined values for the co-states λ
Why?

20 Pre-determined co-states
Look at the value function
Remember the reaction function is:
So the cost can be written as:
We can minimise the cost by choosing some co-states and letting x jump

21 Pre-determined co-states (cont.)
At time 0 this is minimised by:
We can rearrange the reaction function to:
– where … etc.

22 Pre-determined co-states (cont.)
Alternatively the value function can be written in terms of the x’s and the z’s as:
The loss is:
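A reconstruction of the algebra behind the last three slides: write s_t = (z_t', x_t')' and partition S conformably, so that the time-0 cost is

    s_0' S s_0 = z_0' S_{zz} z_0 + 2 z_0' S_{zx} x_0 + x_0' S_{xx} x_0

Minimising over the free jump x_0 gives x_0 = -S_{xx}^{-1} S_{xz} z_0, which is precisely the condition that the x-block of the co-state vector \lambda_0 = S s_0 is zero.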

23 Cost-to-go
At time 0, z_0 is predetermined
x_0 is not, and can take any value
In fact x_0 is a function of z_0 (and implicitly of u)
We can choose the value of λ_x at time 0 to minimise the cost
We choose it to be 0
This minimises the cost-to-go in period 0

24 Time inconsistency
This is true at time 0
Time passes, maybe just one period
Time 1 ‘becomes time 0’
The same optimality conditions apply
We should reset the co-states to 0
The optimal policy is time inconsistent

25 Different to non-RE
We established before that the non-RE solution did not depend on the initial conditions (or any z)
Now it directly does
Can we use the same solution methods?
– DP or LM?
– Yes, as long as we ‘re-assign’ the co-states
However, we are implicitly using the LM solution as it is ‘open-loop’: the policy depends directly on the initial conditions

26 Where does this fit in?
Originally established in the 1980s
– Clearest statement in Currie and Levine (1993)
– Re-discovered in the recent US literature
– Ljungqvist and Sargent, Recursive Macroeconomic Theory (2000, and new edition)
Compare with Stokey and Lucas

27 How do we deal with time inconsistency?
Why not use the ‘principle of optimality’?
Start at the end and work back
How do we incorporate this into the RE control problem?
– Assume expectations about the future are ‘fixed’ in some way
– Optimise subject to these expectations

28 A rule for future expectations
Assume that:
If we substitute this into the model we get:
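The assumed rule is missing from the transcript; the standard Oudiz-Sachs device is to fix expectations of future jumps as a time-invariant linear function of the future predetermined variables:

    E_t x_{t+1} = N z_{t+1}

Substituting this into the model and solving for the current jump yields a reaction function x_t = J z_t + K u_t, as used on the next slide.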

29 A rule for future expectations (cont.)
The ‘pre-determined’ model is:
Using the reaction function for x we get:
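Spelling this out under the same assumption, with the model partitioned conformably between z and x (and assuming the relevant inverse exists):

    x_t = J z_t + K u_t, \quad
    J = (N A_{12} - A_{22})^{-1} (A_{21} - N A_{11}), \quad
    K = (N A_{12} - A_{22})^{-1} (B_2 - N B_1)

    z_{t+1} = (A_{11} + A_{12} J) z_t + (B_1 + A_{12} K) u_t + \text{shocks}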

30 Dynamic programming solution
To calculate the best policy we need to make assumptions about leadership
What is the effect on x of changes in u?
If we assume no leadership, it is zero
Otherwise it is K, and we need to use:
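In terms of the reaction function x_t = J z_t + K u_t above:

    \partial x_t / \partial u_t = 0 \quad \text{(no leadership: feedback Nash)}

    \partial x_t / \partial u_t = K \quad \text{(intra-period leadership: feedback Stackelberg)}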

31 Dynamic programming (cont.)
FOC for u under leadership:
where:
This policy must be time consistent
It only uses intra-period leadership
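A hedged reconstruction of that FOC: partition Q conformably with (z, x), let \hat{A} = A_{11} + A_{12} J and \hat{B} = B_1 + A_{12} K, and let S now denote the value-function matrix for the predetermined states only. Taking \partial x_t / \partial u_t = K into account, minimising the period loss plus the discounted continuation value gives:

    u_t = -\tilde{F} z_t, \qquad
    \tilde{F} = (R + K' Q_{xx} K + \rho \hat{B}' S \hat{B})^{-1} (K' Q_{xz} + K' Q_{xx} J + \rho \hat{B}' S \hat{A})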

32 Dynamic programming (cont.)
This is known in the dynamic game literature as feedback Stackelberg
We also need to solve for S
– Substitute in using the relations above
We can also assume that x is unaffected by u
– This gives the feedback Nash equilibrium
Developed by Oudiz and Sachs (1985)

33 Dynamic programming (cont.)
The key assumption is that we condition on a rule for expectations
We could instead condition on a time path (LM)
Time consistent by construction
– Principle of optimality
Many other policies have similar properties
Stochastic properties now matter

34 Time consistency
These are not the only time consistent solutions
We could also use Lagrange multipliers
DP is not only time consistent, it is subgame perfect
– A much stronger requirement
– See Blake (2004) for discussion

35 What’s new with DSGE models?
Woodford and others have derived welfare loss functions that are quadratic and depend only on the variances of inflation and output
These are approximations to the true social utility functions
We can apply LQ control as above to these models
Parameters of the model appear in the loss function and vice versa (e.g. the discount factor)
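The canonical example (Woodford, Interest and Prices) is a loss of the form

    E_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda \tilde{y}_t^2 \right)

where \pi is inflation, \tilde{y} the output gap, \beta the households’ discount factor and \lambda a relative weight built from structural parameters, which is how model parameters end up in the loss function.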

36 DSGE models in WinSolve
Can set up micro-founded models
Can set up micro-founded loss functions
Can explore optimal monetary policy
– Time inconsistent
– Time consistent
– Taylor-type approximations
Let’s do it!

