
1 Receding Horizon Control for Constrained Portfolio Optimization. James A. Primbs, Management Science and Engineering, Stanford University (with Chang Hwan Sung). Stanford-Tsukuba/WCQF Workshop, Stanford University, March 8, 2007.

2 Outline: Motivation and Background; Receding Horizon Control; Semi-Definite Programming Formulation; Numerical Examples; Conclusions and Future Work.

3 A Motivational Problem from Finance: Index Tracking. Let $S_i(k+1) = S_i(k)\left(1 + \mu_i + \sigma_i w_i(k)\right)$, $i = 1, \ldots, n$, represent the stochastic dynamics of the prices of n stocks, where $\mu_i$ is the expected return per period, $\sigma_i$ is the volatility per period, and $w_i$ is an iid noise term with mean zero and standard deviation 1. An index is a weighted average of the n stocks: $I(k) = \sum_{i=1}^{n} a_i S_i(k)$. The index tracking problem is to trade $l < n$ of the stocks and a risk-free bond in order to track the index "as closely as possible".

4 A Motivational Problem from Finance: Index Tracking. If we let $u_i(k)$ denote the dollar amount invested in $S_i(k)$ for $i = 1, \ldots, l$, and we let $r_f$ denote the risk-free rate of interest, then the total wealth $W(k)$ of this strategy follows the dynamics: $W(k+1) = (1 + r_f)W(k) + \sum_{i=1}^{l} u_i(k)\left(\mu_i - r_f + \sigma_i w_i(k)\right)$. One possible measure of how closely we track the index is given by an infinite horizon discounted quadratic cost: $J = E\left[\sum_{k=0}^{\infty} \beta^k \left(W(k) - I(k)\right)^2\right]$, where $\beta < 1$ is a discount factor.
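As a hedged illustration of these dynamics, here is a minimal simulation sketch. The parameter values (`mu`, `sigma`, the index weights `a`, and the fixed allocation `u`) are hypothetical, and a real tracking policy would update `u` each period:

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, T = 5, 3, 52           # stocks, traded stocks, periods
mu = np.full(n, 0.002)       # assumed expected returns per period
sigma = np.full(n, 0.03)     # assumed volatilities per period
a = np.full(n, 1.0 / n)      # index weights (equally weighted here)
rf = 0.001                   # risk-free rate per period

S = np.full(n, 100.0)        # initial stock prices
W = 100.0                    # initial wealth
u = np.full(l, W / l)        # fixed dollar allocation (illustration only)

sq_err = []
for k in range(T):
    w = rng.standard_normal(n)                                # iid, mean 0, std 1
    W = (1 + rf) * W + u @ (mu[:l] - rf + sigma[:l] * w[:l])  # wealth dynamics
    S = S * (1 + mu + sigma * w)                              # price dynamics
    sq_err.append((W - a @ S) ** 2)                           # squared tracking error

print("mean squared tracking error:", np.mean(sq_err))
```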

5 A Motivational Problem from Finance: Index Tracking. Finally, it is quite common to require that constraints be satisfied, such as: limits on short selling (e.g., $u_i(k) \geq 0$); limits on wealth invested (e.g., $u_i(k) \leq c_i W(k)$); Value-at-Risk (e.g., $P(W(k) \leq W_{\min}) \leq \epsilon$); etc.

6 Index Tracking: $\min_{u} \; E\left[\sum_{k=0}^{\infty} \beta^k \left(W(k) - I(k)\right)^2\right]$ subject to: the wealth and price dynamics above, and constraints such as limits on short selling, limits on wealth invested, and Value-at-Risk.

7 Linear Systems with State and Control Multiplicative Noise: $\min_{u} \; E\left[\sum_{k=0}^{\infty} \beta^k \left(x(k)^T Q x(k) + u(k)^T R u(k)\right)\right]$ subject to: $x(k+1) = A x(k) + B u(k) + \sum_{j=1}^{q} \left(C_j x(k) + D_j u(k)\right) w_j(k)$ and quadratic expectation constraints, where the $w_j(k)$ are iid noise terms with mean zero and unit variance. The index tracking problem above is a special case of this form.
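For concreteness, a one-step simulation of this dynamics model might look as follows (a sketch; `A`, `B`, `C_list`, and `D_list` are assumed placeholders for the problem data):

```python
import numpy as np

def step(x, u, A, B, C_list, D_list, rng):
    """One step of x(k+1) = A x + B u + sum_j (C_j x + D_j u) w_j(k)."""
    w = rng.standard_normal(len(C_list))  # iid, mean zero, unit variance
    noise = sum((Cj @ x + Dj @ u) * wj
                for Cj, Dj, wj in zip(C_list, D_list, w))
    return A @ x + B @ u + noise
```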

8 The unconstrained version of the problem is known as the Stochastic Linear Quadratic (SLQ) problem. Its solution has been well studied: Wonham, '67, '68; Kleinman, '69; McLane, '71; Willems and Willems, '76; El Ghaoui, '95; Ait Rami and Zhou, '00; Yao et al., '01. The constrained version has received less attention...

9 Receding Horizon Control (also known as Model Predictive Control) has been quite successful for constrained deterministic systems. Can receding horizon control be a successful tool for (constrained) stochastic control as well? Related work: Dynamic resource allocation (Castanon and Wohletz, '02); Portfolio optimization (Herzog et al., '06; Herzog, '06); Dynamic hedging (Meindl and Primbs, '04); Supply chains (Seferlis and Giannelos, '04); Constrained linear systems (van Hessem and Bosgra, '01-'04); Stochastic programming approach (Felt, '03); Operations management (Chand, Hsu, and Sethi, '02).

10 Outline: Motivation and Background; Receding Horizon Control; Semi-Definite Programming Formulation; Numerical Examples; Conclusions and Future Work.

11 Ideally, one would solve the optimal infinite horizon problem. Instead, receding horizon control repeatedly solves finite horizon problems: from the current state, solve over a horizon of length N, implement the initial control action, then resolve from the next state.

12 Instead, receding horizon control repeatedly solves finite horizon problems: solve over horizon N, implement the initial control action, then resolve. So, RHC involves 2 steps: 1) Solve finite horizon optimizations on-line. 2) Implement the initial control action.
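A minimal sketch of this two-step loop; `solve_finite_horizon` stands in for the on-line optimization developed in the next section and is a hypothetical name:

```python
def receding_horizon_control(x0, N, T, solve_finite_horizon, step):
    """Run RHC for T periods from state x0 with prediction horizon N."""
    x, trajectory = x0, [x0]
    for k in range(T):
        u_plan = solve_finite_horizon(x, N)  # 1) solve on-line over horizon N
        x = step(x, u_plan[0])               # 2) implement only the initial action
        trajectory.append(x)
    return trajectory
```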

13 Outline: Motivation and Background; Receding Horizon Control; Semi-Definite Programming Formulation; Numerical Examples; Conclusions and Future Work.

14 Recall: receding horizon control repeatedly solves finite horizon problems. RHC involves 2 steps: 1) Solve finite horizon optimizations on-line. 2) Implement the initial control action.

15 In receding horizon control, we consider a finite horizon optimal control problem. From the current state x(k), let: N denote the horizon length; $\bar{u}(i)$, $i = 0, \ldots, N-1$, denote the predicted control; and $\bar{x}(i)$, $i = 0, \ldots, N$, denote the predicted state, with $\bar{x}(0) = x(k)$. We will impose an open loop plus linear feedback structure on the predicted control: $\bar{u}(i) = v(i) + K(i)\bar{x}(i)$, where $v(i)$ is an open loop term and $K(i)$ is a feedback gain. Note that the predicted state $\bar{x}(i)$ is random, since it is driven by the multiplicative noise.

16 We will use quadratic expectation constraints (instead of probabilistic constraints) in the on-line optimizations. Probabilistic constraints involve knowing the entire distribution. In general, this is too difficult to calculate. Quadratic expectation constraints involve only the mean and covariance. If one is willing to resort to approximations, probabilistic constraints can be approximated with quadratic expectation constraints. Theoretical results can actually guarantee a form of probabilistic satisfaction, even though we use expectation constraints.
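To make the last claim concrete with a standard (not talk-specific) argument: for a positive semidefinite M, Markov's inequality gives $P(x^T M x \geq \alpha) \leq E[x^T M x] / \alpha$, so enforcing the expectation constraint $E[x^T M x] \leq \epsilon \alpha$ guarantees the probabilistic constraint $P(x^T M x \geq \alpha) \leq \epsilon$.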

17 Constraints will be in the form of quadratic expectation constraints on the predicted state and control, schematically $E\left[\bar{x}(i)^T M \bar{x}(i)\right] \leq \gamma$ and $E\left[\bar{u}(i)^T \tilde{M} \bar{u}(i)\right] \leq \tilde{\gamma}$ over the horizon. We denote this constraint set by C(x(k), N). Note that state-only constraints are not imposed at time i = 0, since $\bar{x}(0) = x(k)$ is already fixed.

18 Receding Horizon On-Line Optimization: $\min_{v(i), K(i)} \; E\left[\sum_{i=0}^{N-1} \beta^i \left(\bar{x}(i)^T Q \bar{x}(i) + \bar{u}(i)^T R \bar{u}(i)\right) + \beta^N \bar{x}(N)^T P \bar{x}(N)\right]$ subject to: the predicted dynamics, the structure $\bar{u}(i) = v(i) + K(i)\bar{x}(i)$, and the constraint set C(x(k), N). We denote the optimizing predicted state and control sequence by $\bar{x}^*(i)$, $\bar{u}^*(i)$. The receding horizon control policy for state x(k) is $u(k) = \bar{u}^*(0)$.

19 This problem has a linear-quadratic structure to it. Hence, it essentially depends on the mean and covariance of the controlled dynamics. For convenience, let $\mu(i) = E[\bar{x}(i)]$ and $\Sigma(i) = \mathrm{Cov}(\bar{x}(i))$. Under $\bar{u}(i) = v(i) + K(i)\bar{x}(i)$, the mean satisfies: $\mu(i+1) = (A + BK(i))\mu(i) + Bv(i)$; the covariance satisfies a corresponding recursion driven by the noise channels $(C_j + D_j K(i))\bar{x}(i) + D_j v(i)$.
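A sketch of these moment recursions in code; the covariance update below is derived from the dynamics above assuming the $w_j$ are iid across time and independent across channels, and all matrices are placeholders:

```python
import numpy as np

def propagate_moments(mu, Sigma, v, K, A, B, C_list, D_list):
    """One step of the mean/covariance recursion under u = v + K x."""
    F = A + B @ K                        # closed-loop mean dynamics
    mu_next = F @ mu + B @ v
    S = Sigma + np.outer(mu, mu)         # second moment E[x x^T]
    Sigma_next = F @ Sigma @ F.T
    for Cj, Dj in zip(C_list, D_list):   # noise channel: (G x + h) w_j
        G, h = Cj + Dj @ K, Dj @ v
        Sigma_next = Sigma_next + G @ S @ G.T + np.outer(G @ mu, h) \
                     + np.outer(h, G @ mu) + np.outer(h, h)
    return mu_next, Sigma_next
```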

20 Receding Horizon On-Line Optimization as an SDP (for q = 1): with a standard change of variables (e.g., optimizing over the moment variables and over $Y(i) = K(i)\Sigma(i)$ rather than K(i) directly), the cost and the quadratic expectation constraints become linear in the moments, and the covariance recursion can be expressed as a linear matrix inequality via a Schur complement. The on-line optimization is then a semi-definite program.
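The full SDP from the slide is not reproduced here; as a hedged sketch of the key structural fact, the snippet below shows how a quadratic expectation constraint becomes convex in the moment variables, since $E[x^T M x] = \mathrm{tr}(M\,\mathrm{Cov}(x)) + \mu^T M \mu$ (cvxpy; `M` and the bound are placeholders):

```python
import cvxpy as cp
import numpy as np

n = 2
M = np.eye(n)                           # placeholder constraint matrix
mu = cp.Variable(n)                     # mean of the predicted state
Sigma = cp.Variable((n, n), PSD=True)   # covariance of the predicted state

# E[x^T M x] = tr(M Sigma) + mu^T M mu: convex in (mu, Sigma)
expectation_constraint = cp.trace(M @ Sigma) + cp.quad_form(mu, M) <= 1.0

# Toy objective, just to make the program well-posed for illustration
prob = cp.Problem(cp.Minimize(cp.trace(Sigma)), [expectation_constraint])
prob.solve()
print(prob.status)
```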

21 By imposing structure in the on-line optimizations, we are able to: formulate them as semi-definite programs; use closed loop feedback over the prediction horizon; incorporate constraints in an expectation form.

22 Recall: receding horizon control repeatedly solves finite horizon problems. RHC involves 2 steps: 1) Solve finite horizon optimizations on-line. 2) Implement the initial control action.

23 Theoretical Results. Developing theoretical results for Stochastic Receding Horizon Control is an active research area. In particular, important questions concern stability (when appropriate), performance guarantees, and constraint satisfaction. I have developed results on these topics for special formulations of RHC, but I won't present them here.

24 Outline: Motivation and Background; Receding Horizon Control; Semi-Definite Programming Formulation; Numerical Examples; Conclusions and Future Work.

25 Example Problem: Dynamics, Cost, and Parameters (equations and values shown on slide).

26 Example Problem: State Constraint; Optimal Unconstrained Cost-to-go; Terminal Constraint; Horizon (values shown on slide).

27 Level Sets of the State and Terminal Constraints.

28 75 Random Simulations from Initial Condition [-0.3, 1.2]: Uncontrolled Dynamics.

29 75 Random Simulations from Initial Condition [-0.3, 1.2]: Optimal Unconstrained Dynamics.

30 75 Random Simulations from Initial Condition [-0.3, 1.2]: RHC Dynamics.

31 Means of the 75 random paths for different approaches. Red: RHC; Blue: Optimal Unconstrained; Green: Uncontrolled.

32 A Simple Index Tracking Example: Five stocks: IBM, 3M, Altria, Boeing, AIG. Means, variances, and covariances estimated from 15 years of weekly data beginning in 1990 from Yahoo! Finance. Initial prices and wealth assumed to be $100. Risk-free rate of 5%. Horizon of N = 5 with a time step equal to 1 month.
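A sketch of this estimation step in numpy; `prices` is a hypothetical array of weekly closing prices with columns ordered IBM, 3M, Altria, Boeing, AIG, and the weekly-to-monthly scaling by 4 is a simple approximation:

```python
import numpy as np

def estimate_moments(prices, weeks_per_period=4):
    """Estimate per-period return means and covariances from weekly closes.

    prices: array of shape (num_weeks, 5).
    """
    weekly = prices[1:] / prices[:-1] - 1.0                  # simple weekly returns
    mu = weekly.mean(axis=0) * weeks_per_period              # scale mean to ~1 month
    Sigma = np.cov(weekly, rowvar=False) * weeks_per_period  # scale covariance
    return mu, Sigma
```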

33 A Simple Index Tracking Example (continued): Five stocks: IBM, 3M, Altria, Boeing, AIG. The index is an equally weighted average of the 5 stocks; its initial value is assumed to be $100. The index will be tracked with a portfolio of the first 3 stocks: IBM, 3M, and Altria. We place a constraint that the fraction of wealth invested in 3M cannot exceed 10%.

34 Index: Green; Optimal Unconstrained: Cyan; RHC: Red.

35 Optimal Unconstrained Allocations. Blue: IBM; Red: 3M; Green: Altria.

36 RHC Allocations. Blue: IBM; Red: 3M; Green: Altria.

37 Outline: Motivation and Background; Receding Horizon Control; Semi-Definite Programming Formulation; Numerical Examples; Conclusions and Future Work.

38 Conclusions: By exploiting and imposing problem structure, constrained stochastic receding horizon control can be implemented in a computationally tractable manner for a number of problems. It appears to be a promising approach to solving many constrained portfolio optimization problems. Future Research: There are many interesting theoretical questions involving properties of this approach, especially concerning stability and performance. We are pursuing applications to other portfolio optimization problems and dynamic hedging, as well as coupling the approach with filtering methods.

