1
**Optimal Adaptive Execution of Portfolio Transactions**

Julian Lorenz Joint work with Robert Almgren (Banc of America Securities, NY)

2
**Execution of Portfolio Transactions**

Fund Manager to Broker/Trader: "Sell 100,000 Microsoft shares today!" Problem: market impact. Trading large volumes moves the price. How do we optimize the trade schedule over the day?

3
**Market Model**

Discrete times t_0 < t_1 < … < t_N; the stock price follows a random walk.

Sell program for an initial position of X shares. Execution strategy x = (x_0, x_1, …, x_N) s.t. x_0 = X, x_N = 0, where x_k = shares held at time t_k, i.e. sell x_0 − x_1 shares between t_0 and t_1, x_1 − x_2 between t_1 and t_2, … Pure sell program: x_0 ≥ x_1 ≥ … ≥ x_N = 0.

4
**Market Impact and Cost of a Strategy**

Selling n_k = x_{k−1} − x_k shares in [t_{k−1}, t_k] at a discount to S_{k−1}: with linear temporary market impact, the effective price is S_{k−1} − (η/τ)·n_k. Benchmark: pre-trade book value X·S_0. Cost C(x) = pre-trade book value − capture of the trade. C(x) is independent of S_0. (Figure: example trajectory with X = x_0 = 100, N = 10.)
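As a sketch, the mean and variance of the cost can be computed directly from the schedule under this model. The parameter values (σ, η, τ) below are illustrative assumptions, not values from the talk:

```python
import numpy as np

def cost_moments(x, sigma=0.3, eta=2.5e-6, tau=0.1):
    """Mean and variance of the implementation cost C for a schedule x.

    x: holdings x_0, ..., x_N with x_0 = X and x_N = 0.
    Linear temporary impact: shares trade at S_{k-1} - (eta/tau)*n_k, so
      E[C]   = (eta/tau) * sum_k n_k^2
      Var[C] = sigma^2 * tau * sum_{k=1}^{N-1} x_k^2
    """
    x = np.asarray(x, dtype=float)
    n = -np.diff(x)                          # shares sold in each period
    e_cost = (eta / tau) * np.sum(n**2)
    v_cost = sigma**2 * tau * np.sum(x[1:-1]**2)
    return e_cost, v_cost

X, N = 100_000, 10
immediate = np.array([X] + [0] * N, dtype=float)   # sell everything at once
linear = np.linspace(X, 0, N + 1)                  # equal-sized sales
e_imm, v_imm = cost_moments(immediate)
e_lin, v_lin = cost_moments(linear)
```

Immediate liquidation has zero variance but the maximal impact cost; the linear schedule has the minimal expected cost but carries price risk.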

5
**Trader's Dilemma**

The cost C(x) is a random variable! Minimal risk: obviously achieved by immediate liquidation. No risk, but high market-impact cost. Minimal expected cost: the linear strategy. But: high exposure to price volatility, i.e. high risk. Optimal trade schedules seek a risk-reward balance. (Figures: holdings x(t) over [0, T] for immediate liquidation and for the linear strategy.)

6
**Admissible Strategies**

Risk-reward tradeoff: mean-variance, with variance as the risk measure. In the E-V plane: immediate sale = minimal variance, linear strategy = minimal expected cost; efficient strategies lie on the frontier between them, admissible strategies above it. (Figure: E-V plane with admissible region and efficient frontier.)

7
**Almgren/Chriss Deterministic Trading (1/2)**

R. Almgren, N. Chriss: "Optimal execution of portfolio transactions", Journal of Risk (2000). Deterministic trading strategy ⇒ E[C] and Var[C] are functions of the decision variables (x_1, …, x_N).

8
**Almgren/Chriss Deterministic Trading (2/2)**

Almgren/Chriss trajectories: x_i deterministic, C(x) normally distributed ⇒ straightforward QP. Urgency controls curvature. (Figure: trajectories and E-V plane for T = 1.) Dynamic strategies: x_i = x_i(ξ_1, …, ξ_{i−1}) ⇒ by dynamic programming. We show: dynamic strategies improve (w.r.t. mean-variance)!
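For reference, in the continuous-time limit the Almgren/Chriss efficient trajectory has the well-known closed form x(t) = X·sinh(κ(T − t))/sinh(κT), with the urgency κ growing with risk aversion. A quick sketch (the concrete parameter values are assumptions):

```python
import numpy as np

def almgren_chriss_trajectory(X, N, T, kappa):
    """Holdings x(t_k) = X * sinh(kappa*(T - t_k)) / sinh(kappa*T),
    sampled at N+1 equally spaced times; kappa is the urgency parameter."""
    t = np.linspace(0.0, T, N + 1)
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

traj = almgren_chriss_trajectory(X=100_000, N=10, T=1.0, kappa=10.0)
# starts at the full position, ends flat, and is convex: selling is
# front-loaded, with higher kappa giving more curvature
```

As κ → 0 the trajectory approaches the linear (minimal expected cost) schedule; large κ approaches immediate liquidation.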

9
**Definitions**

Adapted trading strategy: x_i may depend on ξ_1, …, ξ_{i−1}. Admissible trading strategies: adapted strategies for X shares in N periods with expected cost at most c. Efficient trading strategies: "no other admissible strategy offers lower variance for the same level of expected cost."

10
**Tail of Efficient Strategies**

Suppose the strategy x is efficient. Conditional on ξ_1, define the "tail" of x, i.e. its restriction to the remaining periods. Note: x_1 is deterministic, but the tail may depend on ξ_1. Lemma: for all outcomes a of ξ_1, the tail is efficient ⇒ dynamic programming!

11
**Dynamic Programming (1/4)**

Define the value function V_k(x, c) = minimal variance to sell x shares in k periods with expected cost at most c. Optimal Markovian one-step control + optimal strategies for k − 1 periods ⇒ optimal strategies for k periods. We are ultimately interested in V_N(X, c). For value functions of plain expectation type, DP is straightforward. Here: variance in the value function & an expected-cost budget in the terminal constraint … ?
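To illustrate why the plain expectation case is straightforward: minimizing only E[C] = (η/τ)·Σ n_k² by the backward recursion J_k(x) = min_{x′ ≤ x} (η/τ)(x − x′)² + J_{k−1}(x′) gives equal-sized sales, with closed form J_k(x) = (η/τ)·x²/k. A grid sketch (parameter values assumed):

```python
import numpy as np

eta, tau, X, N = 2.5e-6, 0.1, 100_000, 10
grid = np.linspace(0.0, X, 201)            # discretized holdings
J = (eta / tau) * grid**2                  # k = 1: must sell everything now
for k in range(2, N + 1):
    # one-step impact cost of moving from x (rows) to x' (columns),
    # plus the cost-to-go J_{k-1}(x')
    step = (eta / tau) * (grid[:, None] - grid[None, :])**2 + J[None, :]
    step[grid[None, :] > grid[:, None]] = np.inf   # pure sell program: x' <= x
    J = step.min(axis=1)

# J[-1] now equals the closed form J_N(X) = (eta/tau) * X**2 / N
```

With variance in the objective and a cost budget in the constraint, no such one-dimensional recursion applies directly; that is exactly the difficulty the following slides resolve.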

12
**Dynamic Programming (2/4)**

We want to determine V_k(x, c). Situation: k periods and x shares left, limit for the expected cost is c, current stock price S, next price innovation ξ ~ N(0, σ²). Construct an optimal strategy for k periods: (1) in the current period, sell x − x′ shares; (2) use an efficient strategy for the remaining k − 1 periods, specified by its expected cost z(ξ). Note: the remainder strategy must be deterministic at its start, but when we begin it, the outcome of ξ is known, i.e. we may choose z depending on ξ.

13
**Dynamic Programming (3/4)**

⇒ The strategy is defined by the control x′ and the control function z(ξ). Conditional on ξ, the remainder has expected cost z(ξ) and minimal variance V_{k−1}(x′, z(ξ)). Using the laws of total expectation and variance, the problem becomes a one-step optimization over x′ and z(·).
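The variance decomposition used here is the law of total variance, Var[C] = E[Var[C | ξ]] + Var[E[C | ξ]]. A quick Monte Carlo sanity check with an arbitrary, purely illustrative control function z(ξ) and a remainder cost whose conditional standard deviation is known:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)           # next price innovation
z = 1.0 + 0.5 * np.tanh(xi)                 # illustrative control function z(xi)
cond_sd = 0.3                               # assumed sd of remainder cost given xi
c = z + cond_sd * rng.standard_normal(xi.size)

total = c.var()
decomposed = cond_sd**2 + z.var()           # E[Var[C|xi]] + Var[E[C|xi]]
# total and decomposed agree up to Monte Carlo error
```

The first term is the variance the remainder contributes on its own; the second is the variance induced by letting the target cost z respond to ξ.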

14
**Dynamic Programming (4/4)**

Theorem: V_k(x, c) = min over (x′, z(·)) of E[V_{k−1}(x′, z(ξ))] + Var[z(ξ)], subject to the current-period impact cost plus E[z(ξ)] being at most c. Control variable x′: new stock holding (i.e. sell x − x′ in this period). Control function z(ξ): targeted cost as a function of the next price change. ⇒ Solve recursively!

15
**Solving the Dynamic Program**

No closed-form solution. Difficulty for numerical treatment: we need to determine a control function z(·). Approximation: z(·) is piecewise constant ⇒ for a fixed partition, determine the values z_1, …, z_k. Nice convexity property. Theorem: in each step, the optimization problem is a convex constrained problem in {x′, z_1, …, z_k}.

16
**Behavior of Adaptive Strategy**

"Aggressive in the money." Theorem: at all times, the control function z(ξ) is monotone increasing. Recall: z(ξ) specifies the expected cost for the remainder as a function of the next price change. High expected cost = sell quickly (low variance); low expected cost = sell slowly (high variance). ⇒ If the price goes up (ξ > 0), sell faster in the remainder: spend part of the windfall gains on increased impact costs to reduce the total variance.
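A toy sketch of this monotone response (not the optimal control from the talk: the linear baseline, the tanh response, and β are all assumptions made for illustration). After an up-move the trader sells a larger fraction of the remaining position, after a down-move a smaller one:

```python
import numpy as np

def aim_schedule(X, N, xi, beta=0.5):
    """Toy 'aggressive in the money' schedule: scale the linear baseline
    selling fraction up after a positive price innovation, down after a
    negative one. xi[k] is the innovation observed in period k."""
    x, hold = float(X), [float(X)]
    for k in range(N):
        base = 1.0 / (N - k)                  # linear liquidation baseline
        adj = (1.0 + beta * np.tanh(xi[k - 1])) if k > 0 else 1.0
        frac = 1.0 if k == N - 1 else min(1.0, base * adj)
        x -= x * frac
        hold.append(x)
    return np.array(hold)

up = aim_schedule(100_000, 5, np.ones(5))     # price kept rising
down = aim_schedule(100_000, 5, -np.ones(5))  # price kept falling
# both liquidate fully, but the up path stays ahead of schedule throughout
```

On the rising path the schedule accepts extra impact cost (faster selling) in exchange for lower remaining variance, mirroring the monotonicity of z(ξ).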

17
**Numerical Example**

Respond only to up/down moves. Discretize the state space.

18
**Sample Trajectories of Adaptive Strategy**

Aggressive in the money …

19
**Family of New Efficient Frontiers**

Family of frontiers parametrized by the size of the trade X ("market power"). Sample cost PDFs: adaptive strategies vs. the Almgren/Chriss deterministic strategy. Larger improvement for large portfolios. (Figure: Almgren/Chriss frontier and improved frontiers; distribution plots obtained by Monte Carlo simulation.)
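The Monte Carlo machinery behind such distribution plots can be sketched as follows for a deterministic schedule (all parameter values are assumptions). The sample mean and standard deviation reproduce the closed-form moments E[C] = (η/τ)·Σ n_k² and Var[C] = σ²τ·Σ x_k²:

```python
import numpy as np

def simulate_costs(x, sigma, eta, tau, n_paths, seed=0):
    """Monte Carlo implementation costs of a deterministic schedule
    x_0..x_N under the random-walk model, measured against the
    pre-trade book value X*S_0 (so S_0 drops out of the cost)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = -np.diff(x)                            # shares sold per period
    impact = (eta / tau) * np.sum(n**2)        # deterministic impact cost
    xi = rng.standard_normal((n_paths, n.size - 1))
    # C = impact - sigma*sqrt(tau) * sum_{k=1}^{N-1} xi_k * x_k
    return impact - sigma * np.sqrt(tau) * xi @ x[1:-1]

costs = simulate_costs(np.linspace(100_000, 0, 11),
                       sigma=0.3, eta=2.5e-6, tau=0.1, n_paths=50_000)
```

For adaptive strategies the same loop applies, except that the holdings for each path are recomputed from the realized innovations instead of being fixed in advance.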

21
**More Cost Distributions**

22
**Extensions**

Non-linear impact functions. Multiple securities ("basket trading"). The dynamic programming approach is also applicable to other mean-variance problems, e.g. multiperiod portfolio optimization.

23
**Thank you very much for your attention! Questions?**
