
15. Dynamic Programming (Hsu, Lih-Hsing)

Dynamic programming
Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

15.1 Rod cutting
The rod-cutting problem is the following. Given a rod of length n inches and a table of prices p_i for i = 1, 2, …, n, determine the maximum revenue r_n obtainable by cutting up the rod and selling the pieces. Note that if the price p_n for a rod of length n is large enough, an optimal solution may require no cutting at all.

If an optimal solution cuts the rod into k pieces of lengths i_1, i_2, …, i_k, then
n = i_1 + i_2 + … + i_k
and the corresponding revenue is
r_n = p_{i_1} + p_{i_2} + … + p_{i_k}.
The sample price table:
length i  | 1  2  3  4  5   6   7   8   9   10
price p_i | 1  5  8  9  10  17  17  20  24  30

r_1 = 1 from solution 1 = 1 (no cuts),
r_2 = 5 from solution 2 = 2 (no cuts),
r_3 = 8 from solution 3 = 3 (no cuts),
r_4 = 10 from solution 4 = 2 + 2,
r_5 = 13 from solution 5 = 2 + 3,
r_6 = 17 from solution 6 = 6 (no cuts),
r_7 = 18 from solution 7 = 1 + 6 or 7 = 2 + 2 + 3,
r_8 = 22 from solution 8 = 2 + 6,
r_9 = 25 from solution 9 = 3 + 6,
r_10 = 30 from solution 10 = 10 (no cuts).

More generally, we can frame the values r_n for n ≥ 1 in terms of optimal revenues from shorter rods:
r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, …, r_{n-1} + r_1).
Optimal substructure: optimal solutions to a problem incorporate optimal solutions to related subproblems, which we may solve independently.

Recursive top-down implementation
CUT-ROD(p, n)
1  if n == 0
2      return 0
3  q = -∞
4  for i = 1 to n
5      q = max(q, p[i] + CUT-ROD(p, n - i))
6  return q
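In Python, a direct transcription of CUT-ROD might look like the following minimal sketch; the prices list (a hypothetical 1-indexed table with a dummy entry at index 0) encodes the sample price table above.

def cut_rod(p, n):
    # Naive top-down rod cutting; exponential time, for illustration only.
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # p[i] = price of a rod of length i
print(cut_rod(prices, 4))  # -> 10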

If you were to code up CUT-ROD in your favorite programming language and run it on your computer, you would find that once the input size becomes moderately large, your program would take a long time to run. The reason is that CUT-ROD solves the same subproblems over and over, so its running time grows exponentially in n.


Using dynamic programming for optimal rod cutting:
- top-down with memoization
- bottom-up method

MEMOIZED-CUT-ROD(p, n)
1  let r[0 … n] be a new array
2  for i = 0 to n
3      r[i] = -∞
4  return MEMOIZED-CUT-ROD-AUX(p, n, r)

MEMOIZED-CUT-ROD-AUX(p, n, r)
1  if r[n] ≥ 0
2      return r[n]
3  if n == 0
4      q = 0
5  else q = -∞
6      for i = 1 to n
7          q = max(q, p[i] + MEMOIZED-CUT-ROD-AUX(p, n - i, r))
8  r[n] = q
9  return q
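The same memoization scheme in Python might be sketched as follows, again assuming the 1-indexed prices list from the earlier sketch:

def memoized_cut_rod(p, n):
    # r[j] caches the best revenue for a rod of length j; -inf means "unknown".
    r = [float("-inf")] * (n + 1)
    return memoized_cut_rod_aux(p, n, r)

def memoized_cut_rod_aux(p, n, r):
    if r[n] >= 0:                      # subproblem already solved
        return r[n]
    if n == 0:
        q = 0
    else:
        q = float("-inf")
        for i in range(1, n + 1):
            q = max(q, p[i] + memoized_cut_rod_aux(p, n - i, r))
    r[n] = q                           # remember the answer
    return q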

BOTTOM-UP-CUT-ROD(p, n)
1  let r[0 … n] be a new array
2  r[0] = 0
3  for j = 1 to n
4      q = -∞
5      for i = 1 to j
6          q = max(q, p[i] + r[j - i])
7      r[j] = q
8  return r[n]
The running time of procedure BOTTOM-UP-CUT-ROD is Θ(n^2), because of its doubly nested loop structure.
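A Python rendering of the bottom-up method could look like this sketch; the doubly nested loop makes the Θ(n^2) running time plain:

def bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)              # r[0] = 0: a rod of length 0 earns nothing
    for j in range(1, n + 1):      # solve subproblems in increasing size
        q = float("-inf")
        for i in range(1, j + 1):  # try every size for the first piece
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]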

Subproblem graphs
To design either kind of dynamic-programming algorithm we need to know which subproblems depend on which others. The subproblem graph for the problem embodies exactly this information: one vertex per subproblem, with an edge from subproblem x to subproblem y if solving x requires the solution to y.

Reconstructing a solution
EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
1   let r[0 … n] and s[0 … n] be new arrays
2   r[0] = 0
3   for j = 1 to n
4       q = -∞
5       for i = 1 to j
6           if q < p[i] + r[j - i]
7               q = p[i] + r[j - i]
8               s[j] = i
9       r[j] = q
10  return r and s

PRINT-CUT-ROD-SOLUTION(p, n)
1  (r, s) = EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
2  while n > 0
3      print s[n]
4      n = n - s[n]
For the sample price table, the arrays computed are:
i    | 0  1  2  3  4   5   6   7   8   9   10
r[i] | 0  1  5  8  10  13  17  18  22  25  30
s[i] | 0  1  2  3  2   2   6   1   2   3   10
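The two procedures combine naturally in Python; a self-contained sketch with the sample price table inlined:

def extended_bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)
    s = [0] * (n + 1)              # s[j]: optimal size of the first piece for length j
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def print_cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    while n > 0:
        print(s[n])                # size of the next piece to cut off
        n -= s[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print_cut_rod_solution(prices, 7)  # prints 1, then 6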

Assembly-line scheduling
An automobile chassis enters each assembly line, has parts added to it at a number of stations, and a finished auto exits at the end of the line. Each assembly line has n stations, numbered j = 1, 2, …, n. We denote the jth station on line i (where i is 1 or 2) by S_{i,j}. The jth station on line 1 (S_{1,j}) performs the same function as the jth station on line 2 (S_{2,j}).

The stations were built at different times and with different technologies, however, so the time required at each station varies, even between stations at the same position on the two different lines. We denote the assembly time required at station S_{i,j} by a_{i,j}. As the figure below shows, a chassis enters station 1 of one of the assembly lines, and it progresses from each station to the next. There is also an entry time e_i for the chassis to enter assembly line i and an exit time x_i for the completed auto to exit assembly line i.

(Figure: a manufacturing problem, finding the fastest way through a factory.)

Normally, once a chassis enters an assembly line, it passes through that line only. The time to go from one station to the next within the same assembly line is negligible. Occasionally a special rush order comes in, and the customer wants the automobile to be manufactured as quickly as possible. For rush orders, the chassis still passes through the n stations in order, but the factory manager may switch the partially-completed auto from one assembly line to the other after any station.

The time to transfer a chassis away from assembly line i after having gone through station S_{i,j} is t_{i,j}, where i = 1, 2 and j = 1, 2, …, n-1 (since after the nth station, assembly is complete). The problem is to determine which stations to choose from line 1 and which to choose from line 2 in order to minimize the total time through the factory for one auto.

(Figure: an instance of the assembly-line problem with costs.)

Step 1: The structure of the fastest way through the factory
The fastest way through station S_{1,j} is either
- the fastest way through station S_{1,j-1} and then directly through station S_{1,j}, or
- the fastest way through station S_{2,j-1}, a transfer from line 2 to line 1, and then through station S_{1,j}.
Using symmetric reasoning, the fastest way through station S_{2,j} is either
- the fastest way through station S_{2,j-1} and then directly through station S_{2,j}, or
- the fastest way through station S_{1,j-1}, a transfer from line 1 to line 2, and then through station S_{2,j}.

Step 2: A recursive solution
Let f_i[j] denote the fastest possible time to get a chassis through station S_{i,j}. Then
f_1[j] = e_1 + a_{1,1}                                            if j = 1,
f_1[j] = min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j})  if j ≥ 2,
and symmetrically for f_2[j]. The fastest time through the whole factory is
f* = min(f_1[n] + x_1, f_2[n] + x_2).

Step 3: Computing the fastest times
Let r_i(j) be the number of references made to f_i[j] in a recursive algorithm:
r_1(n) = r_2(n) = 1,
r_1(j) = r_2(j) = r_1(j+1) + r_2(j+1).
The total number of references to all f_i[j] values is Θ(2^n). We can do much better if we compute the f_i[j] values in a different order from the recursive way. Observe that for j ≥ 2, each value of f_i[j] depends only on the values of f_1[j-1] and f_2[j-1].

The FASTEST-WAY procedure
FASTEST-WAY(a, t, e, x, n)
 1  f_1[1] ← e_1 + a_{1,1}
 2  f_2[1] ← e_2 + a_{2,1}
 3  for j ← 2 to n
 4      do if f_1[j-1] + a_{1,j} ≤ f_2[j-1] + t_{2,j-1} + a_{1,j}
 5            then f_1[j] ← f_1[j-1] + a_{1,j}
 6                 l_1[j] ← 1
 7            else f_1[j] ← f_2[j-1] + t_{2,j-1} + a_{1,j}
 8                 l_1[j] ← 2
 9         if f_2[j-1] + a_{2,j} ≤ f_1[j-1] + t_{1,j-1} + a_{2,j}

10            then f_2[j] ← f_2[j-1] + a_{2,j}
11                 l_2[j] ← 2
12            else f_2[j] ← f_1[j-1] + t_{1,j-1} + a_{2,j}
13                 l_2[j] ← 1
14  if f_1[n] + x_1 ≤ f_2[n] + x_2
15     then f* ← f_1[n] + x_1
16          l* ← 1
17     else f* ← f_2[n] + x_2
18          l* ← 2
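A Python sketch of FASTEST-WAY using 0-based lists; the data in the usage lines is the instance from the figure above:

def fastest_way(a, t, e, x, n):
    # a[i][j]: time at station j+1 of line i+1; t[i][j]: transfer time after
    # station j+1 of line i+1; e[i], x[i]: entry and exit times for line i+1.
    f = [[0] * n for _ in range(2)]
    l = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in (0, 1):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i + 1          # came from the same line
            else:
                f[i][j], l[i][j] = switch, 2 - i        # came from the other line
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 1, l
    return f[1][n - 1] + x[1], 2, l

a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
fstar, lstar, l = fastest_way(a, t, [2, 4], [3, 2], 6)
print(fstar, lstar)  # -> 38 1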

Step 4: Constructing the fastest way through the factory
PRINT-STATIONS(l, n)
1  i ← l*
2  print "line" i ", station" n
3  for j ← n downto 2
4      do i ← l_i[j]
5         print "line" i ", station" j - 1
Output for the sample instance:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1

15.2 Matrix-chain multiplication
A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.

How do we compute A_1 A_2 ⋯ A_n, where A_i is a matrix for every i?
Example (n = 4): the product A_1 A_2 A_3 A_4 can be fully parenthesized in five ways:
(A_1 (A_2 (A_3 A_4))),
(A_1 ((A_2 A_3) A_4)),
((A_1 A_2) (A_3 A_4)),
((A_1 (A_2 A_3)) A_4),
(((A_1 A_2) A_3) A_4).

MATRIX-MULTIPLY(A, B)
1  if columns[A] ≠ rows[B]
2     then error "incompatible dimensions"
3     else for i ← 1 to rows[A]
4              do for j ← 1 to columns[B]
5                     do C[i, j] ← 0
6                        for k ← 1 to columns[A]
7                            do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
8  return C

Complexity: Let A be a p × q matrix and B be a q × r matrix. Then the complexity of MATRIX-MULTIPLY(A, B) is Θ(p q r) scalar multiplications.

Example: A_1 is a 10 × 100 matrix, A_2 is a 100 × 5 matrix, and A_3 is a 5 × 50 matrix. Then ((A_1 A_2) A_3) takes 10 · 100 · 5 + 10 · 5 · 50 = 7500 scalar multiplications. However, (A_1 (A_2 A_3)) takes 100 · 5 · 50 + 10 · 100 · 50 = 75000 scalar multiplications.
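A quick Python check of this arithmetic, writing the dimension sequence as a list p so that A_i is p[i-1] × p[i]:

p = [10, 100, 5, 50]                    # A1: 10x100, A2: 100x5, A3: 5x50
print(p[0]*p[1]*p[2] + p[0]*p[2]*p[3])  # ((A1 A2) A3) -> 7500
print(p[1]*p[2]*p[3] + p[0]*p[1]*p[3])  # (A1 (A2 A3)) -> 75000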

The matrix-chain multiplication problem: Given a chain ⟨A_1, A_2, …, A_n⟩ of n matrices, where for i = 1, 2, …, n, matrix A_i has dimension p_{i-1} × p_i, fully parenthesize the product A_1 A_2 ⋯ A_n in a way that minimizes the number of scalar multiplications.

Counting the number of parenthesizations:
P(n) = 1                               if n = 1,
P(n) = Σ_{k=1}^{n-1} P(k) · P(n-k)     if n ≥ 2.
The solution is the sequence of Catalan numbers, P(n) = C(n-1), which grows as Ω(4^n / n^{3/2}), so brute-force enumeration is hopeless.

Step 1: The structure of an optimal parenthesization
If an optimal parenthesization of A_i A_{i+1} ⋯ A_j splits the product between A_k and A_{k+1}, then the parenthesizations of the subchains A_i ⋯ A_k and A_{k+1} ⋯ A_j within it must themselves be optimal (otherwise we could cut and paste a better one).

Step 2: A recursive solution
Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix A_{i‥j} = A_i A_{i+1} ⋯ A_j:
m[i, j] = 0                                                        if i = j,
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }  if i < j.
The goal is m[1, n].

Step 3: Computing the optimal costs

MATRIX-CHAIN-ORDER(p)
 1  n ← length[p] - 1
 2  for i ← 1 to n
 3      do m[i, i] ← 0
 4  for l ← 2 to n
 5      do for i ← 1 to n - l + 1
 6             do j ← i + l - 1
 7                m[i, j] ← ∞
 8                for k ← i to j - 1
 9                    do q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
10                       if q < m[i, j]
11                          then m[i, j] ← q
12                               s[i, j] ← k
13  return m and s
Complexity: O(n^3).
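A Python sketch of MATRIX-CHAIN-ORDER, assuming the dimension list p[0..n] as above (tables are 1-indexed, so row and column 0 are unused):

def matrix_chain_order(p):
    # p[0..n]: dimensions; matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):             # chain length l
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):              # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])  # -> 15125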

Example: A_1 is 30 × 35, A_2 is 35 × 15, A_3 is 15 × 5, A_4 is 5 × 10, A_5 is 10 × 20, and A_6 is 20 × 25; that is, p = ⟨30, 35, 15, 5, 10, 20, 25⟩.

(Figure: the m and s tables computed by MATRIX-CHAIN-ORDER for n = 6.)

m[2,5] = min {
    m[2,2] + m[3,5] + p_1 p_2 p_5 = 0 + 2500 + 35 · 15 · 20 = 13000,
    m[2,3] + m[4,5] + p_1 p_3 p_5 = 2625 + 1000 + 35 · 5 · 20 = 7125,
    m[2,4] + m[5,5] + p_1 p_4 p_5 = 4375 + 0 + 35 · 10 · 20 = 11375
} = 7125

Step 4: Constructing an optimal solution

MATRIX-CHAIN-MULTIPLY(A, s, i, j)
1  if j > i
2     then X ← MATRIX-CHAIN-MULTIPLY(A, s, i, s[i, j])
3          Y ← MATRIX-CHAIN-MULTIPLY(A, s, s[i, j] + 1, j)
4          return MATRIX-MULTIPLY(X, Y)
5     else return A_i
Example: for the six-matrix chain above, the optimal parenthesization is ((A_1 (A_2 A_3)) ((A_4 A_5) A_6)).
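A small Python helper that prints the optimal parenthesization from the s table; a sketch that assumes the matrix_chain_order function from the earlier sketch is in scope:

def print_optimal_parens(s, i, j):
    if i == j:
        print("A" + str(i), end="")
    else:
        print("(", end="")
        print_optimal_parens(s, i, s[i][j])        # left subchain
        print_optimal_parens(s, s[i][j] + 1, j)    # right subchain
        print(")", end="")

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print_optimal_parens(s, 1, 6)   # -> ((A1(A2A3))((A4A5)A6))
print()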

15.3 Elements of dynamic programming

Optimal substructure: We say that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
Example: the matrix-chain multiplication problem.

1. You show that a solution to the problem consists of making a choice; making this choice leaves one or more subproblems to be solved.
2. You suppose that for a given problem, you are given the choice that leads to an optimal solution.
3. Given this choice, you determine which subproblems ensue and how to best characterize the resulting space of subproblems.
4. You show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal, by using a "cut-and-paste" technique.

Optimal substructure varies across problem domains in two ways:
1. how many subproblems are used in an optimal solution to the original problem, and
2. how many choices we have in determining which subproblem(s) to use in an optimal solution.

Subtleties
One should be careful not to assume that optimal substructure applies when it does not. Consider the following two problems in which we are given a directed graph G = (V, E) and vertices u, v ∈ V.
- Unweighted shortest path: find a path from u to v consisting of the fewest edges. Good for dynamic programming.
- Unweighted longest simple path: find a simple path from u to v consisting of the most edges. Not good for dynamic programming, because its subproblems are not independent.

Overlapping subproblems
Example: MATRIX-CHAIN-ORDER.

RECURSIVE-MATRIX-CHAIN(p, i, j)
1  if i = j
2     then return 0
3  m[i, j] ← ∞
4  for k ← i to j - 1
5      do q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
6         if q < m[i, j]
7            then m[i, j] ← q
8  return m[i, j]

(Figure: the recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4).)

We can prove that T(n) = Ω(2^n) using the substitution method.

Solutions:
1. bottom-up computation,
2. memoization (memoize the natural, but inefficient, recursive algorithm).

MEMOIZED-MATRIX-CHAIN(p)
1  n ← length[p] - 1
2  for i ← 1 to n
3      do for j ← 1 to n
4             do m[i, j] ← ∞
5  return LOOKUP-CHAIN(p, 1, n)

LOOKUP-CHAIN(p, i, j)
1  if m[i, j] < ∞
2     then return m[i, j]
3  if i = j
4     then m[i, j] ← 0
5     else for k ← i to j - 1
6              do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
7                 if q < m[i, j]
8                    then m[i, j] ← q
9  return m[i, j]
Time complexity: O(n^3).
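The memoized pair translates to Python along these lines; a sketch in which the m table travels as an explicit argument:

def memoized_matrix_chain(p):
    n = len(p) - 1
    m = [[float("inf")] * (n + 1) for _ in range(n + 1)]
    return lookup_chain(m, p, 1, n)

def lookup_chain(m, p, i, j):
    if m[i][j] < float("inf"):         # already computed
        return m[i][j]
    if i == j:
        m[i][j] = 0
    else:
        for k in range(i, j):
            q = (lookup_chain(m, p, i, k) + lookup_chain(m, p, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            if q < m[i][j]:
                m[i][j] = q
    return m[i][j]

print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))  # -> 15125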

15.4 Longest common subsequence
Example: X = ⟨A, B, C, B, D, A, B⟩ and Y = ⟨B, D, C, A, B, A⟩.
⟨B, C, A⟩ is a common subsequence of both X and Y.
⟨B, C, B, A⟩ or ⟨B, D, A, B⟩ is a longest common subsequence of X and Y.

Longest-common-subsequence problem: We are given two sequences X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩ and wish to find a maximum-length common subsequence of X and Y. We define the ith prefix X_i = ⟨x_1, x_2, …, x_i⟩.

Theorem (Optimal substructure of an LCS). Let X = ⟨x_1, …, x_m⟩ and Y = ⟨y_1, …, y_n⟩ be sequences, and let Z = ⟨z_1, …, z_k⟩ be any LCS of X and Y.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If x_m ≠ y_n, then z_k ≠ x_m implies Z is an LCS of X_{m-1} and Y.
3. If x_m ≠ y_n, then z_k ≠ y_n implies Z is an LCS of X and Y_{n-1}.

A recursive solution to subproblems
Define c[i, j] as the length of an LCS of X_i and Y_j:
c[i, j] = 0                          if i = 0 or j = 0,
c[i, j] = c[i-1, j-1] + 1            if i, j > 0 and x_i = y_j,
c[i, j] = max(c[i-1, j], c[i, j-1])  if i, j > 0 and x_i ≠ y_j.

Computing the length of an LCS
LCS-LENGTH(X, Y)
1  m ← length[X]
2  n ← length[Y]
3  for i ← 1 to m
4      do c[i, 0] ← 0
5  for j ← 1 to n
6      do c[0, j] ← 0
7  for i ← 1 to m
8      do for j ← 1 to n

 9             do if x_i = y_j
10                   then c[i, j] ← c[i-1, j-1] + 1
11                        b[i, j] ← "↖"
12                   else if c[i-1, j] ≥ c[i, j-1]
13                           then c[i, j] ← c[i-1, j]
14                                b[i, j] ← "↑"
15                           else c[i, j] ← c[i, j-1]
16                                b[i, j] ← "←"
17  return c and b

Complexity: O(mn).

PRINT-LCS(b, X, i, j)
1  if i = 0 or j = 0
2     then return
3  if b[i, j] = "↖"
4     then PRINT-LCS(b, X, i-1, j-1)
5          print x_i
6     else if b[i, j] = "↑"
7             then PRINT-LCS(b, X, i-1, j)
8             else PRINT-LCS(b, X, i, j-1)
Complexity: O(m + n).
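In Python the two procedures might be sketched as follows; this variant reconstructs the answer from the c table alone, so no b table is needed:

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def print_lcs(c, X, Y, i, j):
    # Walk back through the table; O(m + n) as with the b-table version.
    if i == 0 or j == 0:
        return
    if X[i - 1] == Y[j - 1]:
        print_lcs(c, X, Y, i - 1, j - 1)
        print(X[i - 1], end="")
    elif c[i - 1][j] >= c[i][j - 1]:
        print_lcs(c, X, Y, i - 1, j)
    else:
        print_lcs(c, X, Y, i, j - 1)

X, Y = "ABCBDAB", "BDCABA"
c = lcs_length(X, Y)
print_lcs(c, X, Y, len(X), len(Y))  # -> BCBA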

15.5 Optimal binary search trees
(Figure: two binary search trees over the same keys; the first has expected search cost 2.80, the second 2.75, which is optimal.)

Expected cost
Given keys k_1 < k_2 < ⋯ < k_n with search probabilities p_1, …, p_n, and dummy keys d_0, d_1, …, d_n (for unsuccessful searches) with probabilities q_0, …, q_n, the expected cost of a search in a tree T is
E[search cost in T] = Σ_{i=1}^{n} (depth_T(k_i) + 1) · p_i + Σ_{i=0}^{n} (depth_T(d_i) + 1) · q_i
                    = 1 + Σ_{i=1}^{n} depth_T(k_i) · p_i + Σ_{i=0}^{n} depth_T(d_i) · q_i.

For a given set of probabilities, our goal is to construct a binary search tree whose expected search cost is smallest. We call such a tree an optimal binary search tree.

Step 1: The structure of an optimal binary search tree
Consider any subtree of a binary search tree. It must contain keys in a contiguous range k_i, …, k_j, for some 1 ≤ i ≤ j ≤ n. In addition, a subtree that contains keys k_i, …, k_j must also have as its leaves the dummy keys d_{i-1}, …, d_j. If an optimal binary search tree T has a subtree T' containing keys k_i, …, k_j, then this subtree T' must be optimal as well for the subproblem with keys k_i, …, k_j and dummy keys d_{i-1}, …, d_j.

Step 2: A recursive solution
Let e[i, j] denote the expected cost of searching an optimal binary search tree containing the keys k_i, …, k_j, and let w(i, j) = Σ_{l=i}^{j} p_l + Σ_{l=i-1}^{j} q_l be the total probability weight of that subproblem. Then
e[i, j] = q_{i-1}                                               if j = i - 1,
e[i, j] = min_{i ≤ r ≤ j} { e[i, r-1] + e[r+1, j] + w(i, j) }   if i ≤ j.

Step 3: Computing the expected search cost of an optimal binary search tree
OPTIMAL-BST(p, q, n)
1  for i ← 1 to n + 1
2      do e[i, i-1] ← q_{i-1}
3         w[i, i-1] ← q_{i-1}
4  for l ← 1 to n
5      do for i ← 1 to n - l + 1
6             do j ← i + l - 1
7                e[i, j] ← ∞
8                w[i, j] ← w[i, j-1] + p_j + q_j

 9               for r ← i to j
10                   do t ← e[i, r-1] + e[r+1, j] + w[i, j]
11                      if t < e[i, j]
12                         then e[i, j] ← t
13                              root[i, j] ← r
14  return e and root
The OPTIMAL-BST procedure takes Θ(n^3) time, just like MATRIX-CHAIN-ORDER.
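A Python sketch of OPTIMAL-BST; the probabilities in the usage lines are the five-key sample distribution used in the figures, for which the optimal expected cost is 2.75 (up to floating-point rounding):

def optimal_bst(p, q, n):
    # p[1..n]: key probabilities (p[0] unused); q[0..n]: dummy-key probabilities.
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]         # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):  # try every key as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, w, root

p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, w, root = optimal_bst(p, q, 5)
print(e[1][5])  # -> 2.75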

(Figure: the tables e[i, j], w[i, j], and root[i, j] computed by OPTIMAL-BST on the key distribution above.)

Knuth has shown that there are always roots of optimal subtrees such that root[i, j-1] ≤ root[i, j] ≤ root[i+1, j] for all 1 ≤ i < j ≤ n. We can use this fact to modify the OPTIMAL-BST procedure to run in Θ(n^2) time.
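A hypothetical Python variant of the optimal_bst sketch above showing that modification: the inner loop scans only the interval that Knuth's monotonicity bound allows, giving O(n^2) total work.

def optimal_bst_knuth(p, q, n):
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 2):
        e[i][i - 1] = w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # Restrict r to [root[i][j-1], root[i+1][j]] when those entries exist.
            lo = root[i][j - 1] if length > 1 else i
            hi = root[i + 1][j] if length > 1 else j
            for r in range(lo, hi + 1):
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root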