Chapter 8 Dynamic Programming Copyright © 2007 Pearson Addison-Wesley. All rights reserved.

8-1 Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 8

Dynamic Programming

Dynamic Programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subinstances.

Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS. "Programming" here means "planning".

Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract the solution to the initial instance from that table

8-2 Example: Fibonacci numbers

Recall the definition of the Fibonacci numbers:
F(n) = F(n-1) + F(n-2)
F(0) = 0, F(1) = 1

Computing the nth Fibonacci number recursively (top-down) expands the recurrence into a tree of calls:
F(n)
F(n-1) + F(n-2)
F(n-2) + F(n-3)   F(n-3) + F(n-4)
...

8-3 Example: Fibonacci numbers (cont.)

Computing the nth Fibonacci number using bottom-up iteration and recording results:
F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
...
F(n-2) =
F(n-1) =
F(n) = F(n-1) + F(n-2)

Efficiency:
- time: Θ(n)
- space: Θ(n)

What if we solve it recursively?
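The bottom-up scheme above can be sketched in Python (an illustrative addition, not part of the original slides; the function name is ours):

```python
def fib(n):
    """Bottom-up Fibonacci: fill the table F(0..n) once, Theta(n) time and space."""
    if n < 2:
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]   # each value computed once from the two recorded ones
    return F[n]
```

Compare this with the naive recursive version, which recomputes the same subinstances exponentially many times.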

8-4 Examples of DP algorithms

- Computing a binomial coefficient
- Longest common subsequence
- Warshall's algorithm for transitive closure
- Floyd's algorithm for all-pairs shortest paths
- Constructing an optimal binary search tree
- Some instances of difficult discrete optimization problems:
  - traveling salesman
  - knapsack

8-5 Computing a binomial coefficient by DP

Binomial coefficients are the coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + ... + C(n,k) a^(n-k) b^k + ... + C(n,n) a^0 b^n

Recurrence:
C(n,k) = C(n-1,k) + C(n-1,k-1)   for n > k > 0
C(n,0) = 1, C(n,n) = 1           for n ≥ 0

The value of C(n,k) can be computed by filling a table row by row:

             k-1          k
n-1    C(n-1,k-1)    C(n-1,k)
n                    C(n,k)

8-6 Computing C(n,k): pseudocode and analysis

Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
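A minimal sketch of the table-filling computation (the slide's pseudocode did not survive extraction; this Python rendering and its name are ours):

```python
def binomial(n, k):
    """Compute C(n,k) by filling the table row by row; Theta(nk) time and space."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                              # base cases C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j] + C[i - 1][j - 1]  # the recurrence
    return C[n][k]
```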

8-7 Knapsack Problem by DP

Given n items of
- integer weights: w1 w2 ... wn
- values: v1 v2 ... vn
and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W). Let V[i,j] be the optimal value of such an instance. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}   if j - wi ≥ 0
V[i,j] = V[i-1,j]                           if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0

8-8 Knapsack Problem by DP (example)

Example: knapsack of capacity W = 5

item   weight   value
  1      2      $12
  2      1      $10
  3      3      $20
  4      2      $15

The table V[i,j] is filled row by row for capacities j = 0,...,5 (shown in the original figure). What is V[4,5]?

Backtracing finds the actual optimal subset, i.e. the solution.

8-9 Knapsack Problem by DP (pseudocode)

Algorithm DPKnapsack(w[1..n], v[1..n], W)
    var V[0..n, 0..W], P[1..n, 1..W]: int
    for j := 0 to W do
        V[0,j] := 0
    for i := 0 to n do
        V[i,0] := 0
    for i := 1 to n do
        for j := 1 to W do
            if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1, j]
                then V[i,j] := v[i] + V[i-1, j-w[i]]; P[i,j] := j - w[i]
                else V[i,j] := V[i-1, j]; P[i,j] := j
    return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
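An executable sketch of the same algorithm with backtracing (a Python rendering of the pseudocode above; the name and 0-based indexing are ours):

```python
def dp_knapsack(w, v, W):
    """0-1 knapsack by DP. w, v are 0-based weight/value lists; W is the capacity.
    Returns (optimal value, 0-based indices of a best subset)."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]                        # item i not taken
            if w[i - 1] <= j:
                take = v[i - 1] + V[i - 1][j - w[i - 1]]
                if take > V[i][j]:
                    V[i][j] = take                       # item i taken
    # Backtrace through the table to recover the optimal subset.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:                       # item i was taken
            items.append(i - 1)
            j -= w[i - 1]
    return V[n][W], sorted(items)
```

On the instance from the example slide (weights 2, 1, 3, 2; values $12, $10, $20, $15; W = 5) it yields value $37 using items 1, 2, and 4.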

8-10 Longest Common Subsequence (LCS)

- A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of "president": pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.
- The longest common subsequence problem is to find a maximum-length common subsequence between two sequences.

8-11 LCS

For instance:
Sequence 1: president
Sequence 2: providence
Their LCS is priden.

8-12 LCS

Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of their LCSs is algm.

8-13 How to compute LCS?

- Let A = a1 a2 ... am and B = b1 b2 ... bn.
- len(i, j): the length of an LCS between a1 a2 ... ai and b1 b2 ... bj.
- With proper initializations, len(i, j) can be computed as follows:

len(i, j) = 0                                  if i = 0 or j = 0
len(i, j) = len(i-1, j-1) + 1                  if i, j > 0 and ai = bj
len(i, j) = max {len(i-1, j), len(i, j-1)}     if i, j > 0 and ai ≠ bj


8-15 Running time and memory: O(mn) and O(mn).
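A sketch of the table-filling algorithm plus backtracing (the slides' figures for both did not survive; this Python version and its name are ours). Note that an LCS need not be unique, so the backtrace returns one valid LCS:

```python
def lcs(a, b):
    """Longest common subsequence by DP: L[i][j] is the LCS length of
    a[:i] and b[:j]; O(mn) time and memory, then a backtrace for one LCS."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Backtrace from L[m][n] to recover one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```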

8-16 The backtracing algorithm


8-18 Warshall's Algorithm: Transitive Closure

- Computes the transitive closure of a relation
- Alternatively: existence of all nontrivial paths in a digraph
- Example of transitive closure (shown in the original figure)

8-19 Warshall's Algorithm

Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), ..., R(k), ..., R(n), where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).

(The matrices R(0) through R(4) for an example digraph are shown in the original figure.)

8-20 Warshall's Algorithm (recurrence)

On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1,...,k allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                     (path using just 1,...,k-1)
            or
            R(k-1)[i,k] and R(k-1)[k,j]     (path from i to k and from k to j, each using just 1,...,k-1)

Initial condition?

8-21 Warshall's Algorithm (matrix generation)

The recurrence relating elements of R(k) to elements of R(k-1) is:
R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

It implies the following rules for generating R(k) from R(k-1):
Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1), it is changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).

8-22 Warshall's Algorithm (example)

(The matrices R(0) through R(4) are shown in the original figure.)

8-23 Warshall's Algorithm (pseudocode and analysis)

Time efficiency: Θ(n³)
Space efficiency: matrices can be written over their predecessors (with some care), so it is Θ(n²).
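A minimal sketch of the algorithm (the slide's pseudocode figure is lost; this Python version and its name are ours). It keeps a single matrix and overwrites it in place, matching the Θ(n²) space note above:

```python
def warshall(adj):
    """Transitive closure by Warshall's algorithm.
    adj is an n-by-n 0/1 adjacency matrix; a copy is updated in place,
    so only one Theta(n^2) matrix is kept."""
    n = len(adj)
    R = [row[:] for row in adj]          # R(0) = adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
                if R[i][k] and R[k][j]:
                    R[i][j] = 1
    return R
```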

8-24 Floyd's Algorithm: All-pairs shortest paths

Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.

Same idea: construct the solution through a series of matrices D(0), ..., D(n) using increasing subsets of the vertices allowed as intermediate.

Example: (the weight matrix, with ∞ for absent edges, is shown in the original figure)

8-25 Floyd's Algorithm (matrix generation)

On the k-th iteration, the algorithm determines shortest paths between every pair of vertices i, j that use only vertices among 1,...,k as intermediate:

D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}

Initial condition?

8-26 Floyd's Algorithm (example)

(The distance matrices D(0) through D(4) are shown in the original figure.)

8-27 Floyd's Algorithm (pseudocode and analysis)

Time efficiency: Θ(n³)
Space efficiency: matrices can be written over their predecessors.

Note: works on graphs with negative edges but without negative cycles.

The shortest paths themselves can be found, too. How? If D[i,k] + D[k,j] < D[i,j], then P[i,j] ← k. Overwriting a single matrix is safe here, since the superscripts k and k-1 make no difference to D[i,k] and D[k,j].
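A minimal sketch of the distance computation (the slide's pseudocode figure is lost; this Python version and its name are ours). It uses a single in-place matrix, which the note above justifies:

```python
INF = float("inf")

def floyd(W):
    """All-pairs shortest distances by Floyd's algorithm.
    W is an n-by-n weight matrix: W[i][j] is the edge weight, INF if no edge,
    0 on the diagonal. A copy is overwritten in place (Theta(n^2) space)."""
    n = len(W)
    D = [row[:] for row in W]            # D(0) = weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # D(k)[i,j] = min(D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j])
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```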

8-28 Optimal Binary Search Trees

Problem: Given n keys a1 < ... < an and probabilities p1, ..., pn of searching for them, find a BST with a minimum average number of comparisons in successful search.

Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.

Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively? Placing C at the root, B and D on the second level, and A on the third gives

Average # of comparisons = 1*0.4 + 2*(0.2 + 0.3) + 3*0.1 = 1.7

8-29 DP for Optimal BST Problem

Let C[i,j] be the minimum average number of comparisons made in T[i,j], an optimal BST for keys ai < ... < aj, where 1 ≤ i ≤ j ≤ n. Consider an optimal BST among all BSTs with some ak (i ≤ k ≤ j) as their root; T[i,j] is the best among them:

C[i,j] = min over i ≤ k ≤ j of { pk·1
         + Σ_{s=i..k-1} ps·(level of as in T[i,k-1] + 1)
         + Σ_{s=k+1..j} ps·(level of as in T[k+1,j] + 1) }

8-30 DP for Optimal BST Problem (cont.)

After simplifications, we obtain the recurrence for C[i,j]:

C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ_{s=i..j} ps   for 1 ≤ i ≤ j ≤ n
C[i,i] = pi   for 1 ≤ i ≤ n

Example:
key:         A    B    C    D
probability: 0.1  0.2  0.4  0.3

The tables below are filled diagonal by diagonal: the left one is filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ_{s=i..j} ps,  C[i,i] = pi;
the right one, for the trees' roots, records the k values giving the minima.

(The filled cost and root tables, and the resulting optimal BST, are shown in the original figure.)

8-32 Optimal Binary Search Trees
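A sketch of the diagonal-by-diagonal computation (the slide's pseudocode figure is lost; this Python version, its name, and its 0-based indexing are ours):

```python
def optimal_bst(p):
    """Optimal BST cost by DP, filled diagonal by diagonal (successful searches only).
    p[i] is the search probability of the i-th smallest key (0-based).
    Returns (minimum average comparisons, root table R), where R[i][j] is the
    index of the root chosen for the subproblem on keys i..j."""
    n = len(p)
    C = [[0.0] * n for _ in range(n)]
    R = [[0] * n for _ in range(n)]
    for i in range(n):
        C[i][i] = p[i]                    # C[i,i] = p_i
        R[i][i] = i
    for d in range(1, n):                 # d = j - i, the diagonal index
        for i in range(n - d):
            j = i + d
            best, best_k = float("inf"), i
            for k in range(i, j + 1):     # try each key a_k as the root
                left = C[i][k - 1] if k > i else 0.0
                right = C[k + 1][j] if k < j else 0.0
                if left + right < best:
                    best, best_k = left + right, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C[0][n - 1], R
```

On the running example (probabilities 0.1, 0.2, 0.4, 0.3) this reproduces the average of 1.7 comparisons with key C (index 2) at the root.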

8-33 Analysis of DP for Optimal BST Problem

Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking advantage of the monotonicity of entries in the root table, i.e., R[i,j] always lies in the range between R[i,j-1] and R[i+1,j].

Space efficiency: Θ(n²)

The method can be expanded to include unsuccessful searches.