1 UMass Lowell Computer Science 91.503 Graduate Analysis of Algorithms Prof. Karen Daniels Spring, 2009 Lecture 3 Tuesday, 2/10/09 Amortized Analysis

2 Overview
- Amortize: "To pay off a debt, usually by periodic payments" [Webster's]
- Amortized Analysis: "creative accounting" for operations
  - can show the average cost of an operation is small (when averaged over a sequence of operations, not over a distribution of inputs) even though a single operation in the sequence is expensive
  - the result must hold for any sequence of these operations
  - no probability is involved (unlike average-case analysis); the guarantee holds in the worst case
  - analysis method only; no effect on how the code operates

3 Overview (continued)
- 3 ways to determine the amortized cost of an operation that is part of a sequence of operations:
- Aggregate Method
  - find an upper bound T(n) on the total cost of a sequence of n operations
  - amortized cost = average cost per operation = T(n)/n
  - same for all operations in the sequence
- Accounting Method
  - amortized cost can differ across operations
  - overcharge some operations early in the sequence
  - store the overcharge as "prepaid credit" on specific data structure objects
- Potential Method
  - amortized cost can differ across operations (as in the accounting method)
  - overcharge some operations early in the sequence (as in the accounting method)
  - store the overcharge as "potential energy" of the data structure as a whole (unlike the accounting method)

4 Aggregate Method: Stack Operations
- Aggregate Method
  - find an upper bound T(n) on the total cost of a sequence of n operations
  - amortized cost = average cost per operation = T(n)/n
  - same for all operations in the sequence
- Traditional Stack Operations
  - PUSH(S,x) pushes object x onto stack S
  - POP(S) pops the top of stack S and returns the popped object
  - both run in O(1) time: consider the cost as 1 for our discussion
  - STACK-EMPTY(S) runs in Θ(1) time
  - total actual cost of a sequence of n PUSH/POP operations = n

5 Aggregate Method: Stack Operations (continued)
- New Stack Operation
  - MULTIPOP(S,k) pops the top k elements off stack S
  - pops the entire stack if it has fewer than k items
- MULTIPOP actual cost for a stack containing s items:
  - using cost = 1 for each POP, cost = min(s,k)
  - worst-case cost is in O(s), which is in O(n)

MULTIPOP(S,k)
1  while not STACK-EMPTY(S) and k ≠ 0
2      do POP(S)
3         k ← k − 1

6 Aggregate Method: Stack Operations (continued)
- Sequence of n PUSH, POP, MULTIPOP operations on an initially empty stack
- MULTIPOP worst case is O(n), giving a loose O(n²) bound for the sequence
- The aggregate method yields a tighter upper bound:
  - a sequence of n operations has O(n) worst-case total cost
  - each item can be popped at most once for each time it is pushed
  - # POP calls (including those inside MULTIPOP) ≤ # PUSH calls ≤ n
- Average cost of an operation = O(n)/n = O(1)
  - = amortized cost of each operation
  - holds for PUSH, POP and MULTIPOP
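The aggregate argument above can be checked directly. The sketch below (a hypothetical `CountingStack` helper, not from the slides) instruments a stack to count actual cost, 1 per elementary push or pop, and confirms on a sample sequence that n operations cost at most 2n in total, since every POP, including those inside MULTIPOP, is matched by an earlier PUSH:

```python
class CountingStack:
    """Stack instrumented to count actual cost: 1 per PUSH and
    1 per elementary POP (including POPs performed inside MULTIPOP)."""
    def __init__(self):
        self.items = []
        self.cost = 0

    def push(self, x):
        self.items.append(x)
        self.cost += 1

    def pop(self):
        self.cost += 1
        return self.items.pop()

    def multipop(self, k):
        # Pops min(k, s) items; actual cost is min(k, s).
        while self.items and k > 0:
            self.pop()
            k -= 1

def total_cost(ops):
    """Run a sequence of ("PUSH", x) / ("POP",) / ("MULTIPOP", k) operations."""
    s = CountingStack()
    for op in ops:
        if op[0] == "PUSH":
            s.push(op[1])
        elif op[0] == "POP":
            s.pop()
        else:
            s.multipop(op[1])
    return s.cost

ops = ([("PUSH", i) for i in range(100)]
       + [("MULTIPOP", 60), ("POP",), ("PUSH", -1), ("MULTIPOP", 1000)])
assert total_cost(ops) <= 2 * len(ops)   # aggregate bound: T(n) <= 2n
```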

7 Accounting Method
- Accounting Method
  - amortized cost can differ across operations
  - overcharge some operations early in the sequence
  - store the overcharge as "prepaid credit" on specific data structure objects
- Let c_i be the actual cost of the ith operation
- Let ĉ_i be the amortized cost of the ith operation (what we charge)
- Total amortized cost of a sequence of operations must be an upper bound on the total actual cost of the sequence: Σ_{i=1..n} ĉ_i ≥ Σ_{i=1..n} c_i
- Total credit in the data structure = Σ_{i=1..n} ĉ_i − Σ_{i=1..n} c_i
  - must be nonnegative for all n

8 Accounting Method: Stack Operations

Operation   Actual Cost   Assigned Amortized Cost
PUSH        1             2
POP         1             0
MULTIPOP    min(k,s)      0

- Paying for a sequence using amortized cost:
  - start with an empty stack
  - a PUSH of an item always precedes its POP or MULTIPOP
  - pay 1 for the PUSH and store 1 unit of credit on the pushed item
  - the credit on each item pays for the actual POP or MULTIPOP cost of that item
  - credit never "goes negative"
  - total amortized cost of any sequence of n operations is in O(n)
  - amortized cost is an upper bound on total actual cost
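The credit argument can be simulated as a sanity check. The sketch below (illustrative helper, not from the slides) replays a sequence of operations, charges the amortized costs from the table above, tracks the credit stored in the structure, and asserts that it never goes negative:

```python
AMORTIZED = {"PUSH": 2, "POP": 0, "MULTIPOP": 0}  # assigned amortized costs

def replay(ops):
    """Return (total_actual, total_amortized), asserting credit >= 0 throughout."""
    stack, credit, actual_total, amortized_total = [], 0, 0, 0
    for op in ops:
        if op[0] == "PUSH":
            stack.append(op[1])
            actual = 1
        elif op[0] == "POP":
            stack.pop()
            actual = 1
        else:                       # MULTIPOP(k): actual cost is min(k, s)
            k = min(op[1], len(stack))
            del stack[len(stack) - k:]
            actual = k
        credit += AMORTIZED[op[0]] - actual
        assert credit >= 0          # prepaid credit never goes negative
        actual_total += actual
        amortized_total += AMORTIZED[op[0]]
    return actual_total, amortized_total

ops = ([("PUSH", i) for i in range(10)]
       + [("MULTIPOP", 4), ("POP",), ("PUSH", 99), ("MULTIPOP", 100)])
actual, amortized = replay(ops)
assert amortized >= actual          # total amortized cost bounds total actual cost
```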

9 Potential Method
- Potential Method
  - amortized cost can differ across operations (as in the accounting method)
  - overcharge some operations early in the sequence (as in the accounting method)
  - store the overcharge as "potential energy" of the data structure as a whole (unlike the accounting method)
- Let c_i be the actual cost of the ith operation
- Let D_i be the data structure after applying the ith operation
- Let Φ(D_i) be the potential associated with D_i
- Amortized cost of the ith operation: ĉ_i = c_i + Φ(D_i) − Φ(D_{i−1})
- Total amortized cost of n operations (the Φ terms telescope): Σ_{i=1..n} ĉ_i = Σ_{i=1..n} c_i + Φ(D_n) − Φ(D_0)
- Require Φ(D_i) ≥ Φ(D_0) so that the total amortized cost is an upper bound on the total actual cost
- Since n might not be known in advance, guarantee "payment in advance" by requiring Φ(D_i) ≥ Φ(D_0) for every i

10 Potential Method: Stack Operations
- Potential function: Φ(S) = number of items in the stack
  - Φ = 0 for the initial empty stack, so the potential after the ith operation is guaranteed nonnegative
- Amortized operation costs (assuming the stack holds s items):
  - PUSH: potential difference = (s + 1) − s = 1; amortized cost = 1 + 1 = 2
  - MULTIPOP(S,k) pops k' = min(k,s) items off the stack: potential difference = −k'; amortized cost = k' − k' = 0
  - POP amortized cost also = 1 − 1 = 0
- Amortized cost O(1) per operation, so the total amortized cost of a sequence of n operations is in O(n)
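The per-operation claims above are easy to verify mechanically. In this sketch (a hypothetical helper, not from the slides), the potential is simply the stack size, and the amortized cost of each operation is its actual cost plus the change in potential:

```python
def amortized_costs(ops):
    """Potential Φ(D) = number of items on the stack.
    Return the amortized cost c_i + Φ(D_i) − Φ(D_{i−1}) of each operation."""
    size, costs = 0, []
    for op in ops:
        prev = size
        if op[0] == "PUSH":
            actual, size = 1, size + 1
        elif op[0] == "POP":
            actual, size = 1, size - 1
        else:                        # MULTIPOP(k) pops k' = min(k, size) items
            kp = min(op[1], size)
            actual, size = kp, size - kp
        costs.append(actual + size - prev)
    return costs

ops = [("PUSH",), ("PUSH",), ("PUSH",), ("POP",), ("MULTIPOP", 5)]
assert amortized_costs(ops) == [2, 2, 2, 0, 0]   # PUSH pays 2; POP/MULTIPOP pay 0
```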

11 Dynamic Tables: Overview
- Dynamic Table T: an array of slots
  - ignore implementation choices: stack, heap, hash table...
  - if too full, increase the size and copy the entries to a new table T'
  - if too empty, decrease the size and copy the entries to a new table T'
- Analyze dynamic table insert and delete:
  - actual expansion or contraction cost is large
  - show the amortized cost of insert or delete is in O(1)
- Load factor α(T) = num[T]/size[T]
  - empty table: α(T) = 1 (by convention)
  - full table: α(T) = 1

12 Dynamic Tables: Table (Expansion Only)
- Load factor bounds (double the size only when T is already full. WHY?): 1/2 ≤ α(T) ≤ 1
- Sequence of n "elementary" inserts on an initially empty table:
  - worst-case cost of a single insert is in O(n)
  - worst-case cost of the sequence of n inserts is in O(n²) (a LOOSE bound)

13 Dynamic Tables: Table Expansion (cont)
Amortized analysis counts only elementary insertions.
- Aggregate Method:
  - c_i = i if i − 1 is an exact power of 2, 1 otherwise
  - total cost of n inserts ≤ n + Σ_{j=0}^{⌊lg n⌋} 2^j < 3n
- Accounting Method:
  - charge an amortized cost of 3 for each element inserted
  - intuition for 3: each item pays for 3 elementary insertions:
    - inserting itself into the current table
    - moving itself during an expansion
    - moving another item that has already been moved, during an expansion
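Both counts are easy to reproduce. The sketch below (a hypothetical function, not from the slides) simulates n inserts into a doubling table, counting elementary insertions only, and checks the aggregate bound of 3n:

```python
def doubling_insert_cost(n):
    """Count elementary insertions for n inserts into a table
    that doubles its size whenever it is full."""
    size, num, cost = 0, 0, 0
    for _ in range(n):
        if num == size:              # table full: expand
            cost += num              # copy every existing item over
            size = max(1, 2 * size)
        num += 1
        cost += 1                    # insert the new item itself
    return cost

for n in (1, 2, 3, 8, 9, 100, 1000):
    assert doubling_insert_cost(n) < 3 * n   # aggregate bound: total < 3n
```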

14 Dynamic Tables: Table Expansion (cont)
- Potential Method: Φ(T) = 2·num[T] − size[T]
  - Φ = 0 right after an expansion (when num = size/2), then becomes 2 once the triggering insert completes
  - builds to Φ = num[T] = size[T] by the time the table is full
  - always nonnegative, so the sum of the amortized costs of n inserts is an upper bound on the sum of the actual costs
- Amortized cost of the ith insert, where num_i, size_i, Φ_i are num, size, Φ after the ith operation:
  - Case 1: insert does not cause an expansion:
    ĉ_i = 1 + Φ_i − Φ_{i−1} = 1 + 2 = 3
  - Case 2: insert causes an expansion (size doubles; num_{i−1} = num_i − 1 items are copied):
    ĉ_i = num_i + Φ_i − Φ_{i−1} = num_i + 2 − (num_i − 1) = 3
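The case analysis can be checked per operation. The sketch below (illustrative, not from the slides) recomputes Φ = 2·num − size before and after every insert and asserts that the amortized cost never exceeds 3:

```python
def check_insert_amortized(n):
    """Assert c_i + Φ_i − Φ_{i−1} <= 3 for each of n inserts,
    where Φ(T) = 2·num[T] − size[T]. Returns the worst amortized cost seen."""
    size, num, worst = 0, 0, 0
    phi = lambda: 2 * num - size
    for _ in range(n):
        prev_phi, cost = phi(), 0
        if num == size:              # expansion: copy the existing num items
            cost += num
            size = max(1, 2 * size)
        num += 1
        cost += 1                    # the elementary insertion itself
        amortized = cost + phi() - prev_phi
        assert phi() >= 0            # potential stays nonnegative
        assert amortized <= 3        # matches Cases 1 and 2 above
        worst = max(worst, amortized)
    return worst

assert check_insert_amortized(1000) == 3
```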

15 Dynamic Tables: Table Expansion & Contraction
Amortized analysis counts elementary insertions and deletions.
- Load factor bounds (double the size when T is full; halve the size when T is 1/4 full): 1/4 ≤ α(T) ≤ 1
- DELETE pseudocode is analogous to INSERT
- Potential Method: value of the potential function Φ(T):
    Φ(T) = 2·num[T] − size[T]   if α(T) ≥ 1/2
    Φ(T) = size[T]/2 − num[T]   if α(T) < 1/2
  - Φ = 0 for an empty table
  - Φ = 0 right after an expansion or contraction
  - Φ builds as α(T) increases to 1 or decreases to 1/4
  - always nonnegative, so the sum of the amortized costs of n operations is an upper bound on the sum of the actual costs
  - check values: Φ = 0 when α(T) = 1/2; Φ = num[T] when α(T) = 1; Φ = num[T] when α(T) = 1/4

16 Dynamic Tables: Table Expansion & Contraction (Amortized Analysis: Potential Method) [figure slide]

17 Dynamic Tables: Table Expansion & Contraction (cont)
- Potential Method: analyze the cost of a sequence of n inserts and/or deletes
- Amortized cost of the ith operation, Case 1: INSERT (the bound holds whether or not the table expands)
  - Case 1a: α_{i−1} ≥ 1/2. By the previous insert analysis, ĉ_i = 3
  - Case 1b: α_{i−1} < 1/2 and α_i < 1/2 (no expansion): ĉ_i = 1 + (size_i/2 − num_i) − (size_{i−1}/2 − num_{i−1}) = 0
  - Case 1c: α_{i−1} < 1/2 and α_i ≥ 1/2 (no expansion; Φ switches form): ĉ_i ≤ 3

18 Dynamic Tables: Table Expansion & Contraction (cont)
- Amortized cost of the ith operation (continued), Case 2: DELETE
  - Case 2a: α_{i−1} ≥ 1/2: ĉ_i = 1 + Φ_i − Φ_{i−1} ≤ 2
  - Case 2b: α_{i−1} < 1/2, no contraction: ĉ_i = 1 + (size_i/2 − num_i) − (size_{i−1}/2 − num_{i−1}) = 2
  - Case 2c: α_{i−1} < 1/2, contraction occurs (α_{i−1} = 1/4, so the size halves): ĉ_i = (num_i + 1) + Φ_i − Φ_{i−1} = 1
- Conclusion: the amortized cost of each operation is bounded above by a constant, so the time for a sequence of n operations is O(n).
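To see the constant bound concretely, the sketch below (a hypothetical `Table` class, not from the slides) simulates a table that doubles when full and halves when a deletion drops the load factor below 1/4, then checks that the total elementary cost of a mixed sequence stays within 3 times the number of operations:

```python
class Table:
    """Dynamic table sketch: double when full; halve when a deletion drops
    the load factor below 1/4. Counts elementary insertions, deletions,
    and item copies as the actual cost."""
    def __init__(self):
        self.size, self.num, self.cost = 0, 0, 0

    def insert(self):
        if self.num == self.size:                 # expansion
            self.cost += self.num                 # copy all existing items
            self.size = max(1, 2 * self.size)
        self.num += 1
        self.cost += 1

    def delete(self):
        self.num -= 1
        self.cost += 1
        if self.size > 1 and self.num < self.size // 4:   # contraction
            self.cost += self.num                 # copy the remaining items
            self.size //= 2

t, ops = Table(), 0
for _ in range(64):
    t.insert(); ops += 1
for _ in range(60):
    t.delete(); ops += 1
for _ in range(10):
    t.insert(); ops += 1
assert t.cost <= 3 * ops    # amortized cost per operation is at most a small constant
```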

19 Example: Dynamic Closest Pair
source: "Fast hierarchical clustering and other applications of dynamic closest pairs," David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.

20 Example: Dynamic Closest Pair (continued)
source: "Fast hierarchical clustering and other applications of dynamic closest pairs," David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.

21 Example: Dynamic Closest Pair (continued): Rules
source: "Fast hierarchical clustering and other applications of dynamic closest pairs," David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.
- Partition the dynamic set S into subsets.
- Each subset S_i has an associated digraph G_i consisting of a set of disjoint, directed paths.
- The total number of edges in all graphs remains linear: combine and rebuild if the number of edges reaches 2n.
- The closest pair is always an edge in some G_i.
- Initially all points are in a single set S_1.
- Operations:
  - Create G_i for a subset S_i.
  - Insert a point.
  - Delete a point.
  - Merge subsets until at most log n subsets remain. We use log base 2.

22 Example: Dynamic Closest Pair (continued): Rules: Operations
source: "Fast hierarchical clustering and other applications of dynamic closest pairs," David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.
- Create G_i for a subset S_i:
  - Select a starting point (we choose the leftmost point, or the higher one in case of a tie).
  - Iteratively extend the path P, selecting the next vertex as:
    - Case 1: the nearest neighbor in S \ P, if the last point on the path belongs to S_i
    - Case 2: the nearest neighbor in S_i \ P, if the last point on the path belongs to S \ S_i
- Insert a point x:
  - Create a new subset S_{k+1} = {x}.
  - Merge subsets if necessary.
  - Create G_i for the new or merged subsets.
- Delete a point x:
  - Create a new subset S_{k+1} = all points y such that (y,x) is a directed edge in some G_i. (We also remove each such y from its old subset.)
  - Remove x and its adjacent edges from all G_i.
  - Merge subsets if necessary.
  - Create G_i for the new or merged subsets.
- Merge subsets: choose subsets S_i and S_j to minimize the size ratio |S_j|/|S_i|.
See handout for example.
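For intuition about the path-building rule, here is a sketch of the single-subset case (all points in S_1, so both cases reduce to "nearest neighbor not yet on the path"). The function names and the point set are hypothetical, not from the paper; the check at the end illustrates the property the structure relies on, namely that the closest pair appears as an edge of the path:

```python
import math

def build_path(points):
    """Greedy construction of the directed path G_1 for a single subset:
    start at the leftmost point (higher one on a tie), then repeatedly
    extend to the nearest neighbor not yet on the path."""
    start = min(points, key=lambda p: (p[0], -p[1]))
    path, remaining = [start], set(points) - {start}
    while remaining:
        nxt = min(remaining, key=lambda q: math.dist(path[-1], q))
        path.append(nxt)
        remaining.remove(nxt)
    return path

def min_edge(path):
    """Length of the shortest edge on the path."""
    return min(math.dist(a, b) for a, b in zip(path, path[1:]))

# Hypothetical sample points (distinct pairwise distances).
pts = [(0, 0), (1, 0), (4, 3), (7, 1), (3, 8), (9, 6)]
brute = min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
assert min_edge(build_path(pts)) == brute   # closest pair is a path edge
```

The assertion holds because whichever endpoint of the closest pair appears first on the path has its outgoing edge go to some remaining point at distance no greater than the closest-pair distance.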

23 Example: Dynamic Closest Pair (continued): Potential Function
source: "Fast hierarchical clustering and other applications of dynamic closest pairs," David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.
- Potential for a subset S_i: Φ_i = n|S_i| log|S_i|.
- Total potential Φ = n² log n − Σ_i Φ_i.
- The paper proves this Theorem: The data structure maintains the closest pair in S in O(n) space, amortized time O(n log n) per insertion, and amortized time O(n log² n) per deletion.

