Online Topological Ordering
Siddhartha Sen, COS 518, 11/20/2007

Outline
- Problem statement and motivation
- Prior work (summary)
- Result by Ajwani et al.: algorithm, correctness, running time, implementation
- Comparison to prior work: incremental complexity analysis, practical implications
- Open problems
- Breaking news

Problem statement
- Offline or static version (STO): given a DAG G = (V,E) (with n = |V| and m = |E|), find a linear ordering T of its nodes such that for all directed paths from x ∈ V to y ∈ V (x ≠ y), T(x) < T(y), where T: V → [1..n] is a bijective mapping
- Online version (DTO): the edges of G are not known beforehand, but are revealed one by one; each time an edge is added to the graph, T must be updated
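The defining property can be pinned down in a few lines of Python (the function name and example data are illustrative, not from the talk): a bijection T is a valid topological order exactly when every edge points forward under T.

```python
def is_topological(edges, T):
    """T maps each node to its position; T is a valid topological order
    iff T[x] < T[y] for every edge x -> y. Checking single edges suffices,
    since the condition then holds along every directed path."""
    return all(T[x] < T[y] for x, y in edges)

# In the online (DTO) setting, T must be repaired exactly when an
# inserted edge (u, v) arrives with T[u] > T[v].
edges = [("a", "b"), ("b", "c")]
T = {"a": 1, "b": 2, "c": 3}
assert is_topological(edges, T)
assert not is_topological(edges + [("c", "a")], T)
```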

Problem statement
(Figure: a DAG on nodes a, b, c, d, u, v laid out in topological order; inserting the edge (u,v) invalidates the topological order, and only the nodes in the affected region need to change.)

Motivation
- Traditional applications: online cycle detection in pointer analysis; incremental evaluation of computational circuits; semantic checking by structure-based editors; maintaining dependences between modules during compilation
- Other applications: scheduling jobs in grid computing systems, where dependences arise between the subtasks of a job

Prior work (summary)
- Offline problem: O(n + m) per edge (recompute from scratch), so O(m(n + m)) for m edges
- Alpern et al. (AHRSZ, '90): O(||δ|| log ||δ||) per edge
- Marchetti-Spaccamela et al. (MNR, '96): O(n) per edge (amortized), O(nm) for m edges
- Pearce and Kelly (PK, '04): O(||δ_uv|| log ||δ_uv||) per edge
- Katriel and Bodlaender (KB, '05): O(min{m^1/2 log n, m^1/2 + (n^2 log n)/m}) per edge (amortized), O(min{m^3/2 log n, m^3/2 + n^2 log n}) for m edges
The δ-based bounds come from incremental complexity analysis (discussed later).

Ajwani et al. (AFM)
Contributions:
- Solves DTO in O(n^2.75) time, regardless of the number of edges m inserted
- Uses a generic bucket data structure with efficient support for insert, delete, and collect-all
- Analysis based on a tunable parameter t = the maximum number of nodes in each bucket
Shortcomings:
- Poor discussion of motivating applications
- No insight into how the algorithm works or achieves its running time
- No intuitive comparison with prior algorithms (AHRSZ, MNR, etc.)

Notation
- d(u,v) denotes T(u) – T(v)
- u < v is shorthand for T(u) < T(v)
- u → v denotes an edge from u to v
- u ⇝ v means v is reachable from u

Algorithm AFM
(The pseudocode for INSERT(u,v) and REORDER(u,v) appeared here as a figure.)
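Since the slide's pseudocode was an image, here is a much-simplified Python sketch in the spirit of AFM's INSERT/REORDER. Plain sorted scans stand in for the bucket structure, so this does not achieve the O(n^2.75) bound; the cycle check, the sets A and B, the base-case swap, and the recursion on crossing pairs follow the slides' description. All names are illustrative.

```python
from collections import defaultdict

class CycleError(Exception):
    """Raised when an inserted edge closes a cycle (line 1 of REORDER)."""

def insert_edge(u, v, succ, pred, T):
    """INSERT(u, v): record the edge; repair the order if it points backwards."""
    succ[u].add(v)
    pred[v].add(u)
    if T[v] < T[u]:
        reorder(u, v, succ, pred, T)

def reorder(u, v, succ, pred, T):
    """Make u precede v, given T[v] < T[u]."""
    if u == v or u in succ[v]:
        raise CycleError((u, v))
    # A: successors of v ordered before u; B: predecessors of u ordered after v.
    A = sorted((w for w in succ[v] if T[w] < T[u]), key=lambda w: T[w])
    B = sorted((w for w in pred[u] if T[w] > T[v]), key=lambda w: T[w])
    if not A and not B:
        T[u], T[v] = T[v], T[u]   # base case: swap the two positions
        return
    # Recurse on crossing pairs; the last call is REORDER(u, v) again,
    # which then finds A = B = empty and swaps (cf. Lemma 4).
    for vp in A + [v]:
        for up in [w for w in B if T[w] > T[vp]] + [u]:
            if T[vp] < T[up]:
                reorder(up, vp, succ, pred, T)

succ, pred = defaultdict(set), defaultdict(set)
T = {"a": 0, "b": 1, "c": 2}
insert_edge("a", "b", succ, pred, T)
insert_edge("c", "a", succ, pred, T)   # backwards edge: triggers REORDER
assert T["c"] < T["a"] < T["b"]
```

Continuing this example, inserting the edge b → c afterwards would complete the cycle a → b → c → a, and the recursion reaches a call whose arguments are joined by an existing edge, raising CycleError.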

Algorithm AFM: example
The original slides animated REORDER step by step on six nodes a, b, c, d, u, v after inserting an edge (u,v) that invalidates the topological order. The recursion unfolds as follows (A = B = Ø is the base case, which swaps the two nodes):
- REORDER(u,v): A = {v, a}, B = {c, u}; recurse on the crossing pairs
  - REORDER(c,a): A = B = Ø; swap c and a
  - REORDER(u,a): A = {a, b}, B = {u}
    - REORDER(u,b): A = B = Ø; swap u and b
    - REORDER(u,a): A = B = Ø; swap u and a
  - REORDER(c,v): A = B = Ø; swap c and v
  - REORDER(u,v): A = B = Ø; swap u and v; done!

Data structures
- Store T and T⁻¹ as arrays: O(1) lookup for the topological order and its inverse
- Graph stored as an array of vertices, where each vertex has two adjacency lists (for incoming/outgoing edges)
- Each adjacency list stored as an array of buckets
  - Each bucket contains at most t nodes, for a fixed t
  - The i-th bucket of node u contains all adjacent nodes v with i·t ≤ d(u,v) < (i+1)·t

Data structures
A bucket is any data structure with efficient support for the following operations:
- Insert: insert an element into a given bucket
- Delete: given an element and a bucket, delete the element from the bucket (if found; otherwise, return 0)
- Collect-all: copy all elements from a given bucket to some vector
The analysis assumes a generic bucket data structure and counts the number of bucket operations; later, we will consider different implementations of the data structure and the corresponding running times and space usage.
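As a concrete illustration, here is a minimal sketch of one node's bucketed adjacency list, backed by plain Python sets (roughly the "uniform hashing" variant discussed later). The class name is illustrative, and the half-open boundary i·t ≤ d < (i+1)·t is an assumption (the slide's comparison signs were lost).

```python
class BucketList:
    """One node's adjacency list, split into buckets of width t.

    Bucket i holds the adjacent nodes w whose order distance d satisfies
    i*t <= d < (i+1)*t, where d(u,w) = T(u) - T(w) (slide "Notation").
    A set per bucket gives O(1) expected insert/delete, and collect-all
    linear in the bucket's size.
    """
    def __init__(self, n, t):
        self.t = t
        self.buckets = [set() for _ in range(n // t + 1)]

    def _index(self, d):
        return d // self.t

    def insert(self, w, d):
        self.buckets[self._index(d)].add(w)

    def delete(self, w, d):
        b = self.buckets[self._index(d)]
        if w in b:
            b.discard(w)
            return True
        return False          # per the slide: report failure if not found

    def collect_all(self, i):
        return list(self.buckets[i])
```

For instance, with n = 10 and t = 3, a neighbor at distance d = 4 lands in bucket 1 (since 3 ≤ 4 < 6).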

Correctness
- Theorem 1. Algorithm AFM returns a valid topological order after each edge insertion.
- Lemma 1. Given a DAG G and a valid topological order, if u ⇝ v and u < v, then all subsequent calls to REORDER will maintain u < v.
- Lemma 2. Given a DAG G with v ⇝ y and x ⇝ u, a call of REORDER(u,v) will ensure that x < y.
- Theorem 2. The algorithm detects a cycle iff there is a cycle in the given edge sequence.

Correctness
Theorem 1. Algorithm AFM returns a valid topological order after each edge insertion.
Proof: use Lemmas 1 and 2.
- For a graph with no edges, any ordering is a topological ordering
- Need to show that INSERT(u,v) maintains a correct topological order of G' = G ∪ {(u,v)}:
  - If u < v, this is trivial; otherwise,
  - Show that x < y for all nodes x, y of G' with x ⇝ y. If there was a path x ⇝ y in G, Lemma 1 gives x < y. Otherwise, x ⇝ y was introduced to G' by (u,v), and Lemma 2 gives x < y in G' since there is x ⇝ u → v ⇝ y in G'.

Correctness
Lemma 1. Given a DAG G and a valid topological order, if u ⇝ v and u < v, then all subsequent calls to REORDER will maintain u < v.
Proof: by contradiction.
- Consider the first call of REORDER that leads to u > v. Either this call swapped u with some w where w > v, or it swapped v with some w where w < u. In the first case:
  - The call was REORDER(w,u) with A = Ø
  - However, there exists x ∈ A with u → x ⇝ v (since v lies between u and w), leading to a contradiction
- The second case is symmetric

Correctness
Lemma 2. Given a DAG G with v ⇝ y and x ⇝ u, a call of REORDER(u,v) will ensure that x < y.
Proof: by induction on the recursion depth of REORDER(u,v).
- For leaf nodes, A = B = Ø. If x < y held before the call, Lemma 1 ensures x < y continues to hold; otherwise, x = u and y = v, and swapping gives x < y.
- Assume the lemma holds up to a certain tree level (we show this implies the higher levels). If A ≠ Ø, there is a v' such that v → v' ⇝ y; otherwise v' = v = y. If B ≠ Ø, there is a u' such that x ⇝ u' → u; otherwise u' = u = x. Hence v' ⇝ y and x ⇝ u'.
  - The for loops will call REORDER(u',v'), which ensures x < y by the inductive hypothesis
  - Lemma 1 ensures further calls to REORDER maintain x < y

Correctness
Theorem 2. The algorithm detects a cycle iff there is a cycle in the given edge sequence.
Proof (⇒):
- Within a call to INSERT(u,v), there are paths v ⇝ v' and u' ⇝ u for each recursive call to REORDER(u',v')
  - Trivial for the first call; follows from the definitions of A and B for subsequent calls
- If the algorithm detects a cycle in line 1, then we have v ⇝ v' = u' ⇝ u, and adding u → v completes the cycle

Correctness
Theorem 2. The algorithm detects a cycle iff there is a cycle in the given edge sequence.
Proof (⇐): by induction on the number of nodes in the path v ⇝ u.
- Consider the edge (u,v) of the cycle v ⇝ u → v that is inserted last. Since v ⇝ u before inserting this edge, Theorem 1 states that v < u, so REORDER(u,v) will be called.
- A call of REORDER(u',v') with u' = v' or v' → u' clearly reports a cycle
- Consider a path v → x ⇝ y → u of length k ≥ 2 and the call to REORDER(u,v). Since v → x ⇝ y → u before the call, x ∈ A and y ∈ B, so REORDER(y,x) will be called. The path x ⇝ y has k – 2 nodes, so this call to REORDER will detect the cycle (by the inductive hypothesis).


Running time
Theorem 3. Online topological ordering can be computed using O(n^3.5/t) bucket inserts and deletes, O(n^3/t) bucket collect-all operations collecting O(n^2·t) elements, and O(n^2.5 + n^2·t) operations for sorting.
- Lemma 4. REORDER is called O(n^2) times.
- Lemma 5. The sum of |A| + |B| over all calls of REORDER is O(n^2).
- Lemma 6. Calculating the sorted sets A and B over all calls of REORDER can be done with O(n^3/t) bucket collect-all operations touching a total of O(n^2·t) elements, and O(n^2.5 + n^2·t) operations for sorting these elements.
- Lemma 9. Updating the data structure over all calls of REORDER requires O(n^3.5/t) bucket inserts and deletes.

Running time
Theorem 3. Online topological ordering can be computed using O(n^3.5/t) bucket inserts and deletes, O(n^3/t) bucket collect-all operations collecting O(n^2·t) elements, and O(n^2.5 + n^2·t) operations for sorting.
Proof:
- Use Lemmas 4, 6, and 9. Additionally, show that merging the sets A and B (lines 6–7 in the algorithm) takes O(n^2) time:
  - Merging takes O(|A| + |B|), which is O(n^2) over all calls to REORDER by Lemma 5; finding the vertices in B that exceed the chosen v' takes time proportional to the number of those vertices, which is also the number of recursive calls to REORDER made. Lemma 4 bounds the latter by O(n^2).

Running time
Lemma 4. REORDER is called O(n^2) times.
Proof:
- Consider the first time REORDER(u,v) is called. If A = B = Ø, then u and v are swapped. Otherwise, REORDER(u',v') is called recursively for all v' ∈ {v} ∪ A and u' ∈ B ∪ {u} with v' < u'. The order in which the recursive calls are made, and the fact that REORDER is local (it only touches the affected region), ensure that REORDER(u,v) is not called again except as the last recursive call. In this second call to REORDER(u,v), A = B = Ø:
  - Consider all v' ∈ A and u' ∈ B from the first call of REORDER(u,v). REORDER(u,v') and REORDER(u',v) must have been called by the for loops before the second call to REORDER(u,v). Therefore, u < v' and u' < v for all v' ∈ A and u' ∈ B, so u and v are swapped during the second call. REORDER(u,v) will not be called again because u < v.

Running time
Lemma 9. Updating the data structure over all calls of REORDER requires O(n^3.5/t) bucket inserts and deletes.
Proof: use the LP.
- The data structure requires O(d(u,v)·n/t) bucket inserts and deletes to swap two nodes u and v: the adjacency lists of u, of v, and of every w adjacent to u and/or v must be updated. If d(u,v) ≥ t, rebuild from scratch in O(n) time; otherwise, one can show that at most d(u,v) nodes need to transfer between any pair of consecutive buckets, which yields the O(d(u,v)·n/t) bound.
- Each node pair is swapped at most once (Lemma 7), so summing over all calls of REORDER(u,v) in which u and v are swapped, we need O(Σ d(u,v)·n/t) bucket inserts and deletes. Σ d(u,v) = O(n^2.5) by Lemma 8, and the result follows.

Running time
How do we prove Σ d(u,v) = O(n^2.5)? Use an LP:
- Let T* denote the final topological ordering
- Model some linear constraints on the quantities X(i,j):
  - 0 ≤ X(i,j) ≤ n for all i, j ∈ [1..n]
  - X(i,j) = 0 for all j ≤ i
  - (the remaining constraints and the objective appeared as figures)

Running time
- This yields an LP, and its dual
- A feasible solution to the dual has value O(n^2.5), which bounds the LP's optimum by weak duality
(The LP, its dual, the feasible dual solution, and its value appeared as figures.)

Implementation of the data structure
- A balanced binary tree gives O(1 + log t) insert and delete and O(1 + output size) collect-all
  - Total time is O(n^2·t + n^3.5·log n/t) by Theorem 3. Setting t = n^0.75·(log n)^1/2, we get a total time of O(n^2.75·(log n)^1/2) and O(n^2) space
- An n-bit array gives O(1) insert and delete, and collect-all in O(total output size + total # of deletes)
  - Total time is O(n^2·t + n^3.5/t). Setting t = n^0.75 gives O(n^2.75) time and O(n^2.25) space for the O(n^2/t) buckets
- Uniform hashing is similar to the n-bit array: O(n^2.75) expected time and O(n^2) space
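A quick sanity check of the parameter choice, in plain Python arithmetic (the value of n is arbitrary and illustrative): with O(1)-time buckets the total is O(n^2·t + n^3.5/t), and the two terms balance exactly at t = n^0.75, where each equals n^2.75.

```python
# Balance n^2 * t against n^3.5 / t: equality holds at t = n^0.75,
# and both terms then equal n^2.75.
n = 10 ** 4
t = n ** 0.75
term_updates = n ** 3.5 / t      # bucket inserts and deletes
term_sort = n ** 2 * t           # elements collected and sorted
assert abs(term_updates - term_sort) < 1e-6 * n ** 2.75
assert abs(term_sort - n ** 2.75) < 1e-6 * n ** 2.75
```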

Empirical comparison
Compared against PK, MNR, and AHRSZ on a "hard-case" graph (shown in a figure).


Comparison to prior work
- No insight provided by Ajwani et al.
- Pearce and Kelly compare PK, AHRSZ, and MNR using incremental complexity analysis
  - In dynamic problems, typically no fixed input captures the minimal amount of work to be performed
  - Instead, use complexity analysis based on input size: measure work in terms of a parameter δ representing the (minimal) change in input and output required
  - For the DTO problem, the input is the current DAG and topological order; the output after an edge insertion is the updated DAG and (any) valid ordering
  - An algorithm is bounded if its time complexity can be expressed only in terms of δ; otherwise, it is unbounded

Comparison to prior work
Runtime comparisons:
- AHRSZ is bounded by K_min, the minimal cover of vertices that are incorrectly ordered after an edge insertion, plus adjacent edges
- PK is bounded by δ_uv, the set of vertices in the affected region that reach u or are reachable from v, plus adjacent edges; PK is worst-case optimal w.r.t. the number of vertices reordered
- MNR takes Θ(|δ_uv^F| + |AR_uv|) in the incremental complexity model, where AR_uv is the set of vertices in the affected region
- K_min ⊆ δ_uv ⊆ AR_uv, so AHRSZ is strictly better than PK, but PK and MNR are more difficult to compare (the former is expected to outperform the latter on sparse graphs)
- KB analyzes a variant of AHRSZ
- AFM appears to improve the bound on the time to insert m edges for AHRSZ
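To make δ_uv concrete, here is a small illustrative Python sketch (the adjacency-dict representation and all names are assumptions, not PK's actual implementation): collect the vertices of the affected region that are reachable from v or reach u, using two traversals restricted to the region.

```python
from collections import deque

def delta_uv(succ, pred, T, u, v):
    """Sketch of PK's parameter delta_uv: among the vertices of the affected
    region (positions between T[v] and T[u], inclusive), keep those reachable
    from v or reaching u, via two region-restricted BFS traversals."""
    lo, hi = T[v], T[u]

    def in_region(w):
        return lo <= T[w] <= hi

    def restricted_bfs(start, adj):
        seen, queue = {start}, deque([start])
        while queue:
            w = queue.popleft()
            for x in adj.get(w, ()):
                if x not in seen and in_region(x):
                    seen.add(x)
                    queue.append(x)
        return seen

    return restricted_bfs(v, succ) | restricted_bfs(u, pred)
```

For example, with order v < a < b < u < z, edges v → a and b → u, and the offending edge (u,v) just inserted, δ_uv = {v, a, b, u}; z lies outside the affected region.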

Comparison to prior work
Intuitive comparison:
- AHRSZ performs simultaneous forward and backward searches from u and v until the two frontiers meet; nodes with incorrect priorities are placed in a set and corrected using DFSs within this set
- MNR does a similar DFS to discover incorrect priorities, but visits all nodes in the affected region during reassignment
- PK is similar to MNR, but reassigns priorities using only the positions previously held by members of δ_uv
- KB and AFM appear to be improvements in the runtime analysis of variants of AHRSZ

Comparison to prior work
Practical implications:
- PK and MNR use simpler data structures (arrays) than AHRSZ (priority queues and the Dietz and Sleator ordered-list structure)
- PK and MNR use simpler traversal algorithms than AHRSZ
- PK visits fewer nodes during reassignment
Experiments run by Pearce and Kelly:
- MNR performs poorly on sparse graphs, but is the most efficient on dense graphs
- PK performs well on very sparse and very dense graphs, but not as well in between
- AHRSZ is relatively poor on sparse graphs, but has constant performance otherwise (competitive with the others)

Open problems
- The only lower bound for the problem is Ω(n log n) for inserting n – 1 edges, by Ramalingam and Reps; are there better lower bounds? Reduce the (wide) gap between the best known lower and upper bounds
- Answer: does the definition of δ for DTO need to include adjacent edges? Does the bounded complexity model capture the power of amortization?
- Include edge deletions in the analysis of AFM or any of the other algorithms
- Perform a theoretical and empirical analysis of a parallel version of AFM or any of the other algorithms

Breaking news
Kavitha and Mathew improve the upper bound to O(min{n^2.5, (m + n·log n)·m^0.5}):
- There doesn't appear to be anything wildly unique about their algorithm
- They do a better job of keeping the sizes of the sets δ_uv^F and δ_uv^B close to each other

Thank you
