
1 Algorithms for Computing Canonical Labeling of Graphs and Sub-Graph Isomorphism

2 Agenda
- Brief overview of the complexity of Graph Isomorphism and Sub-Graph Isomorphism.
- The Canonical Labeling problem.
- McKay's algorithm for finding canonical labels and generators of Aut(G).
- Ullmann's algorithm for Sub-Graph Isomorphism.
- Conclusion.

3 Complexity of Graph Isomorphism and Sub-Graph Isomorphism
- Sub-Graph Isomorphism is an NP-complete problem.
- Graph Isomorphism (GI) is in NP, but it is still an open problem whether GI is in P or NP-complete.
- Babai gave an O(n^(4m+c)) deterministic algorithm, where m is the multiplicity of the eigenvalues (spectrum) of the adjacency matrix.
- Equivalence of spectra is a necessary but not sufficient condition for isomorphism.

4 Automorphism Group of a Graph
- Aut(G) is a subgroup of the symmetric group S_n (the set of all permutations of the vertices).
- Aut(G) = { g | g ∈ S_n and G^g = G }, where G^g is the graph relabeled by the permutation g.
- In simple terms, Aut(G) is the set of all permutations under which the adjacency matrix of G remains the same (see the sketch below).
- A graph G and its complement G' have the same automorphism group.
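The defining condition G^g = G can be checked directly. A minimal sketch, assuming the graph is stored as an adjacency matrix and a permutation g is given as a list with g[i] the image of vertex i (this representation is chosen for illustration):

```python
from itertools import permutations

def is_automorphism(adj, g):
    """g is in Aut(G) iff relabeling by g leaves the adjacency matrix unchanged."""
    n = len(adj)
    return all(adj[g[i]][g[j]] == adj[i][j] for i in range(n) for j in range(n))

def automorphism_group(adj):
    """Naive Aut(G): test all n! permutations (feasible only for tiny graphs)."""
    n = len(adj)
    return [g for g in permutations(range(n)) if is_automorphism(adj, g)]
```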

5 Example of Automorphism
[Figure: a 4-vertex graph, its adjacency matrix, and the matrix rewritten under the vertex orderings 1 2 3 4, 2 4 3 1 and 3 4 1 2.]
- Rewrite the adjacency matrix for each permutation.
- If a permutation g is in Aut(G), then the adjacency matrix remains the same when we apply g to obtain G^g.
- Here Aut(G) = { (1,2,4), (3) }, i.e. in cycle notation the permutation sending 1 to 2, 2 to 4, 4 to 1 and fixing 3 is an automorphism.

6 Canonical Labeling Problem
- Let G*(V) be the set of labeled graphs with vertex set V.
- For G ∈ G*(V), Canon: G*(V) → G*(V) is a map such that:
- Canon(G) is isomorphic to G.
- Canon(G^g) = Canon(G) for every g ∈ S_n.
- We are interested in finding this mapping function; a direct use of such a labeling is that isomorphism testing becomes trivial: two graphs are isomorphic iff their canonical forms are identical (see the sketch below).
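To make the two defining properties concrete, here is a brute-force sketch of one valid (but exponential) canonical form, the lexicographically smallest relabeled adjacency matrix; this choice of Canon is an illustrative assumption, not McKay's construction:

```python
from itertools import permutations

def relabel(adj, g):
    """Adjacency matrix of G^g, where g[i] is the new label of vertex i."""
    n = len(adj)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            new[g[i]][g[j]] = adj[i][j]
    return new

def brute_force_canon(adj):
    """Smallest relabeled adjacency matrix over all n! permutations: it is
    isomorphic to G and identical for G and every G^g."""
    return min(relabel(adj, g) for g in permutations(range(len(adj))))

def isomorphic(adj1, adj2):
    """Isomorphism test via canonical forms."""
    return brute_force_canon(adj1) == brute_force_canon(adj2)
```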

7 Equitable Partitions
- V = {1, 2, ..., n}; a partition of V is a set of disjoint subsets of V whose union is V.
- p = {C_1, C_2, ..., C_k}, C_1 ∪ C_2 ∪ ... ∪ C_k = V.
- d(v_i, C_j) = number of vertices in C_j adjacent to v_i.
- Partition p is equitable if for all v_i, v_j in the same cell C_m and every cell C_l: d(v_i, C_l) = d(v_j, C_l) (checked in the sketch below).
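A direct check of this definition, assuming the partition is given as a list of cells (lists of vertex indices) and the graph as an adjacency matrix:

```python
def degree_into(adj, v, cell):
    """d(v, C): number of vertices in cell C adjacent to v."""
    return sum(adj[v][u] for u in cell)

def is_equitable(adj, partition):
    """Equitable: within every cell, all vertices have the same number of
    neighbours in every cell (including the cell itself)."""
    return all(
        len({degree_into(adj, v, other) for v in cell}) <= 1
        for cell in partition
        for other in partition
    )
```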

8 Ordering among the Partitions
- p_1 ≤ p_2: p_1 is finer than p_2 (p_2 is coarser than p_1).
- Let P(V) be the set of all partitions of the vertex set V; the operator '≤' creates a partial order among the partitions (see the sketch below).
- {(1),(2),(3),...,(n)} ≤ ... ≤ p_i ≤ p_m ≤ p_n ≤ ... ≤ {(1,2,3,...,n)}.
- p_i is the coarsest partition finer than p_m; p_n is the finest partition coarser than p_m.
- We can add the adjective "equitable" to finer and coarser if the equitable property is satisfied.
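A small sketch of the relation itself, ignoring the order of cells and using the same list-of-cells representation assumed above:

```python
def is_finer(p1, p2):
    """p1 <= p2: every cell of p1 is contained in some cell of p2."""
    cells2 = [set(c) for c in p2]
    return all(any(set(c1) <= c2 for c2 in cells2) for c1 in p1)

# The discrete partition is finer than any partition of the same vertex set:
print(is_finer([[0], [1], [2]], [[0, 2], [1]]))  # True
print(is_finer([[0, 1], [2]], [[0, 2], [1]]))    # False
```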

9 Fix of a Partition by a Permutation
- Let g ∈ S_n (the symmetric group) and v_i ∈ V; v_i^g is the image of v_i under the permutation g.
- Let W be a subset of V; then W^g = { v_i^g | v_i ∈ W }.
- If p = {C_1, C_2, ..., C_k}, then p^g = {C_1^g, C_2^g, ...}.
- If p^g = p, then g is said to fix p.
- Example: p = {(1,2,3),(4),(5,6)} is fixed by the permutation g with image vector (3,2,1,4,6,5) (checked in the sketch below).
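A direct sketch of the definition, using 0-based indices so that the slide's example becomes p = {(0,1,2),(3),(4,5)} and g = (2,1,0,3,5,4):

```python
def fixes(partition, g):
    """g fixes p iff applying g to every cell reproduces the same set of cells."""
    cells = {frozenset(c) for c in partition}
    image = {frozenset(g[v] for v in c) for c in partition}
    return cells == image

# 0-based version of the slide's example:
print(fixes([[0, 1, 2], [3], [4, 5]], [2, 1, 0, 3, 5, 4]))  # True
```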

10 Key Ideas behind the Canonical Labeling Algorithm
- Quick recall: we are interested in finding Aut(G) and Canon(G).
- Naive algorithm: generate all O(n!) permutations, find Aut(G), and from it compute Canon(G).
- Observation: every element of Aut(G) corresponds to some labeling given by a finest (discrete) partition of the vertex set V.
- Also, since all partitions in P(V) are ordered by '≤', we can use this ordering to prune the discrete partitions that are useless.

11 Basic Structure of the Algorithm
- Start with a coarsest partition p_v (the order of the vertices in the input graph).
- Generate an equitable partition Eq(p_v) from p_v such that ... p_j ≤ Eq(p_v) ≤ p_v (Eq(p_v) is the coarsest equitable partition finer than p_v).
- This step is called refinement, R(p).
- Keep refining (individualizing a vertex and refining again) until the partition becomes finest (discrete); call these terminal partitions.

12 Backtrack Depth-First Structure
[Figure: the backtrack tree, rooted at p_v, refined through R(p_v), R(...R(R(p_v))...), down to discrete partitions at the leaves; branches are refined and pruned.]

13 Refinement Procedure R(p,a)
- R(p,a) is a function such that:
- INPUT: any partition p and a set a of cells of p (here a = {W}, a single cell).
- OUTPUT: the coarsest equitable partition that is finer than p.
- p = {C_1, C_2, ..., C_k}; let p' = p.
- For each cell C_i of p:
- partition C_i according to d(v, W); call the resulting cells B.
- p' = (p' - {C_i}) ∪ B.
- (A runnable version of this single pass is sketched below.)
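A runnable sketch of this single-pass refinement, under the same adjacency-matrix and list-of-cells assumptions as above; the full procedure would repeat such passes with new splitter cells until the partition is equitable:

```python
def degree_into(adj, v, W):
    """d(v, W): number of neighbours of v inside the cell W."""
    return sum(adj[v][u] for u in W)

def refine_once(adj, partition, W):
    """One pass of R(p, a) with splitter a = {W}: split every cell of p by d(v, W)."""
    refined = []
    for cell in partition:
        buckets = {}
        for v in cell:
            buckets.setdefault(degree_into(adj, v, W), []).append(v)
        # Replace the cell by its sub-cells B, in a fixed label-independent order.
        for d in sorted(buckets, reverse=True):
            refined.append(buckets[d])
    return refined

# Example graph assumed to match slide 14: a triangle 0-1-2 with pendant
# edges 0-3, 1-4, 2-5 (this edge set is inferred from the degrees shown there).
adj = [[0] * 6 for _ in range(6)]
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (1, 4), (2, 5)]:
    adj[i][j] = adj[j][i] = 1

p = [[0, 1, 2, 3, 4, 5]]
print(refine_once(adj, p, p[0]))                        # [[0, 1, 2], [3, 4, 5]]
print(refine_once(adj, [[0, 1, 2], [3], [4, 5]], [3]))  # [[0], [1, 2], [3], [4, 5]]
```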

14 Example
[Figure: a 6-vertex graph on vertices 0-5.]
- p = {(0,1,2,3,4,5)}, a = {(0,1,2,3,4,5)}: d(0,a)=3, d(1,a)=3, d(2,a)=3, d(3,a)=1, d(4,a)=1, d(5,a)=1. Two buckets of values, 3 and 1, so B = {(0,1,2),(3,4,5)} = R(p,a).
- Take p = {(0,1,2),(3),(4,5)} (vertex 3 individualized); this is not equitable, so refine with a = {(3)}.
- Loop 1: C_1 = (0,1,2), d(0,a)=1, d(1,a)=0, d(2,a)=0, so B = {(0),(1,2)}.
- Loop 2: C_2 = (3), d(3,a)=0, so B = {(3)}.
- Loop 3: C_3 = (4,5), d(4,a)=0, d(5,a)=0, so B = {(4,5)}.
- R(p,a) = {(0),(1,2),(3),(4,5)}.

15 Example
[Figure: backtrack tree for the same graph, with nodes {(3,4,5)(0,1,2)}, {(3)(4,5)(1,2)(0)}, {(3)(4)(5)(2)(1)(0)}, {(3)(5)(4)(1)(2)(0)}, {(4)(3,5)(0,2)(1)} and {(4)(3)(5)(2)(0)(1)}; the branch labels 3, 4, 5 indicate the vertex individualized at each step.]

16 Collect All the Leaf Nodes
- Let X be the collection of all the leaf nodes (discrete partitions) in the backtrack tree.
- Define an operator '~' on t_1, t_2 ∈ X: t_1 ~ t_2 iff G^t_1 = G^t_2, i.e. relabel the vertices in the order given by t_1 and by t_2 and check whether the adjacency matrices are the same (sketched below).
- Note that '~' partitions X into equivalence classes; we need only one class: pick an element t_c and define Aut(G) = { g | t = t_c^g, t ∈ X, t ~ t_c }.
- In fact we can stick to a single equivalence class and eliminate all the others (that is where the pruning comes into the picture).
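A sketch of the '~' test, assuming a discrete partition t is written as the list of its singleton cells, so t[k] is the vertex that receives label k:

```python
def graph_of_labeling(adj, t):
    """Adjacency matrix of G^t, where t[k] is the vertex given label k."""
    n = len(adj)
    return [[adj[t[i]][t[j]] for j in range(n)] for i in range(n)]

def equivalent(adj, t1, t2):
    """t1 ~ t2 iff relabeling by t1 and by t2 yields the same adjacency matrix."""
    return graph_of_labeling(adj, t1) == graph_of_labeling(adj, t2)

def automorphism_from(t1, t2):
    """For t1 ~ t2, the permutation mapping t1's labeling onto t2's is in Aut(G)."""
    g = [None] * len(t1)
    for k in range(len(t1)):
        g[t1[k]] = t2[k]
    return g
```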

17 The Complete Tree Is Never Generated!
- Since |Aut(G)| ≤ |X| and |Aut(G)| can be as large as O(n!), generating all of X is a big overkill and in general not even feasible.
- If we observe carefully, we only need to generate ONE equivalence class.
- How do we make sure that backtracking leads only to leaf nodes from a single equivalence class?
- We need some property that is common to all the leaf nodes of a single equivalence class along all the internal nodes.

18 Property of Terminal Nodes Used to Prune
- Objective: we want to prune all the internal nodes (from different equivalence classes) in the backtrack tree.
- Step 1: first find two terminal nodes t_1, t_2 ∈ X such that G^t_1 = G^t_2; then it is clear that t_1 ~ t_2 (and for some g ∈ S_n, t_1^g = t_2).
- From the above it is clear that we need some function F that does not change when g is applied to p: F(G^g, p^g) = F(G, p).

19 Property Used to Prune
- If t_1 = p_k ≤ p_{k-1} ≤ ... ≤ p_2 ≤ p_1, define
- F'(t_1) = (F(G, p_k), F(G, p_{k-1}), ..., F(G, p_1)) and
- F*(t_1) = (F(G, p_1), F(G, p_2), ..., F(G, p_k)), the sequence of F-values along the path from the root down to t_1.
- We can generate F*(t_1) by finding two terminal nodes t_1, t_2 ∈ X such that G^t_1 = G^t_2.
- Once F*(t_1) is generated, prune any branch whose F-values stop matching F*(t_1) somewhere along its path (one possible choice of F is sketched below).
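One simple label-invariant choice for F, given here only as an illustration (nauty's actual indicator function is more refined): fingerprint each cell by its size and the sorted multiset of its vertices' neighbour counts into every cell. Relabeling G and p by the same g leaves this value unchanged, so F(G^g, p^g) = F(G, p).

```python
def invariant(adj, partition):
    """A label-invariant F(G, p): per cell, (cell size, sorted neighbour counts
    into every cell). Unchanged under simultaneous relabeling of G and p."""
    fingerprint = []
    for cell in partition:
        counts = tuple(
            tuple(sorted(sum(adj[v][u] for u in other) for v in cell))
            for other in partition
        )
        fingerprint.append((len(cell), counts))
    return tuple(fingerprint)
```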

20 Pruning Illustrated
[Figure: backtrack tree with root p_1, child p_2 = R(p_v), and further refinements R(...R(R(p_v))...) down to terminal nodes t_1 and t_2. F*(t_1) = (F(G, p_1), F(G, p_2), ..., F(G, p_k)) is computed along the first path; other branches are pruned because F*(t_1) stops matching somewhere along them.]

21 Ullmann's Sub-Graph Isomorphism Algorithm
- Problem: given graphs G_a and G_b, decide whether G_a is isomorphic to some subgraph of G_b.
- Let M be a matching matrix of size |a| x |b|, with M_ij = 1 if degree(i, G_a) ≤ degree(j, G_b) and 0 otherwise.
- A systematic enumeration algorithm (depth-first, backtracking) refines M so that each row contains exactly one 1 and each column at most one 1.
- Once the depth of the enumeration reaches |a|, check whether the adjacency matrix of G_a matches the adjacency of the subgraph of G_b selected by M (a sketch of this enumeration follows).
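A compact sketch of the basic enumeration; it keeps M as per-row candidate sets instead of a 0/1 matrix, applies no Ullmann refinement yet, and its names and representation are illustrative assumptions:

```python
def subgraph_isomorphism(adj_a, adj_b):
    """Return a mapping (vertex of G_a -> vertex of G_b) embedding G_a as a
    subgraph of G_b, or None if no embedding exists."""
    na, nb = len(adj_a), len(adj_b)
    deg_a = [sum(row) for row in adj_a]
    deg_b = [sum(row) for row in adj_b]
    # Row i of M as a candidate set: j allowed iff degree(i, G_a) <= degree(j, G_b).
    candidates = [{j for j in range(nb) if deg_a[i] <= deg_b[j]} for i in range(na)]
    mapping = [None] * na

    def backtrack(depth):
        if depth == na:
            return True
        for j in candidates[depth]:
            if j in mapping[:depth]:
                continue  # at most one 1 per column
            # Every already-mapped neighbour of `depth` must land on a neighbour of j.
            if all(adj_a[depth][i] == 0 or adj_b[mapping[i]][j] == 1
                   for i in range(depth)):
                mapping[depth] = j
                if backtrack(depth + 1):
                    return True
                mapping[depth] = None
        return False

    return mapping if backtrack(0) else None
```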

22 Ullmann's Observation for Pruning
- Let M_d be the matching matrix generated at some depth d ≤ |a|.
- The bottom |a| - d rows of M_d and M are the same; the brute-force algorithm would just continue.
- Ullmann made an observation that uses extra information from the top d rows, which are already fixed.
- OBSERVATION: if v_i ∈ V_a and v_j ∈ V_b are mapped in one of the top d rows of M_d, then any vertex adjacent to v_i in G_a can only be mapped to a vertex adjacent to v_j in G_b.
- With this observation we can reduce the number of 1's in the bottom |a| - d rows of M_d, thus reducing the search space (see the sketch below).
- Ullmann showed experimentally the effectiveness of this pruning approach.
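A sketch of this observation applied to the candidate-set representation from the previous sketch; Ullmann's full refinement procedure iterates a stronger condition over all candidate pairs, while this version only uses the fixed top d rows:

```python
def prune_candidates(adj_a, adj_b, candidates, mapping, depth):
    """Drop 1's from the bottom rows: if v_i (already mapped to v_j) is adjacent
    to an unmapped vertex u in G_a, then u may only map to neighbours of v_j."""
    pruned = [set(c) for c in candidates]
    for i in range(depth):                      # fixed top rows
        j = mapping[i]
        for u in range(depth, len(adj_a)):      # remaining rows
            if adj_a[i][u] == 1:
                pruned[u] = {w for w in pruned[u] if adj_b[j][w] == 1}
    return pruned
```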

23 References
- D. G. Corneil and C. C. Gotlieb. An efficient algorithm for graph isomorphism. Journal of the ACM, 17(1):51-64, January 1970.
- D. Cvetković, P. Rowlinson, and S. Simić. Eigenspaces of Graphs. Cambridge University Press, 1997.
- C. D. Godsil and B. D. McKay. Constructing cospectral graphs. Aequationes Mathematicae, pages 257-268, 1982.
- F. Harary. The determinant of the adjacency matrix of a graph. SIAM Review, 4:202-210, 1962.
- B. D. McKay. Computing automorphisms and canonical labelling of graphs. In Combinatorial Mathematics, Lecture Notes in Mathematics 686, pages 223-232, 1978.
- B. D. McKay. Practical graph isomorphism. Congressus Numerantium, 30:45-87, 1981.

24 References
- A. J. Schwenk. Almost all trees are cospectral. In New Directions in the Theory of Graphs (Proc. Third Ann Arbor Conference), pages 275-307, 1973.
- J. R. Ullmann. An algorithm for subgraph isomorphism. Journal of the ACM, 23(1):31-42, January 1976.

