MIS 644 Social Network Analysis 2017/2018 Spring
Chapter 5 Graph Partitioning and Community Detection
2
Outline Introduction Graph Partitioning Community Detection
Simple Modularity Maximization Spectral Modularity
3
Introduction Graph partitioning and community detection
division of the vertices of a network into groups, clusters, or communities according to the pattern of edges: the groups are tightly formed, with many edges inside each group and few edges between groups
4
Network of coauthorships in a university department
5
vertices: scientists; edges: coauthorship (having written a paper together); scientists in the same group tend to have similar interests; the ability to find groups or clusters reveals the structure and organization of networks
6
Partitioning or Community detection:
Graph Partitioning: dividing the vertices of a network into a given number of non-overlapping groups of given sizes so that the number of edges between groups is minimized; the number and sizes of the groups are fixed. Arises in a variety of circumstances: computer science, pure and applied mathematics, physics, and the study of networks; e.g., numerical solution of network processes on a parallel computer
8
Partitioning or Community detection:
the number and sizes of groups are not specified; they are determined by the network itself. Goal: to find the natural fault lines along which the network separates. Use: as a tool for analyzing and understanding network data; e.g., clusters of nodes in a web graph correspond to groups of related web pages
9
In summary, the differences between graph partitioning and community detection:
number and sizes of the groups into which the network is divided: specified for graph partitioning, unspecified for community detection. Goals: graph partitioning – manageable pieces, e.g., for numerical processing; community detection – understanding the structure of a network, revealing large-scale patterns not otherwise visible
10
Graph Partitioning The Kernighan-Lin algorithm Spectral partitioning
Why is partitioning hard? Graph bisection: division into two parts; repeated bisection gives partitioning into an arbitrary number of parts. Exhaustive search, looking at all possible divisions into two parts, has a costly computation time
11
The number of ways of dividing a network of n vertices into two parts of n1 and n2 vertices:
n!/(n1! n2!); approximating with Stirling's formula, n! ≈ √(2πn) (n/e)^n, and using the fact that n = n1 + n2, for two parts of equal size this is of order 2^n/√n, so the time to look through all divisions grows roughly exponentially with n
12
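To get a feel for these numbers, the count of bisections can be evaluated directly. A small illustrative script (the function name is ours):

```python
from math import comb

# Number of ways to divide n vertices into two labeled groups of
# sizes n1 and n2 = n - n1: n!/(n1! n2!) = C(n, n1).
def bisections(n: int, n1: int) -> int:
    return comb(n, n1)

# For equal-sized groups the count grows roughly as 2^n / sqrt(n),
# as predicted by Stirling's approximation.
for n in (10, 20, 30):
    print(n, bisections(n, n // 2))
```

Already at n = 30 there are over 150 million divisions to check, which is why exhaustive search is hopeless.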
Partition of a network into two groups of equal sizes.
13
A clever algorithm may run quickly but fail to find the optimal solution, or it may find the optimal solution in impractical time. Algorithms that may fail to find the very best division but find a pretty good one, good enough, give approximate but acceptable solutions: heuristic algorithms (heuristics)
14
The Kernighan-Lin Algorithm
given n, n1, and n2, divide the vertices into two groups arbitrarily (randomly); for each pair i, j of vertices, with i in one group and j in the other, calculate how much the cut size would change if i and j were interchanged; find the pair i, j that reduces the cut size the most or, failing that, increases it by the smallest amount; swap that pair of vertices; the process is repeated, with the restriction that each vertex can be moved only once
15
Figure 11.2 of N-N The Kernighan-Lin algorithm.
(a) The Kernighan-Lin algorithm starts with any division of the vertices of a network into two groups (shaded) and then searches for pairs of vertices, such as the pair highlighted here, whose interchange would reduce the cut size between the groups. (b) The same network after interchange of the two vertices.
17
the algorithm proceeds by swapping on each step
the pair that most decreases, or least increases, the number of edges between the groups, until no pairs remain to swap; when all swaps are completed, go back over every state the network passed through and choose the state with the smallest cut size
18
Finally this entire process is performed repeatedly
starting with the best division found in the last round, until no improvement in the cut size occurs; return the division from the last round as the best division. The first round starts from a random initial division; repeat the entire algorithm many times and choose the best (smallest-cut) division over all runs
19
The Algorithm: start with a random partition into groups of sizes n1 and n2; repeat rounds
until no improvement in cut size. A round: start with the best division from the previous round; repeat min(n1, n2) times: perform the best swap. Best swap: for all pairs i, j not yet swapped, try the swap and trace the change in cut size
20
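One round of the algorithm can be sketched on a toy graph (dict-of-sets adjacency). This version recomputes the cut from scratch for every trial swap rather than using the O(1) bookkeeping discussed later, so it is illustrative, not efficient; the helper names are ours:

```python
from itertools import combinations

def cut_size(adj, group):
    # Count edges whose endpoints lie in different groups.
    return sum(1 for u, v in combinations(adj, 2)
               if v in adj[u] and group[u] != group[v])

def kl_round(adj, group):
    # One round: repeatedly swap the unmoved cross-group pair that
    # most reduces (or least increases) the cut, marking swapped
    # vertices, then return the best state seen during the round.
    states = [(cut_size(adj, group), dict(group))]
    moved = set()
    for _ in range(len(group) // 2):
        best = None
        for i, j in combinations(group, 2):
            if i in moved or j in moved or group[i] == group[j]:
                continue
            group[i], group[j] = group[j], group[i]   # trial swap
            c = cut_size(adj, group)
            group[i], group[j] = group[j], group[i]   # undo
            if best is None or c < best[0]:
                best = (c, i, j)
        if best is None:
            break
        c, i, j = best
        group[i], group[j] = group[j], group[i]
        moved.update((i, j))
        states.append((c, dict(group)))
    return min(states, key=lambda s: s[0])

# Two triangles joined by a single edge; start from a poor division.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
cut, division = kl_round(adj, {0: 0, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1})
print(cut, division)   # best cut found is 1 (only the bridge edge is cut)
```

The round swaps vertices 2 and 3 first, reaching the natural triangle/triangle division, and the final "go back over all states" step recovers it.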
Graph partitioning applied to a small mesh network
Graph partitioning applied to a small mesh network. (a) A mesh network of 547 vertices of the kind commonly used in finite element analysis (b) The edges removed indicate the best division of the network into parts of 273 and 274 vertices found by the Kernighan-Lin algorithm (c) The best division found by spectral partitioning
21
The Kernighan-Lin algorithm ends up with a cut size of 40 edges
22
Disadvantage: quite slow. The number of swaps in one round is
the smaller of the sizes of the two groups, between 0 and n/2 – O(n). For each swap we examine all pairs: (n/2) × (n/2) = n²/4 – O(n²). The reduction in cut size for swapping i and j is (k_i^other − k_i^same) + (k_j^other − k_j^same) − 2A_ij, which requires running over all neighbors of i and j – on average O(m/n) of them
23
For each round: O(n × n² × m/n) = O(n²m); for sparse networks, m ∝ n, this is O(n³); for dense networks, m ∝ n², it is O(n⁴). How many rounds are needed is not known in advance. Improvement: store, for every vertex, the numbers of its neighbors in each group and update them only at each swap; each change in cut size is then calculated in O(1), giving a running time of O(n³) for both sparse and dense graphs
24
More than two pieces: once we can divide into two pieces,
we can divide into more than two by repeating the process. E.g., into three: first divide into two parts of sizes n1 = n/3 and n2 = 2n/3, then divide the larger part into two equal parts
25
Spectral Partitioning
n: number of vertices, m: number of edges; divide into group 1 and group 2; cut size R: the number of edges between the two groups
26
Write s_i = +1 if vertex i is in group 1 and s_i = −1 if it is in group 2. Using Σ_j A_ij = k_i, the cut size becomes R = (1/4) Σ_ij L_ij s_i s_j, where L_ij = k_i δ_ij − A_ij is the ij-th element of the graph Laplacian matrix; in matrix notation, R = (1/4) sᵀL s
27
a hard problem: s_i is restricted to ±1. Relaxation method: allow s_i to take any real value, subject to a set of constraints, and find the value that minimizes R. First constraint: the length of the vector s stays √n, i.e., Σ_i s_i² = n
28
The relaxation of the constraint allows s to point to any position on a hypersphere circumscribing
the original hypercube, rather than just the corners of the hypercube
29
second constraint: the numbers of +1 and −1 elements equal the group sizes n1 and n2; in vector form, 1ᵀs = n1 − n2, where 1 is the vector (1, 1, 1, …). The problem: minimize the cut size R subject to these two constraints
30
Enforcing the constraints with Lagrange multipliers λ and μ and taking derivatives gives, in matrix notation, Ls = λs + μ1. Note that 1 is an eigenvector of the Laplacian with eigenvalue zero, L·1 = 0; multiplying on the left by 1ᵀ gives λ(n1 − n2) + μn = 0, which fixes μ = −λ(n1 − n2)/n
31
defining a new vector x = s + (μ/λ)1, we have Lx = Ls = λs + μ1 = λx, so x is an eigenvector of the Laplacian with eigenvalue λ; multiplying by 1ᵀ and using μ = −λ(n1 − n2)/n gives 1ᵀx = 0, i.e., x is orthogonal to 1
32
33
x should be the eigenvector with the smallest allowed eigenvalue:
the zero eigenvalue, with eigenvector (1, 1, 1, …), is not allowed, since 1ᵀx = 0 requires x to be orthogonal to this lowest eigenvector; so choose x = v2, the eigenvector with the second-lowest eigenvalue λ2. Finally, we recover the corresponding value of s as s_i = x_i + (n1 − n2)/n (Eq. (11.30) of N-N)
34
but s must have ±1 elements, n1 of them +1 and n2 of them −1; so we choose the s that is as close as possible to this ideal relaxed value, subject to its constraints,
35
making sᵀx as large as possible: s_i = +1 for the vertices with the largest values of x_i and s_i = −1 for the remainder. In terms of the eigenvector v2: calculate v2, which has n elements, one for each vertex of the network, and place the n1 vertices with the most positive elements in group 1 and the rest in group 2
36
Final algorithm:
1. Calculate the eigenvector v2 corresponding to the second smallest eigenvalue λ2 of the graph Laplacian.
2. Sort the elements of the eigenvector in order from largest to smallest.
3. Put the vertices corresponding to the n1 largest elements in group 1, the rest in group 2, and calculate the cut size.
4. Then put the vertices corresponding to the n1 smallest elements in group 1, the rest in group 2, and recalculate the cut size.
5. Between these two divisions of the network, choose the one that gives the smaller cut size.
37
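The five steps translate almost line for line into code. A sketch using NumPy's symmetric eigensolver on the same toy graph as before (the function name and graph are ours, not from the slides):

```python
import numpy as np

def spectral_bisect(A, n1):
    # Laplacian L = D - A; v2 is the eigenvector belonging to the
    # second-smallest eigenvalue (the Fiedler vector).
    k = A.sum(axis=1)
    L = np.diag(k) - A
    _, vecs = np.linalg.eigh(L)          # eigh: L is symmetric
    v2 = vecs[:, 1]
    order = np.argsort(-v2)              # vertices, most positive v2 first
    best = None
    # Steps 3-5: try the n1 largest elements, then the n1 smallest,
    # and keep whichever division gives the smaller cut.
    for group1 in (order[:n1], order[::-1][:n1]):
        s = -np.ones(len(k))
        s[group1] = 1
        cut = 0.25 * s @ L @ s           # cut size R = (1/4) s^T L s
        if best is None or cut < best[0]:
            best = (cut, sorted(int(v) for v in group1))
    return best

# Two triangles joined by one edge: the bisection should cut only
# the bridge, giving cut size 1.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[u, v] = A[v, u] = 1
print(spectral_bisect(A, 3))
```

On this graph the Fiedler vector takes one sign on each triangle, so the sign split alone already recovers the best bisection.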
In Fig. 11.3c the spectral method finds a cut size of 46 edges, against 40 for Kernighan-Lin. Spectral partitioning tends to find divisions of a network with the right general shape, but perhaps not quite as good as those found by other methods
38
Advantage: speed. The calculation of the eigenvector v2 takes O(mn), or O(n²) on a sparse network with m ∝ n. This is one factor of n better than the O(n³) of the Kernighan-Lin algorithm, making the method feasible for much larger networks: hundreds of thousands of vertices, where the Kernighan-Lin algorithm is restricted to networks of a few thousand vertices at most
39
Community Detection: the search for naturally occurring groups,
regardless of their number and size; a tool for discovering and understanding the structure of large-scale networks. Separate the network into groups of vertices with few connections between them; the number and sizes of the groups are not fixed. Simplest variant, an analog of the graph bisection problem: divide into two non-overlapping groups without any constraint on their sizes
40
find the division with minimum cut size,
without any constraint on the sizes of the groups. This is not a useful optimum: putting all vertices in one group gives cut size zero. Ratio cut partitioning instead minimizes R/(n1 n2); the denominator is largest when n1 = n2 = n/2, so the measure is biased towards equally sized groups, but there is no principled rationale behind its definition
41
a good measure: fewer such edges than expected;
compare the number of edges between groups with the number expected at random. Given the total number of edges, the two approaches (few between-group edges, many within-group edges) are equivalent. Related to assortative mixing: vertices with similar characteristics tend to be connected. The resulting measure is the modularity; look for divisions with high modularity scores
42
Simple Modularity Maximization
An analog of the Kernighan-Lin algorithm that divides the network into two communities: starting from an initial division into equal-sized groups, for each vertex calculate how much the modularity would change if that vertex were moved to the other group; choose the vertex whose movement most increases, or least decreases, the modularity; repeat the process, with the restriction that once a vertex has been moved it cannot be selected again
43
when all vertices have been moved exactly once,
go back over all the states and select the one with maximum modularity; use that state as the starting point of the next round; repeat rounds until the modularity no longer improves
44
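The vertex-moving heuristic can be sketched directly from the definition of modularity. As with the Kernighan-Lin sketch earlier, this version recomputes Q from scratch at every trial move instead of using incremental updates, so it is illustrative only (helper names and toy graph are ours):

```python
def modularity(adj, group):
    # Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)
    m2 = sum(len(nbrs) for nbrs in adj.values())     # 2m
    q = 0.0
    for i in adj:
        for j in adj:
            if group[i] == group[j]:
                a = 1.0 if j in adj[i] else 0.0
                q += a - len(adj[i]) * len(adj[j]) / m2
    return q / m2

def greedy_modularity_round(adj, group):
    # One round: move every vertex exactly once, always choosing the
    # move that most increases (or least decreases) Q, then keep the
    # best state seen during the round.
    states = [(modularity(adj, group), dict(group))]
    moved = set()
    for _ in range(len(adj)):
        best = None
        for v in adj:
            if v in moved:
                continue
            group[v] = 1 - group[v]                  # trial move
            q = modularity(adj, group)
            group[v] = 1 - group[v]                  # undo
            if best is None or q > best[0]:
                best = (q, v)
        q, v = best
        group[v] = 1 - group[v]
        moved.add(v)
        states.append((q, dict(group)))
    return max(states, key=lambda s: s[0])

# Two triangles joined by one edge, starting from a mixed division.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q, division = greedy_modularity_round(adj, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1})
print(round(q, 4), division)
```

For this graph the triangle/triangle division has Q = 5/14 ≈ 0.357, while putting everything in one group gives Q = 0.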
karate club network
45
Efficiency: at each step we evaluate the modularity change for O(n) candidate vertices;
each evaluation takes O(m/n), so each step is O(m); with n steps in a round, a round takes O(mn), compared with O(mn²) per round for the Kernighan-Lin analog: single-vertex moving steps give O(n) candidates per step where the swap steps of K-L give O(n²) pairs
46
Spectral Modularity Maximization
an analog of spectral graph partitioning. Modularity: Q = (1/2m) Σ_ij (A_ij − k_i k_j/2m) δ(c_i, c_j), where c_i is the group to which vertex i belongs and δ(m, n) is the Kronecker delta. Define the modularity matrix with elements B_ij = A_ij − k_i k_j/2m, which has the property Σ_j B_ij = 0
47
for a division into two parts, write s_i = ±1; then (1/2)(s_i s_j + 1)
is 1 if i and j are in the same group and 0 otherwise, i.e., it equals the Kronecker delta δ(c_i, c_j); in matrix terms, Q = (1/4m) sᵀB s,
48
where s is the vector with elements s_i and B is the n × n matrix with elements B_ij, the modularity matrix. Relaxing the length constraint to sᵀs = n, we maximize the modularity Q subject to that constraint; taking derivatives with a Lagrange multiplier β gives, in matrix form, Bs = βs:
49
s is one of the eigenvectors of the modularity matrix.
The modularity is then Q = (1/4m) sᵀBs = (n/4m)β, so we want u1, the eigenvector corresponding to the largest (most positive) eigenvalue. But s cannot point in an arbitrary direction: its elements are ±1, so instead we maximize sᵀu1 = Σ_i s_i [u1]_i, where [u1]_i is the ith element of u1. The maximum is achieved when each term in the sum is non-negative, i.e., when s_i has the same sign as [u1]_i
50
(for elements with [u1]_i = 0 we can choose whichever sign we prefer).
And so we are led to the following very simple algorithm. We calculate the eigenvector of the modularity matrix corresponding to the largest (most positive) eigenvalue and then assign vertices to communities according to the signs of the vector elements, positive signs in one group and negative signs in the other. In practice this method works very well.
51
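The whole algorithm fits in a few lines of NumPy. A sketch on the two-triangle toy graph (the function name and graph are ours):

```python
import numpy as np

def spectral_modularity(A):
    # Modularity matrix B_ij = A_ij - k_i k_j / 2m; split by the
    # signs of its leading eigenvector u1; Q = (1/4m) s^T B s.
    k = A.sum(axis=1)
    two_m = k.sum()                      # 2m
    B = A - np.outer(k, k) / two_m
    _, vecs = np.linalg.eigh(B)
    u1 = vecs[:, -1]                     # largest (most positive) eigenvalue
    s = np.where(u1 >= 0, 1.0, -1.0)
    Q = (s @ B @ s) / (2 * two_m)        # divide by 4m
    return s, Q

A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[u, v] = A[v, u] = 1
s, Q = spectral_modularity(A)
print(s, round(Q, 4))
```

The signs of u1 separate the two triangles, giving Q = 5/14, the same division the vertex-moving heuristic aims for.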
One potential problem with the algorithm is that the matrix B is, unlike the Laplacian, not
sparse, and indeed usually has all elements non-zero. At first sight, this appears to make the algorithm's complexity significantly worse than that of the normal spectral bisection algorithm: finding the leading eigenvector of a matrix takes time O(mn), which is equivalent to O(n³) for a dense matrix, as opposed to O(n²) for a sparse one. In fact, however, by exploiting special properties of the modularity matrix it is still possible to find the eigenvector in time O(n²) on a sparse network. The details can be found in [246].
52
Overall, this means that the spectral method is about as fast as, but not significantly faster than,
the vertex-moving algorithm described above; both have time complexity O(n²) on sparse networks. There is, however, merit to having both algorithms. Given that all practical modularity-maximizing algorithms are merely heuristics, clever perhaps, but not by any means guaranteed to perform well in all cases, having more than one fast algorithm in our toolkit is always a good thing.
53
Division into more than two groups:
divide the network first into two parts, then further subdivide those parts into smaller ones, and so on. Can we simply apply bisection to the smaller networks recursively? No: the modularity of the complete network does not break up over subgraphs the way cut size does. Instead, consider ΔQ, the change in the modularity of the entire network upon further bisecting a community c of size nc.
54
ΔQ can be written in terms of the nc × nc generalized modularity matrix B(c) with elements B(c)_ij = B_ij − δ_ij Σ_{k∈c} B_ik;
we then proceed as before: find the leading eigenvector of B(c) and divide the community according to the signs of its elements.
55
when to stop: keep dividing as long as the modularity does not decrease (ΔQ > 0); otherwise leave the community undivided, with all its vertices in one group and none in the other
56
Other Algorithms for Community Detection
Betweenness-based Methods Hierarchical Clustering
57
Betweenness-based Methods
find the edges that lie between communities. Betweenness centrality of a vertex: the number of shortest paths that pass through that vertex. Edge betweenness: counts the number of geodesic (shortest) paths that run along each edge. Edges between communities carry high values of edge betweenness
58
Calculation: for every pair of vertices in the same component, consider the geodesic path(s) between them and count how many pass along each edge. Algorithm: calculate the betweenness score for each edge; find the edge with the highest score and remove it; recalculate the betweenness scores. As edges are removed, the network splits into two, then three pieces, and so on
59
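The betweenness calculation itself can be done with one breadth-first search per source vertex, accumulating each pair's shortest-path fractions back down the BFS tree (a Brandes-style sketch for small unweighted graphs; the function name and toy graph are ours):

```python
from collections import deque

def edge_betweenness(adj):
    # For each source s: BFS counting shortest paths (sigma), then
    # back-propagate pair dependencies onto predecessor edges.
    bet = {tuple(sorted((u, v))): 0.0 for u in adj for v in adj[u]}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {s: 0}
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                bet[tuple(sorted((v, w)))] += c
                delta[v] += c
    # Each unordered vertex pair was counted from both endpoints.
    return {e: b / 2 for e, b in bet.items()}

# Two triangles joined by a bridge: the bridge carries all 3x3
# cross-group geodesics.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
scores = edge_betweenness(adj)
print(max(scores, key=scores.get))   # the bridge (2, 3) scores highest
```

Removing the highest-scoring edge (the bridge) immediately splits this graph into its two communities, which is exactly the first step of the betweenness-based algorithm above.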
The result is a tree or dendrogram: the leaves are individual vertices, which join into larger groups as we move up; not a single division but a nested series of decompositions. Quite slow: each betweenness calculation is O(n(m+n)), repeated for each of m removed edges: O(mn(m+n)), i.e., O(n³) for sparse networks with m ∝ n. Advantage: it yields a full hierarchical decomposition
60
Radicchi et al.: identify the edges between communities and remove them.
Observation: edges between poorly connected communities are unlikely to belong to short loops, since that would require a second nearby edge joining the two groups; so look for edges belonging to an unusually small number of short loops. Loops of length three or four give accurate results
61
Advantage: speed; counting the short loops an edge belongs to is a local calculation, giving a running time of O(n²) on sparse networks. Disadvantage: it works for networks with many short loops, such as social networks, but not for information or biological networks, which have few
62
The algorithm of Radicchi et al.
The algorithm of Radicchi et al. uses a different measure to identify between-group edges, looking for the edges that belong to the fewest short loops. In many networks, edges within groups typically belong to many short loops, such as the loops of length three and four labeled "1" and "2." But edges between groups, such as the edge labeled "3" here, often do not belong to such loops, because to do so would require there to be a return path along another between-group edge, of which there are, by definition, few.
63
Hierarchical Clustering
agglomerative: start from individual vertices and join them, bottom-up; divisive: top-down. Requires a measure of similarity or connection strength between groups, built from any notion of similarity between vertices: cosine similarity, correlation coefficients, Euclidean distance, or regular/structural equivalence
64
If the connections (A,B) and (B,C) are strong but (A,C) is weak, should A and C be in the same group or not?
65
once a similarity measure is chosen
calculate it for all vertex pairs. How are groups formed? For single vertices there is no ambiguity; for the similarity between groups there are three basic methods: single, complete, and average linkage
66
two groups of n1 and n2 vertices have n1n2 vertex pairs;
single linkage: the similarity of the most similar pair (a single highly similar pair suffices); complete linkage: the similarity of the least similar pair (every pair must have high similarity); average linkage: the mean similarity over all pairs
67
1. Choose a similarity measure and evaluate it for all vertex pairs.
2. Assign each vertex to a group of its own, consisting of just that one vertex. The initial similarities of the groups are simply the similarities of the vertices.
3. Find the pair of groups with the highest similarity and join them together into a single group.
4. Calculate the similarity between the new composite group and all others using one of the three methods above (single-, complete-, or average-linkage clustering).
5. Repeat from step 3 until all vertices have been joined into a single group.
68
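The five steps above can be sketched with cosine similarity of neighborhoods and single linkage (helper names and toy graph are ours; this naive version rescans all group pairs at every merge, so it is illustrative, not the O(n³) bookkeeping version):

```python
from math import sqrt

def cosine_sim(adj, i, j):
    # Neighborhood cosine similarity: |N_i ∩ N_j| / sqrt(k_i * k_j).
    if not adj[i] or not adj[j]:
        return 0.0
    return len(adj[i] & adj[j]) / sqrt(len(adj[i]) * len(adj[j]))

def agglomerate(adj, n_groups):
    # Steps 1-2: pairwise similarities, every vertex its own group.
    sim = {frozenset((i, j)): cosine_sim(adj, i, j)
           for i in adj for j in adj if i != j}
    groups = [{v} for v in adj]

    def group_sim(a, b):     # single linkage: most similar member pair
        return max(sim[frozenset((i, j))] for i in a for j in b)

    # Steps 3-5: repeatedly join the most similar pair of groups.
    while len(groups) > n_groups:
        a, b = max(((a, b) for a in groups for b in groups if a is not b),
                   key=lambda p: group_sim(*p))
        groups.remove(a); groups.remove(b)
        groups.append(a | b)
    return groups

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(agglomerate(adj, 2))
```

Stopping the merging early (here at two groups) corresponds to cutting the dendrogram at a chosen height.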
Complexity: with single or complete linkage each group-similarity update is O(1); with O(n²) pairs to maintain and n − 1 joins in total, the overall cost is O(n³)
69
Figure 11.9: Partitioning of the karate club network by average linkage hierarchical clustering. This dendrogram is the result of applying the hierarchical clustering method described in the text to the karate club network of Fig. 11.4, using cosine similarity as our measure of vertex similarity. The shapes of the nodes represent the two known factions in the network, as in the two previous figures.
70
Disadvantage: vertices in tightly connected cores join groups early, while peripheral vertices join late or not at all,
so the method tends to find a set of tightly knit cores surrounded by a few small straggling groups
71
Community detection: four categories of methods. Node-centric:
each node in a group satisfies certain properties. Group-centric: consider the connections within a group as a whole; the group has to satisfy certain properties, without zooming into the node level. Network-centric: partition the whole network into several disjoint sets. Hierarchy-centric: construct a hierarchical structure of communities
72
Node-Centric Community Detection
node-centric criteria require each node in a group to satisfy certain properties: complete mutuality, reachability, nodal degrees
73
Nodes satisfy different properties
Complete mutuality: cliques. Reachability of members: k-clique, k-clan, k-club. Nodal degrees: k-plex, k-core. Relative frequency of within-outside ties: LS sets, lambda sets. These are commonly used in traditional social network analysis; here we discuss some representative ones
74
Groups based on Complete Mutuality
Clique: a maximal complete subgraph, in which all nodes are adjacent to each other. Nodes 5, 6, 7, and 8 form a clique. It is NP-hard to find the maximum clique in a network, and a straightforward implementation for finding cliques is very expensive in time complexity
75
A Brute-Force Approach
Traverse all the nodes in the network; for each node, check whether there is any clique of a specified size that contains it; collect the clique and remove the node from future consideration. Works for small-scale networks, but becomes impractical for large-scale ones. The main strategy to address this challenge: effectively prune those nodes and edges that are unlikely to be contained in a maximal clique or a complete bipartite subgraph
76
The Algorithm: maintain a queue of cliques.
Pop a clique Bk of size k from the queue and let vi be the last node added to Bk. For each neighbor vj of vi with j > i, form a new candidate set Bk+1 = Bk ∪ {vj}; validate that Bk+1 is a clique by checking that vj is connected to every node of Bk; if so, add Bk+1 to the queue of cliques
77
Example: start with B1 = {4}; for each of its neighbors 5 and 6, form {4,5} and {4,6} and add them to the queue as B2s. Pop {4,5} from the queue; its last node is 5; the candidate clique sets are {4,5,6}, {4,5,7}, {4,5,8}; only {4,5,6} is a clique, so add it to the clique queue
79
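The queue-based procedure can be sketched directly. Growing each clique only through neighbors of its last (largest-numbered) member ensures every clique is generated exactly once, in sorted order (an illustrative sketch assuming integer node labels; the function name and toy graph are ours):

```python
def cliques_of_size(adj, k):
    # Grow cliques one vertex at a time: extend a clique (v1 < ... < vi)
    # only with neighbors w > vi of its last member that are adjacent
    # to every current member.
    queue = [(v,) for v in adj]          # B1: single-vertex cliques
    found = set()
    while queue:
        clique = queue.pop()
        if len(clique) == k:
            found.add(frozenset(clique))
            continue
        last = clique[-1]
        for w in adj[last]:
            if w > last and all(w in adj[u] for u in clique):
                queue.append(clique + (w,))
    return found

# Two triangles joined by a bridge: exactly two 3-cliques.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(cliques_of_size(adj, 3))   # the two triangles
```

The pruning ideas on the next slides (removing nodes of degree < k − 1) would be applied before this enumeration to keep the queue small on real networks.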
Finding the Maximum Clique
In a clique of size k, each node has degree >= k−1, so nodes with degree < k−1 cannot be in the maximum clique. Recursively apply the following pruning procedure: sample a sub-network from the given network and find a clique in it, say by a greedy approach; if that clique has size k, then to find a larger clique all nodes with degree <= k−1 can be removed; repeat until the network is small enough. Many nodes will be pruned, as social media networks follow a power-law distribution for node degrees
80
Maximum Clique Example
Suppose we sample a sub-network with nodes {1-9} and find a clique {1, 2, 3} of size 3 In order to find a clique >3, remove all nodes with degree <=3-1=2 Remove nodes 2 and 9 Remove nodes 1 and 3 Remove node 4
81
Clique Percolation Method (CPM)
Clique is a very strict definition, unstable Normally use cliques as a core or a seed to find larger communities CPM is such a method to find overlapping communities Input A parameter k, and a network Procedure Find out all cliques of size k in a given network Construct a clique graph. Two cliques are adjacent if they share k-1 nodes Each connected component in the clique graph forms a community
82
CPM Example Communities: {1, 2, 3, 4} {4, 5, 6, 7, 8}
Cliques of size 3: {1, 2, 3}, {1, 3, 4}, {4, 5, 6}, {5, 6, 7}, {5, 6, 8}, {5, 7, 8}, {6, 7, 8} Communities: {1, 2, 3, 4} {4, 5, 6, 7, 8}
83
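The example can be reproduced end to end. A sketch: enumerate k-cliques by brute force, link cliques sharing k − 1 nodes with a union-find, and read communities off the connected components (function names and edge list are ours, chosen to match the slide's cliques):

```python
from itertools import combinations

def k_cliques(adj, k):
    # Brute-force k-clique enumeration (fine for tiny graphs).
    return [set(c) for c in combinations(sorted(adj), k)
            if all(v in adj[u] for u, v in combinations(c, 2))]

def cpm(adj, k):
    # Two k-cliques are adjacent iff they share k-1 nodes; each
    # connected component of the clique graph is one community.
    cl = k_cliques(adj, k)
    parent = list(range(len(cl)))        # union-find over cliques
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(len(cl)), 2):
        if len(cl[i] & cl[j]) == k - 1:
            parent[find(i)] = find(j)
    communities = {}
    for i, c in enumerate(cl):
        communities.setdefault(find(i), set()).update(c)
    return [frozenset(c) for c in communities.values()]

# Graph realizing the slide's seven 3-cliques.
edges = [(1, 2), (1, 3), (2, 3), (1, 4), (3, 4), (4, 5), (4, 6),
         (5, 6), (5, 7), (6, 7), (5, 8), (6, 8), (7, 8)]
adj = {v: set() for v in range(1, 9)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
print(cpm(adj, 3))   # communities {1,2,3,4} and {4,5,6,7,8}
```

Note that node 4 appears in both communities: CPM naturally produces overlapping communities, which is why it is used when strict partitions are too rigid.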
Reachability: base community membership on reachability between actors.
In the extreme case, two nodes belong to one community if there exists a path between them; then each connected component is a community, found in O(n + m) time. In real-world networks, however, a giant component tends to form, while many other components are singletons and minor communities
84
Reachability : k-clique, k-club
Any node in a group should be reachable within k hops. k-clique: a maximal subgraph in which the largest geodesic distance between any two nodes is <= k. k-club: a substructure of diameter <= k. A k-clique might have diameter larger than k in the subgraph, e.g., {1, 2, 3, 4, 5}. Commonly used in traditional SNA; often involves combinatorial optimization. Cliques: {1, 2, 3}; 2-cliques: {1, 2, 3, 4, 5}, {2, 3, 4, 5, 6}; 2-clubs: {1, 2, 3, 4}, {1, 2, 3, 5}, {2, 3, 4, 5, 6}
85
A k-clan is a k-clique in which the geodesic distance d(i, j) between all nodes in the subgraph is no greater than k for all paths within the sub-graph. A k-clan must be a k-clique, but not vice versa. For instance, {v1, v2, v3, v4, v5} in Figure 16.3 is a 2-clique but not a 2-clan, as the geodesic distance of v4 and v5 is 2 in the original network but 3 in the subgraph.
86
A k-club restricts the geodesic distance within the group to be no greater
than k: it is a maximal substructure of diameter k. All k-clans are k-cliques, and k-clubs are normally contained within k-cliques. These substructures provide useful information for studying information diffusion and influence propagation.
87
Figure 16.3: cliques: {v1, v2, v3}
2-cliques: {v1, v2, v3, v4, v5}, {v2, v3, v4, v5, v6} 2-clans: {v2, v3, v4, v5, v6} 2-clubs: {v1, v2, v3, v4}, {v1, v2, v3, v5}, {v2, v3, v4, v5, v6}
88
Group-Centric Community Detection: Density-Based Groups
The group-centric criterion requires the whole group to satisfy a certain condition, e.g., group density >= a given threshold. A subgraph Gs = (Vs, Es) is a γ-quasi-clique if 2|Es| / (|Vs|(|Vs| − 1)) >= γ, where the denominator is the maximum possible number of edges. A similar strategy to that of cliques can be used: sample a subgraph and find a maximal quasi-clique in it; then remove nodes whose degree is too low to appear in a larger quasi-clique of the same density
89
Notes: a quasi-clique becomes a clique when γ = 1.
The definition does not guarantee the nodal degree or reachability of each node in the group: node degrees may vary drastically. This makes it suitable for large-scale networks.
90
A Greedy Algorithm: explore the maximum-density quasi-cliques. A greedy algorithm for finding a maximal quasi-clique is initialized with the vertex of largest degree, then expanded with nodes that are likely to contribute to a large quasi-clique; expansion continues until no node can be added while maintaining the γ-density. The result is not necessarily optimal
91
A Greedy Algorithm (continued): a local search procedure then looks for a larger maximal quasi-clique in the local neighborhood, and can detect a close-to-optimal maximal quasi-clique, but it keeps the whole network in memory. For large networks, heuristic pruning is used: suppose a quasi-clique of size k has been found; it is then impossible to include in the maximal quasi-clique a node with degree less than k, so such nodes are pruned from the network
92
Network-Centric Community Detection
Network-centric criterion needs to consider the connections within a network globally Goal: partition nodes of a network into disjoint sets Approaches: (1) Clustering based on vertex similarity (2) Latent space models (multi-dimensional scaling ) (3) Block model approximation (4) Spectral clustering (5) Modularity maximization
93
Clustering based on Vertex Similarity
Apply k-means or similarity-based clustering to nodes, with vertex similarity defined in terms of the similarity of their neighborhoods. Structural equivalence: two nodes are structurally equivalent iff they connect to the same set of actors. Nodes 1 and 3 are structurally equivalent, as are nodes 5 and 6; but structural equivalence is too strict for practical use.
94
Vertex Similarity: Jaccard similarity J(u, v) = |N_u ∩ N_v| / |N_u ∪ N_v|; cosine similarity cos(u, v) = |N_u ∩ N_v| / √(|N_u| |N_v|), where N_u denotes the neighborhood of u
95
the similarity of two adjacent nodes can be 0:
e.g., the similarity of nodes 7 and 9 is 0, since N7 = {5,6,8,9}, N9 = {7}, and N7 ∩ N9 = ∅. Modification: include the node v itself in its neighborhood: N7 = {5,6,7,8,9}, N9 = {7,9}, N7 ∩ N9 = {7,9}; Jaccard(7,9) = |{7,9}| / |{5,6,7,8,9}| = 2/5; cosine(7,9) = |{7,9}| / sqrt(2*5) = 2/sqrt(10)
96
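The modification is one line: include each node in its own neighborhood. A sketch reproducing the slide's numbers for nodes 7 and 9 (only the graph fragment around those nodes, as assumed from the slide):

```python
from math import sqrt

def jaccard(adj, u, v):
    nu, nv = adj[u] | {u}, adj[v] | {v}   # include the node itself
    return len(nu & nv) / len(nu | nv)

def cosine(adj, u, v):
    nu, nv = adj[u] | {u}, adj[v] | {v}
    return len(nu & nv) / sqrt(len(nu) * len(nv))

# Fragment around nodes 7 and 9: N7 = {5, 6, 8, 9}, N9 = {7}.
adj = {5: {7}, 6: {7}, 8: {7}, 7: {5, 6, 8, 9}, 9: {7}}
print(jaccard(adj, 7, 9))   # 2/5
print(cosine(adj, 7, 9))    # 2/sqrt(10)
```

Without the self-inclusion both functions would return 0 for this adjacent pair, which is exactly the problem the slide describes.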
Cut: most interactions are within groups whereas interactions between groups are few, so community detection maps to the minimum cut problem. Cut: a partition of the vertices of a graph into two disjoint sets. Minimum cut problem: find a graph partition such that the number of edges between the two sets is minimized
97
Ratio Cut & Normalized Cut
Minimum cut often returns an imbalanced partition, with one set being a singleton, e.g., node 9. Change the objective function to take community size into account: with communities Ci, |Ci| the number of nodes in Ci, and vol(Ci) the sum of degrees in Ci, Ratio Cut = (1/k) Σ_i cut(Ci, C̄i) / |Ci| and Normalized Cut = (1/k) Σ_i cut(Ci, C̄i) / vol(Ci)
98
Ratio Cut & Normalized Cut Example
Comparing the partition in red with the partition in green (see figure): both ratio cut and normalized cut prefer the balanced partition
99
Modularity Maximization
Modularity measures the strength of a community partition by taking the degree distribution into account. Given a network with m edges, the expected number of edges between two nodes with degrees di and dj is di·dj / 2m; e.g., the expected number of edges between nodes 1 and 2, given the degree distribution, is 3·2 / (2·14) = 3/14. Strength of a community C: Σ_{i,j∈C} (A_ij − di·dj/2m). Modularity: Q = (1/2m) Σ_C Σ_{i,j∈C} (A_ij − di·dj/2m); a larger value indicates a better community structure
100
Hierarchy-Centric Community Detection
Goal: build a hierarchical structure of communities based on network topology, allowing the analysis of a network at different resolutions. Representative approaches: Divisive Hierarchical Clustering (top-down); Agglomerative Hierarchical Clustering (bottom-up)
101
Divisive Hierarchical Clustering
Divisive clustering: partition nodes into several sets; each set is further divided into smaller ones; any network-centric partition method can be applied for the division. One particular example: recursively remove the "weakest" tie: find the edge with the least strength; remove it and update the strength of the remaining edges; recursively apply these two steps until the network is decomposed into the desired number of connected components, each forming a community
103
Edge Betweenness: the strength of a tie can be measured by edge betweenness, the number of shortest paths that pass along the edge; an edge with higher betweenness tends to be a bridge between two communities. Example: the edge betweenness of e(1,2) is 4 (= 6/2 + 1): nodes 4-9 each have two shortest paths to node 2, via e(1,2) or e(2,3), contributing 6/2 = 3; node 3 reaches node 2 directly, and node 1 reaches node 2 via e(1,2), contributing 1; hence the edge betweenness of e(1,2) is 3 + 1 = 4
104
Divisive clustering based on edge betweenness
Idea: progressively remove the edges with the highest betweenness. Starting from the initial betweenness values: after removing e(4,5), the betweenness of e(4,6) becomes 20, the highest, so it is removed next; after removing e(4,6), the edge e(7,9) has the highest betweenness value, 4, and should be removed
105
Agglomerative Hierarchical Clustering
Initialize each node as a community; merge communities successively into larger communities following a certain criterion, e.g., based on modularity increase. Dendrogram according to agglomerative clustering based on modularity
106
Summary of Community Detection
Node-Centric Community Detection: cliques, k-cliques, k-clubs. Group-Centric Community Detection: quasi-cliques. Network-Centric Community Detection: clustering based on vertex similarity; latent space models, block models, spectral clustering, modularity maximization. Hierarchy-Centric Community Detection: divisive clustering; agglomerative clustering
107
Basic observations: a large complex network is bound to be highly structured (it has modules; function follows from structure); the internal organization is typically hierarchical (i.e., displays some sort of self-similarity of the structure); an important new aspect: overlaps of modules are essential. Complexity lies between randomness ("mess", no function) and regularity (too constrained, limited function). Slide taken from www3.nd.edu/~netsci/TALKS/Vicsek.ppt
108
Overlapping Communities
This is what we want! Communities in a network.