UBI529 3. Distributed Graph Algorithms. 2.4 Distributed Path Traversals Distributed BFS Algorithms Distributed DFS Algorithms.


1 UBI529 3. Distributed Graph Algorithms

2 2.4 Distributed Path Traversals Distributed BFS Algorithms Distributed DFS Algorithms

3 Bellman-Ford BFS Tree Algorithm: a variant of the flooding algorithm. Each node and each message stores an integer corresponding to the distance from the root. The root stores 0; every other node initially stores ∞. The root starts the flooding algorithm by sending the message “1” to all neighbors. When a node u storing integer x receives a message “y” from a neighbor v: if y < x, then u stores y (instead of x) and sends “y+1” to all neighbors except v.

4 Distributed Bellman-Ford BFS Algorithm 1. Initially, the root sets L(r₀) = 0 and every other vertex sets L(v) = ∞. 2. The root sends the message Layer(0) to all its neighbors. 3. A vertex v that receives a Layer(d) message from a neighbor w does: if d + 1 < L(v), then parent(v) = w; L(v) = d + 1; send Layer(d + 1) to all neighbors except w. Time complexity: O(D). Message complexity: O(n|E|).
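As a sanity check, the layered construction can be simulated centrally in synchronous rounds. The Python sketch below is our own rendering (the names `bfs_tree`, `L`, `parent` are ours, not from the slides); one loop iteration corresponds to one time step, so the loop runs at most D times:

```python
def bfs_tree(adj, root):
    """Synchronous simulation of the distributed Bellman-Ford BFS tree
    construction: in round d, every node that first adopted label d
    sends Layer(d) to its neighbors."""
    INF = float("inf")
    L = {v: INF for v in adj}        # layer (distance) labels
    parent = {v: None for v in adj}
    L[root] = 0
    frontier = [root]                # nodes that adopted a new label
    while frontier:
        nxt = []
        for w in frontier:           # w sends Layer(L[w]) to each neighbor
            for v in adj[w]:
                if L[w] + 1 < L[v]:  # v improves its label, re-parents to w
                    L[v] = L[w] + 1
                    parent[v] = w
                    nxt.append(v)
        frontier = nxt
    return L, parent
```

In the synchronous setting each label settles in the first round it is reached, which matches the stated O(D) time complexity; the O(n|E|) message bound is only needed for asynchronous executions.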

5 Analysis: The time complexity of Algorithm 3.10 is O(D) and the message complexity is O(n|E|), where D is the diameter of the graph. Proof: We prove the time complexity by induction, claiming that a node at distance d from the root has received a message “d” by time d. The root knows at time 0 that it is the root. A node v at distance d has a neighbor u at distance d−1; by induction, u sends a message “d” to v at time d−1 or before, which v receives by time d or before. Message complexity: a node can decrease its integer at most n−1 times, and each time it sends a message to all its neighbors. If all nodes do this, we have O(n|E|) messages.

6 Remarks: There are graphs and executions that actually produce O(n|E|) messages. How does the algorithm terminate? Algorithm 3.8 has the better message complexity; Algorithm 3.10 has the better time complexity. The currently best known algorithm has message complexity O(|E| + n log³ n) and time complexity O(D log³ n). How do we find the root? Leader election in an arbitrary graph: the FloodMax algorithm. Termination? Idea: each node that believes it is the “max” builds a spanning tree… (See, e.g., Chapter 15 of Nancy Lynch, “Distributed Algorithms”.)

7 Distributed DFS Distributed DFS algorithm: there is a single message, called the token. 1. Start the exploration (visit) at the root r. 2. When v is visited for the first time: 2.1 Inform all neighbors of v that v has been visited. 2.2 Wait for acknowledgments from all neighbors. 2.3 Resume the DFS process. This ensures that the token traverses only tree edges; hence the time complexity is O(n). The message complexity is O(|E|).
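A sequential sketch of the token's walk (our own code; the `visited` set plays the role of the acknowledgments, which is what lets the token skip edges leading to already-visited nodes):

```python
def token_dfs(adj, root):
    """Simulation of the token-based distributed DFS: because a newly
    visited node informs all neighbors before the token moves on, the
    token is never forwarded to an already-visited node, so it travels
    only along tree edges."""
    visited = {root}
    order = [root]                   # order in which the token arrives
    def visit(v):
        for u in adj[v]:
            if u not in visited:     # u has acknowledged "not yet visited"
                visited.add(u)
                order.append(u)
                visit(u)             # forward the token; it returns later
    visit(root)
    return order
```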

8 2.5 Matching: Matchings, Vertex Covers

9 Matching Def. 2.4.1: A matching M in a graph G is a set of non-loop edges with no shared endpoints. The vertices incident to M are saturated (matched) by M; the others are unsaturated (unmatched). A perfect matching covers all vertices of the graph (all are saturated). Rem.: Complete graphs of odd order have no perfect matching: K₂ₙ₊₁ has no perfect matching. K₅ and a matching M in it:

10 Maximal and Maximum Matching Def. 2.4.2: A maximal matching in a graph G is a matching that cannot be enlarged by adding an edge. Def. 2.4.3: A maximum matching in a graph G is a matching of maximum size among all matchings.
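The distinction is easy to see in code. A maximal matching can be found by a single greedy pass (sketch below, our own code); whether it is also maximum depends on the order in which the edges are examined:

```python
def maximal_matching(edges):
    """Greedy maximal matching: scan the edges once and take an edge
    whenever both endpoints are still unsaturated. The result cannot be
    enlarged by adding an edge, but it need not have maximum size."""
    saturated = set()
    M = []
    for u, v in edges:
        if u != v and u not in saturated and v not in saturated:
            M.append((u, v))
            saturated.update((u, v))
    return M
```

On the path 0-1-2-3, scanning the edges as [(0,1), (2,3), (1,2)] gives a maximum matching of size 2, while scanning them as [(1,2), (0,1), (2,3)] gives the maximal (but not maximum) matching {(1,2)}.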

11 Vertex Cover Def. 2.4.6: A vertex cover of a graph G is a set P ⊆ V(G) such that each edge of G has at least one endpoint in P. Theorem 2.4.2 (König, 1931): If G is a bipartite graph, then the maximum size of a matching in G equals the minimum size of a vertex cover. Ex.: For an odd cycle, the sizes of a maximum matching (bold lines) and a minimum vertex cover (squares) differ by one.

12 2.6 Independent Sets and Dominating Sets

13 Independent Sets Def. 2.5.1: An independent set (also stable set or coclique) is a set of vertices in a graph G, no two of which are adjacent. That is, it is a set S of vertices such that no edge of G connects two vertices of S. Def. 2.5.2: A maximum independent set is a largest independent set of a given graph. The problem of finding such a set is called the maximum independent set problem and is NP-hard. Rem.: It is therefore very unlikely that an efficient algorithm for finding a maximum independent set of a graph exists.

14 Independent Sets Rem.: If a graph has an independent set of size k, then its complement (the graph on the same vertex set but with the complementary edge set) has a clique of size k. Rem.: The problem of deciding whether a graph has an independent set of a given size is the independent set problem; it is computationally equivalent to deciding whether a graph has a clique of a given size. Rem.: The decision version of Independent Set (and consequently of Clique) is NP-complete, while the problem of finding a maximum independent set is NP-hard.

15 A Distributed Algorithm to find an MIS Same as the Distributed DFS Algorithm (revisited): 1. Start the exploration (visit) at the root r. 2. When v is visited for the first time: 2.1 Inform all neighbors of v that v has been visited. 2.2 Wait for acknowledgments from all neighbors. 2.3 Resume the DFS process. Whenever the token reaches an unmarked and unexcluded vertex, the vertex is added to the MIS and its neighbors are marked as excluded.
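A sequential sketch of this token rule (our own code, not the slides'): the token walks the graph in DFS order, and every node it reaches joins the MIS unless a neighbor has already excluded it:

```python
def token_mis(adj, root):
    """DFS-token MIS construction: when the token reaches a node that no
    MIS member has excluded, the node joins the MIS and excludes all its
    neighbors. The result is independent (members exclude their
    neighbors) and maximal (every excluded node has an MIS neighbor)."""
    mis, excluded, seen = set(), set(), {root}
    def visit(v):
        if v not in excluded:
            mis.add(v)
            excluded.update(adj[v])
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                visit(u)
    visit(root)
    return mis
```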

16 Dominating Sets Def. 2.5.4: A dominating set for a graph G = (V, E) is a subset V′ of V such that every vertex not in V′ is joined to at least one member of V′ by some edge. The domination number γ(G) is the number of vertices in a smallest dominating set for G. Def. 2.5.5: The connected dominating set problem is to find a minimum-size subset S of vertices such that the subgraph induced by S is connected and S is a dominating set; this problem is NP-hard. Def. 2.5.6: A total dominating set is a set of vertices such that all vertices in the graph (including the vertices in the dominating set themselves) have a neighbor in the dominating set. Def. 2.5.7: An independent dominating set is a set of vertices that form a dominating set and are independent.

17 Dominating Sets and Independent Sets Rem.: Dominating sets are closely related to independent sets: a maximal independent set is necessarily a minimal dominating set. Dominating sets, however, need not be independent. Rem.: The dominating set problem concerns testing whether γ(G) ≤ K for a given input K; it is NP-complete (Garey and Johnson, 1979). Another NP-complete problem involving domination is the domatic number problem, in which one must partition the vertices of a graph into a given number of dominating sets; the maximum number of sets in any such partition is the domatic number of the graph. Rem.: If S is a connected dominating set, one can form a spanning tree of G in which S is the set of non-leaf nodes; conversely, if T is any spanning tree of a graph with more than two vertices, the non-leaf nodes of T form a connected dominating set. Finding minimum connected dominating sets is therefore equivalent to finding spanning trees with the maximum possible number of leaves.

18 A Greedy Central Algorithm to find a DS 1. First, select the vertex (or vertices) with the most neighbors, i.e., with the largest degree. If that makes a dominating set, stop. 2. Otherwise, also choose the vertices with the next largest degree, and check again whether the chosen set is dominating. 3. Keep doing this until you have found a dominating set, then stop.
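The same idea in code, refined to count only still-undominated vertices when ranking candidates (a standard greedy variant; the code and names below are ours, not from the slides):

```python
def greedy_dominating_set(adj):
    """Greedy dominating set: repeatedly add the vertex whose closed
    neighborhood covers the most still-undominated vertices, until all
    vertices are dominated."""
    undominated = set(adj)
    D = set()
    while undominated:
        # gain of v = undominated vertices in its closed neighborhood
        v = max(adj, key=lambda v: len(undominated & ({v} | set(adj[v]))))
        D.add(v)
        undominated -= {v} | set(adj[v])
    return D
```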

19 Guha and Khuller Algorithm to find a CDS, 1997 Idea: Grow a tree T starting from the vertex with the highest degree; at each step, scan a node by adding all of its edges and neighbors to T. At the end, all non-leaf nodes are in the CDS. 1. Initially mark all vertices WHITE. 2. Mark the highest-degree vertex BLACK and scan it. 3. While there are WHITE nodes: 4. Pick a GRAY (already marked) vertex v and color it BLACK. 5. Color all WHITE neighbors of v GRAY and add them to T. The black nodes form the CDS. Which node to pick? The one with the most WHITE neighbors. Unfortunately, this rule does not always give optimal results.

20 Example where the scanning rule fails The optimal CDS is any path (of size 4) from u to v. The algorithm chooses u and colors N(u) gray; it then chooses a node from N(u) and scans it, making it black. If u has degree d, the algorithm ends in d+2 steps and picks the CDS shown in black.

21 Guha and Khuller Algorithm: Modification Define a new scanning rule for a pair of adjacent vertices u and v. Let u be gray and v be white. Scanning the pair means first making u black (which makes v and possibly some other nodes gray), then coloring v black, which makes more nodes gray. The total number of nodes colored gray is the yield of the scan step. Rule: at each step, scan either a single vertex or a pair of vertices, whichever gives the higher yield. This simple modification yields a much better approximation to the optimal DS (proof omitted). Ex.: Try the modified GKA on the previous diagram. NOTE: GKA is a centralized algorithm.

22 A Distributed DS Algorithm All nodes are initially unmarked. 1. Each node exchanges its neighbor set with its neighbors. 2. A node marks itself if it has two neighbors that are not directly connected. If the original graph is connected, the resulting set of marked nodes is a dominating set; moreover, the set of marked nodes is connected, and the shortest path between any two nodes does not include any unmarked node. The dominating set is, however, not necessarily minimal.
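Once neighbor sets have been exchanged, the marking rule is a local test at each vertex. A centralized sketch (our own code; `adj[v]` is the neighbor set node v has received):

```python
from itertools import combinations

def mark_dominating_set(adj):
    """Marking rule from the slide: a node marks itself if it has two
    neighbors that are not directly connected. On a connected graph the
    marked nodes form a (connected, not necessarily minimal)
    dominating set."""
    marked = set()
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:      # u and w are not adjacent
                marked.add(v)
                break
    return marked
```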

23 Pruning Heuristics After constructing a non-minimal dominating set, it can be reduced: unmark a node v if its neighborhood is included in the neighborhoods of two marked neighbors u and w and v has the smallest identifier. The algorithm only requires O(Δ²) time to exchange neighborhood sets and constant time to reduce the set.

24 Wu and Li CDS Algorithm, 2001 Idea: The Wu and Li CDS algorithm is a stepwise distributed algorithm in which every node proceeds in lock step with the others. Initially each vertex marks itself WHITE, indicating that it is not dominated yet. In the first phase, a vertex marks itself BLACK if any two of its neighbors are not connected to each other directly. In the second phase, a BLACK vertex v changes its color back to WHITE if either of the following conditions is met: 1. ∃u ∈ N(v) marked BLACK such that N[v] ⊆ N[u] and id(v) < id(u); 2. ∃u, w ∈ N(v) marked BLACK such that N(v) ⊆ N(u) ∪ N(w) and id(v) = min{id(v), id(u), id(w)}.

25 2.7 Clustering Outbreak of cholera deaths in London in the 1850s. Reference: Nina Mishra, HP Labs

26 Clustering Clustering: given a set U of n objects labeled p₁, …, pₙ (photos, documents, micro-organisms, …), classify them into coherent groups. Distance function: a numeric value specifying the “closeness” of two objects (e.g., for images, the number of corresponding pixels whose intensities differ by some threshold). Fundamental problem: divide the objects into clusters so that points in different clusters are far apart. Applications: routing in mobile ad hoc networks; identifying patterns in gene expression; document categorization for web search; similarity searching in medical image databases; Skycat: clustering 10⁹ sky objects into stars, quasars, galaxies.

27 Clustering of Maximum Spacing k-clustering: divide the objects into k non-empty groups. Distance function: assume it satisfies several natural properties: d(pᵢ, pⱼ) = 0 iff pᵢ = pⱼ (identity of indiscernibles); d(pᵢ, pⱼ) ≥ 0 (nonnegativity); d(pᵢ, pⱼ) = d(pⱼ, pᵢ) (symmetry). Spacing: the minimum distance between any pair of points in different clusters. Clustering of maximum spacing: given an integer k, find a k-clustering of maximum spacing. (Figure: an example clustering with its spacing shown, k = 4.)

28 Greedy Clustering Algorithm Single-link k-clustering algorithm: form a graph on the vertex set U, corresponding to n singleton clusters; repeatedly find the closest pair of objects lying in different clusters and add an edge between them; repeat n−k times, until there are exactly k clusters. Key observation: this procedure is precisely Kruskal's algorithm (except that we stop when there are k connected components). Remark: equivalent to finding an MST and deleting the k−1 most expensive edges.
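A compact implementation of this single-link procedure via Kruskal with union-find (our own code; `dist` can be any distance function satisfying the properties above):

```python
def kclustering(points, dist, k):
    """Single-link k-clustering: run Kruskal's algorithm over all pairs,
    stopping when k components remain (equivalently, build the MST and
    delete the k-1 heaviest edges)."""
    parent = {p: p for p in points}
    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted((dist(p, q), p, q)
                   for i, p in enumerate(points) for q in points[i + 1:])
    comps = len(points)
    for d, p, q in edges:
        if comps == k:                   # stop at k connected components
            break
        rp, rq = find(p), find(q)
        if rp != rq:
            parent[rp] = rq
            comps -= 1
    clusters = {}
    for p in points:
        clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())
```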

31 Greedy Clustering Algorithm: Analysis Theorem: Let C* denote the clustering C*₁, …, C*ₖ formed by deleting the k−1 most expensive edges of an MST. Then C* is a k-clustering of maximum spacing. Pf.: Let C denote some other clustering C₁, …, Cₖ. The spacing of C* is the length d* of the (k−1)-st most expensive MST edge. Let pᵢ, pⱼ be in the same cluster in C*, say C*ᵣ, but in different clusters in C, say Cₛ and Cₜ. Some edge (p, q) on the pᵢ–pⱼ path in C*ᵣ spans two different clusters of C. All edges on the pᵢ–pⱼ path have length ≤ d*, since Kruskal chose them. The spacing of C is therefore ≤ d*, since p and q are in different clusters. ▪

32 A Distributed Clustering Algorithm 1. Each node determines its local ranking property and exchanges it with its neighbors. 2. A node becomes a clusterhead if it has the highest (or lowest) rank among all its undecided neighbors. 3. It then changes its state and announces it to all of its neighbors. 4. Nodes that hear about a clusterhead next to them switch to cluster member and announce this to their neighbors. Similar to the Leader Election problem (we will see this in Part II).

33 2.8 Connectivity

34 Connectivity Definitions Def. 2.8.1: A vertex cut of a graph G is a set S ⊆ V(G) such that G−S has more than one component. The connectivity κ(G) of G is the minimum size of a vertex set S such that G−S is disconnected or has only one vertex. A graph is k-connected if its connectivity is at least k. Rem.: κ(Kₙ) = n−1.

35 Sequential Connectivity Algorithms (revisited) Alg. 2.8.1 [Connectivity]: Whether a graph is connected can be determined efficiently with a BFS or DFS from any vertex: if the tree generated has the same number of vertices as the graph, then the graph is connected. Alg. 2.8.2 [Strong Connectivity]: We can determine whether a digraph G is strongly connected in O(m + n) time: pick any node s; run BFS from s in G; run BFS from s in Gʳᵉᵛ (G with all edges reversed); return true iff all nodes are reached in both BFS executions.
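Alg. 2.8.2 in runnable form (our own code): two BFS passes, one on G and one on G with every edge reversed:

```python
from collections import deque

def strongly_connected(n, edges):
    """Check strong connectivity of a digraph on nodes 0..n-1: BFS from
    node 0 in G and in G reversed; G is strongly connected iff both
    searches reach every node."""
    def reaches_all(adj):
        seen = {0}
        q = deque([0])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    q.append(u)
        return len(seen) == n
    fwd = [[] for _ in range(n)]
    rev = [[] for _ in range(n)]
    for u, v in edges:
        fwd[u].append(v)        # edge u -> v in G
        rev[v].append(u)        # edge v -> u in G reversed
    return reaches_all(fwd) and reaches_all(rev)
```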

36 2.9 Distributed Routing Algorithms Routing Distributed Bellman-Ford Algorithm Chandy Misra Algorithm Link State Algorithm

37 Routing Problem A fundamental problem of computer networks: unicast routing (or simply routing) is the process of determining a “good” path, or route, to send data from the source to the destination. Typically, a good path is one that has the least cost.

38 Routing (borrowed from Cisco documentation, http://www.cisco.com)

39 Routing: Shortest Path Most shortest-path algorithms are adaptations of the classic Bellman-Ford algorithm, which computes shortest paths provided there are no cycles of negative weight. Let D(j) be the shortest distance of node j from the initiator 0; thus D(0) = 0. The edge weights can represent latency, distance, or some other appropriate parameter such as power. The classical algorithms (Bellman-Ford, Dijkstra's algorithm) are found in most algorithm books. What is the difference between an (ordinary) graph algorithm and a distributed graph algorithm?

40 Shortest Path Revisiting Bellman-Ford: basic idea. Consider a static topology. Process 0 sends (w(0,i), 0) to each neighbor i. {program for process i} do message = (S,k) ∧ S < D(i) → if parent ≠ k → parent := k fi; D(i) := S; send (D(i)+w(i,j), i) to each neighbor j ≠ parent [] message = (S,k) ∧ S ≥ D(i) → skip od This computes the shortest distance to all nodes from an initiator node; the parent pointers help packets navigate to the initiator.

41 Chandy-Misra Distributed Shortest Path Algorithm program shortest path {for process i > 0} define D, S : distance {S = value of distance in message}; parent : process; deficit : integer; N(i) : set of successors of process i; {each message has format (distance, sender)} initially D = ∞, parent = i, deficit = 0; {for process 0} send (w(0,i), 0) to each neighbor i; deficit := |N(0)|; do deficit > 0 ∧ ack → deficit := deficit − 1 od {deficit = 0 signals termination}

42 Chandy-Misra Distributed Shortest Path Algorithm {for process i > 0} do message = (S,k) ∧ S < D → if deficit > 0 ∧ parent ≠ i → send ack to parent fi; parent := k; D := S; send (D + w(i,j), i) to each neighbor j ≠ parent; deficit := deficit + |N(i)| − 1 [] message = (S,k) ∧ S ≥ D → send ack to sender [] ack → deficit := deficit − 1 [] deficit = 0 ∧ parent ≠ i → send ack to parent od The algorithm combines shortest-path computation with termination detection: termination is detected when the initiator has received an ack from each neighbor.

43 Distance Vector Routing What happens when the topology changes? The distance vector D at node i contains N elements D[i,0], D[i,1], D[i,2], …, all initialized to ∞. {Here, D[i,j] is the distance from node i to node j.} Each node j periodically sends its distance vector to its immediate neighbors. Every neighbor i of j, after receiving the broadcasts from its neighbors, updates its distance vector as follows: ∀k ≠ i: D[i,k] = min over neighbors j of (w[i,j] + D[j,k]). Used in RIP, IGRP, etc.

44 Counting to Infinity Observe what can happen when the link (2,3) fails. Node 1 thinks d(1,3) = 2, so node 2 computes d(2,3) = d(1,3)+1 = 3; then node 1 computes d(1,3) = d(2,3)+1 = 4, and so on. So it will take forever for the distances to stabilize. A partial remedy is the split-horizon method, which prevents node 1 from sending its advertisement about d(1,3) to node 2, since its first hop is node 2. Distance vector routing is suitable for smaller networks: a larger volume of data is disseminated, but only to immediate neighbors, and convergence is poor.
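The count-to-infinity behaviour on the 3-node line 1-2-3 can be reproduced in a few lines (our own toy simulation; unit edge weights, synchronous rounds, link (2,3) down from round 1 on):

```python
def count_to_infinity(rounds):
    """Synchronous distance-vector updates on the line 1-2-3 (unit
    weights) after link (2,3) fails: nodes 1 and 2 bounce stale
    estimates of d(.,3) off each other, and both climb without bound."""
    d1, d2 = 2, 1                  # pre-failure estimates of d(1,3), d(2,3)
    trace = [(d1, d2)]
    for _ in range(rounds):
        # node 1 routes to 3 via node 2; node 2's direct link is down, so
        # its only remaining offer is via node 1. Each uses the OLD values.
        d1, d2 = 1 + d2, 1 + d1
        trace.append((d1, d2))
    return trace
```

With split horizon, node 1 would not advertise d(1,3) back to node 2 (its first hop toward 3), so node 2's estimate would go straight to infinity instead of climbing.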

45 Distance Vector Algorithm Compute the shortest (least-cost) paths between s and all other nodes in a given undirected graph G = (V, E, c) with real-valued positive edge weights. Each node x maintains: 1. a distance label a(x), the currently known shortest distance from s to x; 2. a variable p(x) containing the identity of the previous node on the currently known shortest path from s to x. Initially, a(s) = 0, a(x) = ∞, and p(x) is undefined for all x ≠ s. When the algorithm terminates, a(x) = d(s,x), where d(s,x) is the shortest-path distance between s and x, and p(x) holds the neighbor of x on the shortest path from x to s.

46 DBF Algorithm The DBF algorithm consists of two basic rules: Update rule: suppose x with label a(x) receives a(z) from a neighbor z. If a(z) + c(z,x) < a(x), then x updates a(x) to a(z) + c(z,x) and sets p(x) to z; otherwise a(x) and p(x) are not changed. Send rule: whenever x adopts a new label a(x), it sends a(x) to all its neighbors. The algorithm also works correctly in an asynchronous model.
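A round-synchronous rendering of the two rules (our own code; `adj[x]` maps each neighbor of x to the positive edge cost c(x, neighbor)):

```python
def dbf(adj, s):
    """Synchronous sketch of the DBF rules: a node whose label improves
    (update rule) sends its new label to all neighbors (send rule)."""
    INF = float("inf")
    a = {x: INF for x in adj}       # distance labels
    p = {x: None for x in adj}      # predecessor pointers
    a[s] = 0
    changed = [s]                   # nodes that adopted a new label
    while changed:
        nxt = []
        for z in changed:           # send rule: z sends a(z) to neighbors
            for x, c in adj[z].items():
                if a[z] + c < a[x]: # update rule at x
                    a[x] = a[z] + c
                    p[x] = z
                    nxt.append(x)
        changed = nxt
    return a, p
```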

47 Computing All-Pairs Shortest Paths DBF generalizes to compute the shortest paths between all pairs of nodes by maintaining in each node x a distance label a_y(x) for every y ∈ V; this is the distance vector of x. 1. Each node stores its own distance vector and the distance vectors of each of its neighbors. 2. Whenever something changes, say the weight of one of its incident edges or its distance vector, the node sends its distance vector to all of its neighbors. The receiving nodes then update their own distance vectors according to the update rule.

48 Link State Routing Algorithm A link state (LS) algorithm knows the global network topology and edge costs. 1. Each node broadcasts its identity and the costs of its incident edges to all other nodes in the network using a broadcast algorithm, e.g., flooding. 2. Each node then runs the local link state algorithm and computes the same set of shortest paths as every other node. A well-known LS algorithm is Dijkstra's algorithm for computing least-cost paths. The message and time complexity of the algorithm are determined by the broadcast algorithm: if broadcast is done by flooding, the message complexity is O(|E|²) and the time complexity is O(|E|D).
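Once flooding has given every node the full topology, step 2 is an ordinary local Dijkstra run (sketch below, our own code; `adj[u]` maps each neighbor of u to a positive edge cost):

```python
import heapq

def dijkstra(adj, s):
    """Local link-state computation: each node, knowing the whole
    topology, runs Dijkstra from itself to get least-cost paths."""
    dist = {s: 0}
    prev = {}
    pq = [(0, s)]                       # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                    # stale queue entry, skip
        for v, c in adj[u].items():
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = u
                heapq.heappush(pq, (d + c, v))
    return dist, prev
```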

49 Link State Routing Each node i periodically broadcasts the weights of all edges (i,j) incident on it (this is its link state) to all its neighbors; the mechanism for dissemination is flooding. This lets each node eventually compute the topology of the network and independently determine the shortest path to any destination node. A smaller volume of data is disseminated, but over the entire network. Used in OSPF.

50 Link State Routing Each link-state packet has a sequence number seq that determines the order in which the packets were generated. When a node crashes, all packets stored in it are lost; after it is repaired, new packets start with seq = 0, so these new packets may be discarded in favor of the old packets! The problem is resolved using a TTL on stored packets.

51 Our Research (2000-2007) Clustering algorithms for mobile ad hoc networks (MANETs); routing algorithms for MANETs; clustering algorithms for wireless sensor networks (WSNs); routing algorithms for WSNs; in general, distributed graph algorithms/protocols for MANETs and WSNs.

52 Fixed Centered Partitioning, 1999, K. Erciyes, A. Alp - Choose fixed centers (randomly, by BFS, or using heuristics) - Collapse around these fixed centers

53 Fixed Centered Partitioning

54 Fixed Centered Partitioning: Parallel Implementation

55 MANETs: Cluster-based Routing (K. Erciyes, G. Marshall, 2003) A MANET Graph

56 MANETs: Cluster-based Routing II Simplified Graph

57 MANETs: Cluster-based Routing: Results

58 MANETs: Dominating Set based Clustering (K. Erciyes, D. Cokuslu, 2005-6)

59 CEA: Description When all Neighbor LST messages are collected in the CHK NODES state, the node checks the following heuristics: – If the node has at least one isolated neighbor, it changes its color to BLACK and its state to IDLE. – If all neighbors of the node are directly connected to each other, or if the node is itself isolated, it changes its color to WHITE and its state to IDLE. If neither coloring heuristic applies, the node changes its color to GRAY and its state to CHK DOM. When the node switches to state CHK DOM, it multicasts a Color REQ message to its neighbors and waits until all its neighbors send their colors. When node v has collected all color information, it starts to apply the following rules:

60 CEA: Description 1. ∃u ∈ N(v) marked BLACK such that N[v] ⊆ N[u]; 2. ∃u, w ∈ N(v) marked BLACK such that N(v) ⊆ N(u) ∪ N(w); 3. ∃u ∈ N(v) marked GRAY such that N[v] ⊆ N[u] and (degree(v) < degree(u) OR (degree(v) = degree(u) AND id(v) < id(u))); 4. ∃u, w ∈ N(v) marked GRAY or BLACK such that N(v) ⊆ N(u) ∪ N(w) and (degree(v) < min{degree(u), degree(w)} OR (degree(v) = min{degree(u), degree(w)} AND id(v) < min{id(u), id(w)})). If one of these rules holds, node v changes its color to WHITE; otherwise it changes its color to BLACK. After the node determines its permanent color, it changes its state to IDLE.

61 CEA Analysis Theorem 1: The time complexity of the clustering algorithm is Θ(4), i.e., constant. Proof: Every node executes the distributed algorithm by the exchange of 4 messages. Since all this communication occurs concurrently, the members of the CDS are determined at the end of this phase, so the time complexity of the algorithm is constant. Theorem 2: The message complexity of the clustering algorithm is O(n²), where n is the number of nodes in the graph. Proof: For every mark operation of a node, 4 messages are required (Neighbor REQ, Neighbor LST, Color REQ, Color RES). Assuming every node has up to n−1 adjacent neighbors, the total number of messages sent per node is 4(n−1). Since there are n nodes, the total number of messages in the system is n·4(n−1); therefore the message complexity of the algorithm has an upper bound of O(n²).

62 MANETs: Dominating Set based Clustering Results

63 Dagdeviren, Erciyes Merging Clustering Algorithm (DEMCA), 2005 We assume that each node has a distinct node id and knows its cluster leader id, cluster id, and cluster level. The cluster level is identified by the number of nodes in the cluster. The leader node is the node with the maximum node id in the cluster; the cluster leader id is identified by the node id of the leader node, and is equal to the cluster id. The local algorithm consists of sending messages over adjoining links, waiting for incoming messages, and processing messages.

64 DEMCA Operation

65 FSM of DEMCA

66 DEMCA: Message Types - Poll_Node: a cluster leader node sends this message to a destination node to begin the clustering operation. - Ldr_Poll_Node: a cluster member forwards a Poll_Node to its leader in this format. - Node_Info: a cluster leader node sends this message when it receives a Poll_Node or Ldr_Poll_Node message. - Connect_Mbr: a node sends this message after it receives a Node_Info with a node id smaller than the sender's. - Connect_Ldr: a node sends this message after it receives a Node_Info with a node id greater than the sender's. - Ldr_ACK: a node sends this message when it receives a Connect_Mbr message. - Mbr_ACK: a node sends this message when it receives a Connect_Ldr message. - Change_Cluster: a node multicasts this message after it receives a Ldr_ACK message. - Change_Cluster_ACK: a node sends this message after it receives a Change_Cluster message.

67 DEMCA: States - IDLE: initially all nodes are in the IDLE state. If Period_TOUT occurs, the node sends a Poll_Node message to a destination node and makes a state transition to the WT_INFO state. - WT_INFO: a node in the WT_INFO state waits for a Node_Info message. - WT_ACK: a node in the WT_ACK state waits for a Mbr_ACK or Ldr_ACK. - MEMBER: a node which is a member of a cluster is in the MEMBER state. - LDR_WT_ACK: a node in the LDR_WT_ACK state waits for Change_Cluster_ACK messages from all new member nodes in the new cluster. - LEADER: a cluster leader node is in the LEADER state; if a Poll_Node or a Ldr_Poll_Node is received, the node first checks the 2K parameter to decide on the clustering operation. - LDR_WT_CONN: a node in the LDR_WT_CONN state waits for a Connect_Mbr or Connect_Ldr message. - IDLE_WT_CONN: a node in the IDLE_WT_CONN state waits for a Connect_Mbr or Connect_Ldr message; if a Connect_Mbr is received, the node makes a state transition to the MEMBER state.

68 DEMCA: Analysis Theorem 1: The time complexity of the clustering algorithm has a lower bound of Ω(log n) and an upper bound of O(n). Proof: Assume n nodes in the mobile network. The best case occurs when clusters can merge pairwise at every iteration, doubling the member count each time: Level 1 clusters are connected to form Level 2 clusters, Level 2 clusters are connected to form Level 4 clusters, and so on, until the cluster level reaches n. This gives the lower bound Ω(log n). The worst case occurs when a cluster is connected to a Level 1 cluster at each iteration: a Level 1 cluster is connected to a Level 1 cluster to form a Level 2 cluster, a Level 2 cluster is connected to a Level 1 cluster to form a Level 3 cluster, and so on, until the cluster level reaches n. The upper bound is therefore O(n).

69 DEMCA Analysis Theorem 2: The message complexity of the clustering algorithm is O(n). Proof: Assume we have n nodes in the network. For every merge operation of two clusters, 4 messages (Poll_Node, Node_Info, Connect_Ldr/Connect_Mbr, Ldr_ACK/Mbr_ACK) are required, plus K Change_Cluster messages and K Change_Cluster_ACK messages. The total number of messages is then (4 + 2K)·n/K, so the message complexity has an upper bound of O(n).

70 DEMCA Results

71 WSNs Apply ideas similar to those used for MANETs to WSNs; consider energy-efficiency constraints; self-stabilizing leader election; self-stabilizing mutual exclusion; use algebraic graph matching algorithms?

72 Our Research on Clustering in WSNs Cluster headers Level 1: CHL1; cluster headers Level 2: CHL2. - Modify the Merging Alg. and the MDS Alg. to this architecture with power considerations - May also use Fixed Centered Partitioning

73 WSNs: Crossbow Mote History (figure; development funded in part by the Network Embedded Systems Technology program)

74 Available Mote Designs: MICA2 Crossbow 3rd-generation wireless sensor. Design changes relative to the MICA: the processor now offers a standalone boot-loader; new radio (Chipcon 1000): 500 to 1000 ft range at 38.4 kbaud, better noise immunity, linear RSSI, FM modulated (vs. the MICA's AM), digitally programmable output power, built-in Manchester encoding, software-programmable frequency hopping within bands. TinyOS v1.0 with improved network stack and debugging; wireless remote programming; 512 KB serial flash.

75 Available Mote Designs: MICA2DOT Crossbow 3rd-generation wireless sensor with a similar feature set to the MICA2 but degraded I/O capabilities: 18 pins vs. 51 (6 analog inputs, digital bus, serial or UART). Integrated temperature and battery-voltage sensors and a status LED. The battery is a 3V coin cell instead of 2 x AA. 25 mm diameter, 6 mm height. Compatible with the MICA2.

76 Hardware: Mica2 Basic Kit 3 MICA2 processor/radio boards; 2 MTS300 sensor boards (light, temperature, acoustic, and sounder); 1 MIB510 programming and serial interface board; Mote-Test software; hardware user's manuals; TinyOS Getting Started Guide. Available in 315 MHz, 433 MHz and 868/916 MHz options.

77 Hardware: Telos - UCB

78 TinyOS: Operating System for WSNs Why TinyOS for WSNs? Modular nature; extensive support for platforms; large user base; example: the Calamari application. TinyOS components: a component stores the state of the system and consists of interfaces and events. Events are triggered by commands, signals, or other events themselves; example: event Timer.fired(). Sending messages is achieved using split-phase operation or using an event that calls an appropriate task. (Figure: a sample TinyOS Generic Messaging Component [2], with commands such as init and send_msg and events such as msg_recv(type, data) and packet_receive(success).)

79 Em*: a software environment for developing and deploying wireless sensor networks. (Figure: Em*'s reusable software components over flexible interconnects, not a strict “stack”: sensors and radio hardware; neighbor discovery and radio topology discovery; reliable unicast; time sync; leader election; state sync; acoustic ranging and 3D multi-lateration; up to the collaborative sensor processing application.)

80 Summary of Research Activities Design of protocols and algorithms for: distributed, embedded and real-time systems; MANETs; WSNs; the Grid. Procedure: - Start with an interesting idea - Architectural considerations - Provide CHFSMs for the protocol/algorithm - IPC (message queues, semaphores, …), network communication (protocol stack) - Prove that it works: theorems, lemmas, etc. - Show that it works: simulations with ns2, OPNET, SensorSim - Show that it really works: Mica2, etc.

81 Distributed Graph Algorithms: Other Areas of Interest Distributed cycle/knot detection; distributed center finding; distributed connected dominating set construction in MANETs and WSNs; distributed clustering based on graph partitioning.

82 References Introduction to Graph Theory, Douglas West, Prentice Hall, 2000 (basics); Graph Theory and Its Applications, Gross and Yellen, CRC Press, 1998 (basics); Distributed Algorithms course notes, J. Welch, TAMU (flooding and tree algorithms); CS590A Fall 2007 course notes, G. Pandurangan, Purdue University; Distributed Computing Principles course notes, Roger Wattenhofer, ETH Zurich (coloring algorithms); Algorithm Design, Kleinberg and Tardos, 2005 (MST-dependent material); 22C:166 Distributed Systems and Algorithms course, Sukumar Ghosh, University of Iowa (the routing part depends heavily on this).

