Lecture 5: Graph Theory, prepared by Lecturer Ahmed Al Tememe

1 Lecture 5: Graph Theory, prepared by Lecturer Ahmed Al Tememe

2 Topics: Minimum Spanning Trees (MST). History of MST. Applications of Spanning Trees. Algorithms for MST. Matrices and Graphs. Adjacency Matrix. Incidence Matrix. Labeled Graphs.

3 Minimum Spanning Trees (MST) Suppose G is a connected weighted graph; that is, each edge of G is assigned a nonnegative number called the weight of the edge. Any spanning tree T of G is then assigned a total weight obtained by adding the weights of the edges in T. A minimal spanning tree of G is a spanning tree whose total weight is as small as possible: it connects all the vertices together with the minimal total edge weight. A single graph can have many different spanning trees, and a spanning tree of least total weight is called a minimum spanning tree (MST) or minimum-weight spanning tree.
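The definition above can be made concrete on a tiny graph. The sketch below (the edge list and weights are a hypothetical example, not from the slides) enumerates every spanning tree of a 4-vertex graph by brute force and lists their total weights; the smallest one is the MST weight. Brute force is only feasible for very small graphs and is used here purely for illustration.

```python
from itertools import combinations

# A small weighted graph on 4 vertices (hypothetical example).
# Each edge is (u, v, weight).
edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (1, 3, 5), (2, 3, 3)]
n = 4

def is_spanning_tree(tree):
    """A set of n-1 edges is a spanning tree iff it is acyclic."""
    if len(tree) != n - 1:
        return False
    parent = list(range(n))          # simple union-find structure
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, _ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False             # edge closes a cycle: not a tree
        parent[ru] = rv              # merge the two components
    return True                      # n-1 acyclic edges connect all n vertices

# Brute-force over all (n-1)-edge subsets: fine only for tiny graphs.
trees = [t for t in combinations(edges, n - 1) if is_spanning_tree(t)]
weights = sorted(sum(w for _, _, w in t) for t in trees)
print(weights)       # total weights of all spanning trees
print(min(weights))  # the minimum spanning tree weight
```

For this example graph there are 8 distinct spanning trees, with total weights ranging from 6 to 12, illustrating that many spanning trees exist but only the lightest is an MST.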

4 History of MST The first algorithm for finding a minimum spanning tree was developed by the Czech scientist Otakar Borůvka in 1926; Borůvka's algorithm takes O(m log n) time. A second algorithm is Prim's algorithm, which was invented by Jarník in 1930 and rediscovered by Prim in 1957 and by Dijkstra in 1959. A third algorithm commonly in use is Kruskal's algorithm, which also takes O(m log n) time. A fourth algorithm, not as commonly used, is the reverse-delete algorithm, which is the reverse of Kruskal's algorithm; its runtime is O(m log n (log log n)^3).

5 Applications of Spanning Trees Design of networks, including computer networks, telecommunications networks, and transportation networks. Cluster analysis: clustering points in the plane, single-linkage clustering (a method of hierarchical clustering), graph-theoretic clustering, and clustering gene expression data. Image registration and segmentation; see minimum spanning tree-based segmentation. Curvilinear feature extraction in computer vision. Handwriting recognition of mathematical expressions. Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters. Minimax process control. Minimum spanning trees can also be used to describe financial markets.

6 Algorithms for MST What does possible multiplicity mean for an MST? A graph may have several different minimum spanning trees, all with the same total weight. Our goal is to select N-1 edges (for a graph with N vertices) that connect all the vertices at minimum total weight.
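Of the algorithms named on the history slide, Kruskal's is the easiest to sketch: sort the edges by weight and repeatedly take the lightest edge that does not close a cycle, stopping after N-1 edges. The following is a minimal sketch (the edge list is the same hypothetical example graph as before; the function name `kruskal` and the `(weight, u, v)` edge format are choices made here, not from the slides).

```python
# Kruskal's algorithm: greedily take the lightest edge that does not
# form a cycle, until n-1 edges are chosen. Edges are (weight, u, v)
# triples with vertices numbered 0..n-1.

def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        # Root of x's component, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two different components
            parent[ru] = rv            # union the components
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:      # an MST has exactly n-1 edges
                break
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
mst, total = kruskal(4, edges)
print(mst, total)
```

On this example the algorithm picks the edges of weight 1, 2, and 3, for a total weight of 6, matching the minimum found by exhaustive enumeration. The cycle test via union-find is what keeps the chosen edge set a tree at every step.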

Slides 7-10 contain figures only; no text was extracted.

11 Cont. Adjacency Matrix

12 Properties of the Adjacency Matrix We have the following observations about the adjacency matrix X of a graph G. 1. The entries along the principal diagonal of X are all zeros if and only if the graph has no self-loops; a self-loop at the i-th vertex corresponds to x_ii = 1. 2. If the graph has no self-loops, the degree of a vertex equals the number of ones in the corresponding row or column of X. 3. Permuting rows and the corresponding columns amounts to reordering the vertices. 4. A graph G is disconnected, with components G1 and G2, if and only if the adjacency matrix X(G) can be partitioned into the block-diagonal form X(G) = [ X(G1) 0 ; 0 X(G2) ], where X(G1) and X(G2) are respectively the adjacency matrices of the components G1 and G2. This partitioning implies that there are no edges between vertices in G1 and vertices in G2. 5. Given any square, symmetric, binary matrix Q of order n, there exists a graph G with n vertices and without parallel edges whose adjacency matrix is Q.
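Properties 1 and 2 above are easy to check in code. This minimal sketch (the 4-vertex edge list is a hypothetical example) builds the adjacency matrix of a simple undirected graph and verifies the zero diagonal and the degree-equals-row-sum property.

```python
# Adjacency matrix of a simple undirected graph (hypothetical example).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

X = [[0] * n for _ in range(n)]
for u, v in edges:
    X[u][v] = 1
    X[v][u] = 1  # symmetric entry: the graph is undirected

# Property 1: all diagonal entries are zero since there are no self-loops.
assert all(X[i][i] == 0 for i in range(n))

# Property 2: the degree of each vertex is the number of ones in its row.
degrees = [sum(row) for row in X]
print(degrees)  # [2, 2, 3, 1]
```

Permuting a row together with the matching column (property 3) would simply relabel the vertices, leaving the degree sequence unchanged.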

Slide 13 contains a figure only; no text was extracted.

14 Cont. Incidence Matrix Let G be a graph with n vertices, m edges, and no self-loops. The incidence matrix A = [a_ij] of G is the n x m matrix whose rows correspond to the vertices and whose columns correspond to the edges, with a_ij = 1 if the j-th edge e_j is incident on the i-th vertex v_i, and a_ij = 0 otherwise.

15 Cont. (2) Incidence Matrix

16 Properties of the Incidence Matrix The incidence matrix contains only two types of elements, 0 and 1; it is thus a binary matrix, or (0, 1)-matrix. We have the following observations about the incidence matrix A. 1. Since every edge is incident on exactly two vertices, each column of A has exactly two ones. 2. The number of ones in each row equals the degree of the corresponding vertex. 3. A row with all zeros represents an isolated vertex. 4. Parallel edges in a graph produce identical columns in the incidence matrix. 5. Permuting any two rows or columns of an incidence matrix simply corresponds to relabeling the vertices and edges of the same graph.
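Properties 1 and 2 of the incidence matrix can be verified the same way as for the adjacency matrix. This sketch reuses the same hypothetical 4-vertex example graph: each column gets exactly two ones (one per endpoint), and row sums give vertex degrees.

```python
# Incidence matrix of a simple undirected graph (hypothetical example).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # m = 4 edges, no self-loops
m = len(edges)

A = [[0] * m for _ in range(n)]
for j, (u, v) in enumerate(edges):
    A[u][j] = 1  # edge j is incident on vertex u ...
    A[v][j] = 1  # ... and on vertex v

# Property 1: every column contains exactly two ones.
assert all(sum(A[i][j] for i in range(n)) == 2 for j in range(m))

# Property 2: the row sums equal the vertex degrees.
print([sum(row) for row in A])  # [2, 2, 3, 1]
```

Note the row sums here match the degrees computed from the adjacency matrix, as expected: both matrices encode the same graph in different shapes (n x n versus n x m).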

Slides 17-18 contain figures only; no text was extracted.

19 Thank you for listening.

