
1 LECTURE 3: Introduction to PCA and PLS; K-means clustering; Protein function prediction using network concepts; Network centrality measures

2 Handling Multivariate Data: a multivariate data example

Student  Math  Chem  Phy  Bio  Eco  Soc
A        7     8     7    8    7    7
B        8     7     7    6    8    7
C        9     7     8    7    6    7
D        7     7     7    7    9    8
E        7     6     6    6    8    8
F        7     7     7    7    8    8
G        6     6     6    7    7    7
H        9     8     8    6    6    6
I        8     8     8    7    6    6
J        7     7     6    6    8    9

3 Principal Component Analysis (PCA) and Partial Least Squares (PLS)
Two common effects of using PCA or PLS:
- Convert a group of correlated predictive variables into a group of independent variables
- Construct a "strong" predictive variable from several "weaker" predictive variables
Major difference between PCA and PLS:
- PCA is performed without any consideration of the target variable, so PCA is an unsupervised analysis
- PLS is performed so as to maximize the correlation between the target variable and the predictive variables, so PLS is a supervised analysis

4 Schematic comparison of PCA and PLS (figure). PCA: a decomposition step transforms the data matrix A (n x p) into the principal component matrix PC (n x p). PLS: (1) a decomposition step extracts factors T (n x c) from the predictor matrix X (n x p) and factors U (n x c) from the response matrix Y (n x q) such that their covariance is maximized; (2) a regression step relates T and U.
Notation: A = data matrix, PC = principal component matrix, X = matrix of predictors, Y = matrix of responses, T = factors of predictors, U = factors of responses, n = number of observations, p = number of variables (predictors), q = number of responses, c = number of extracted factors.

5 Principal Component Analysis (PCA)
- In Principal Component Analysis we look for a few linear combinations of the predictive variables which can be used to summarize the data without losing too much information.
- Intuitively, Principal Component Analysis is a method of extracting information from higher-dimensional data by projecting it onto a lower dimension.
Example: Consider the scatter plot of 3-dimensional data (3 variables). The data across the 3 variables are highly correlated and the majority of the points cluster around the center of the space. This is also the direction of the 1st PC, and all 3 variables contribute roughly equal loadings to the 1st PC: PC1 = – 0.56 (X1) – 0.57 (X2) – 0.59 (X3)
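As a rough illustration of the projection idea above, here is a minimal PCA sketch in Python using numpy and scikit-learn. The three-variable data set is synthetic (generated to be highly correlated, in the spirit of the slide's example), so the exact loadings will differ from the PC1 shown above.

```python
# Minimal PCA sketch; the three correlated variables are synthetic stand-ins
# for the example described on the slide.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
# X2 and X3 are noisy copies of X1, so all three variables are highly correlated.
X = np.column_stack([x1,
                     x1 + 0.1 * rng.normal(size=200),
                     x1 + 0.1 * rng.normal(size=200)])

pca = PCA(n_components=3)
scores = pca.fit_transform(X)          # projections of the data onto the PCs
print(pca.components_[0])              # loadings of PC1 (roughly equal weights)
print(pca.explained_variance_ratio_)   # most of the variance sits in PC1
```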

6 Properties of Principal Components
Var(PC_i) = λ_i
Cov(PC_i, PC_j) = 0 for i ≠ j
Var(PC_1) ≥ Var(PC_2) ≥ … ≥ Var(PC_p)
Here λ_i is the i-th largest eigenvalue of the variance-covariance matrix (eigenvalues sorted in decreasing order).

7 Numerical Example
The following are the high school grades of 10 students in 6 subjects (scale 1-10):

Student  Math  Chem  Phy  Bio  Eco  Soc
A        7     8     7    8    7    7
B        8     7     7    6    8    7
C        9     7     8    7    6    7
D        7     7     7    7    9    8
E        7     6     6    6    8    8
F        7     7     7    7    8    8
G        6     6     6    7    7    7
H        9     8     8    6    6    6
I        8     8     8    7    6    6
J        7     7     6    6    8    9

Math = Mathematics, Chem = Chemistry, Phy = Physics, Bio = Biology, Eco = Economics, Soc = Sociology

8 Results

            PC1     PC2     PC3     PC4     PC5     PC6
Eigenvalue  3.020   0.708   0.497   0.219   0.167   0.023
Proportion  0.652   0.153   0.107   0.047   0.036   0.005
Cumulative  0.652   0.804   0.912   0.959   0.995   1.000

Eigenvectors
        PC1     PC2     PC3     PC4     PC5     PC6
Math    0.461   0.621  -0.088   0.168   0.267  -0.542
Chem    0.302  -0.059  -0.594   0.016  -0.740  -0.074
Phy     0.428   0.110  -0.365  -0.064   0.386   0.720
Bio     0.054  -0.666  -0.410   0.248   0.445  -0.355
Eco    -0.533   0.271  -0.526  -0.559   0.185  -0.140
Soc    -0.475   0.286  -0.248   0.771  -0.020   0.192
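The eigenvalue and proportion rows above can be checked by an explicit eigen-decomposition of the covariance matrix of the student grade table. The sketch below uses numpy; it should approximately reproduce the slide's values, although eigenvector signs and rounding may differ.

```python
# Reproducing the PCA results on the student grade table by eigen-decomposition
# of the sample covariance matrix (a sketch; signs of eigenvectors may flip).
import numpy as np

grades = np.array([
    [7, 8, 7, 8, 7, 7],   # A: Math Chem Phy Bio Eco Soc
    [8, 7, 7, 6, 8, 7],   # B
    [9, 7, 8, 7, 6, 7],   # C
    [7, 7, 7, 7, 9, 8],   # D
    [7, 6, 6, 6, 8, 8],   # E
    [7, 7, 7, 7, 8, 8],   # F
    [6, 6, 6, 7, 7, 7],   # G
    [9, 8, 8, 6, 6, 6],   # H
    [8, 8, 8, 7, 6, 6],   # I
    [7, 7, 6, 6, 8, 9],   # J
])

cov = np.cov(grades, rowvar=False)          # 6 x 6 covariance matrix
eigval, eigvec = np.linalg.eigh(cov)        # eigenvalues in ascending order
order = np.argsort(eigval)[::-1]            # sort in decreasing order
eigval, eigvec = eigval[order], eigvec[:, order]

print(np.round(eigval, 3))                  # compare with the Eigenvalue row
print(np.round(eigval / eigval.sum(), 3))   # compare with the Proportion row
```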

9

10 Partial Least Squares (PLS)
Unlike PCA, the PLS technique works by successively extracting factors from both the predictive and target variables such that the covariance between the extracted factors is maximized.
Decomposition step:
X = T Wᵀ + E
Y = U Vᵀ + F
Regression step:
Y = T B + D = X W B + D = X B_PLS + D, where B_PLS = W B

11 Numerical Example
The following are the high school grades of 10 students in 6 subjects (scale 1-10) and the corresponding GPA scores at the undergraduate level:

Student  Math  Chem  Phy  Bio  Eco  Soc  GPA
A        7     8     7    8    7    7    2.9
B        8     7     7    6    8    7    3.1
C        9     7     8    7    6    7    3.6
D        7     7     7    7    9    8    3.3
E        7     6     6    6    8    8    3.0
F        7     7     7    7    8    8    2.9
G        6     6     6    7    7    7    3.2
H        9     8     8    6    6    6    3.4
I        8     8     8    7    6    6    2.8
J        7     7     6    6    8    9    3.5

Math = Mathematics, Chem = Chemistry, Phy = Physics, Bio = Biology, Eco = Economics, Soc = Sociology
Objective: Can we use information about students' performance during high school to predict their GPA scores when they enter the undergraduate level?
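One possible way to run this prediction task is scikit-learn's PLSRegression, sketched below on the table above. The choice of 2 components is arbitrary and the fitted values are only illustrative; the slide does not specify which software or settings were actually used.

```python
# A sketch of the PLS prediction task from the slide using scikit-learn.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.array([  # Math Chem Phy Bio Eco Soc
    [7, 8, 7, 8, 7, 7], [8, 7, 7, 6, 8, 7], [9, 7, 8, 7, 6, 7],
    [7, 7, 7, 7, 9, 8], [7, 6, 6, 6, 8, 8], [7, 7, 7, 7, 8, 8],
    [6, 6, 6, 7, 7, 7], [9, 8, 8, 6, 6, 6], [8, 8, 8, 7, 6, 6],
    [7, 7, 6, 6, 8, 9],
])
y = np.array([2.9, 3.1, 3.6, 3.3, 3.0, 2.9, 3.2, 3.4, 2.8, 3.5])

pls = PLSRegression(n_components=2)      # number of extracted factors c = 2 (arbitrary)
pls.fit(X, y)
print(np.round(pls.predict(X).ravel(), 2))  # fitted GPA values
print(np.round(pls.coef_.ravel(), 3))       # regression coefficients (B_PLS)
```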

12

13 K-means clustering

14 Source: "Clustering Challenges in Biological Networks", edited by S. Butenko et al.

15 Source: Teknomo, Kardi. K-Means Clustering Tutorials, http://people.revoledu.com/kardi/tutorial/kMean/

16 1. Initial value of centroids: Suppose we use medicine A and medicine B as the first centroids. Let c1 and c2 denote the coordinates of the centroids; then c1 = (1,1) and c2 = (2,1).
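A minimal K-means sketch in Python for this setup is given below. Only the initial centroids c1 = (1,1) and c2 = (2,1) come from the slide; the coordinates assigned to the remaining two medicines C and D are illustrative assumptions.

```python
# K-means sketch: A and B are the initial centroids; the coordinates of C and D
# are assumed for illustration, not taken from the slide.
import numpy as np

points = np.array([[1.0, 1.0],   # A
                   [2.0, 1.0],   # B
                   [4.0, 3.0],   # C (assumed)
                   [5.0, 4.0]])  # D (assumed)
centroids = points[:2].copy()    # c1 = A, c2 = B

for _ in range(10):                                   # iterate until assignments stabilize
    # assign each point to its nearest centroid (Euclidean distance)
    dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # recompute each centroid as the mean of its assigned points
    new_centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print(labels)      # cluster membership of A, B, C, D
print(centroids)   # final centroid coordinates
```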

17

18

19

20 Protein function prediction using network concepts

21 The topology of a protein-protein interaction (PPI) network is informative, but further analysis can reveal other information. A popular assumption, which is true in many cases, is that proteins of similar function interact with each other. Based on this assumption, we have developed methods to predict protein functions and protein complexes from PPI networks, mainly based on cluster analysis.

22 Cluster Analysis
Cluster analysis, also called data segmentation, means grouping or segmenting a collection of objects into subsets or "clusters" such that the objects within each cluster are more closely related to one another than to objects assigned to different clusters. In the context of a graph, densely connected nodes are considered clusters. Visually, we can detect two clusters in this graph.

23 K-cores of Protein-Protein Interaction Networks
Definition: Let a graph G = (V, E) consist of a finite set of nodes V and a finite set of edges E. A subgraph S = (V′, E′), where V′ ⊆ V and E′ ⊆ E, is a k-core or a core of order k of G if and only if deg(v) ≥ k within S for every v ∈ V′, and S is the maximal subgraph with this property.
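In practice k-cores can be computed directly with networkx, as in the sketch below; the toy edge list is arbitrary and not the graph drawn on the following slides.

```python
# Computing k-cores with networkx on an arbitrary toy graph.
import networkx as nx

G = nx.Graph([
    ("a", "b"), ("a", "c"), ("b", "c"),        # a triangle
    ("b", "d"), ("c", "d"), ("a", "d"),        # a, b, c, d now form a clique
    ("d", "e"), ("e", "f"),                    # a low-degree tail
])

for k in range(1, 4):
    core = nx.k_core(G, k=k)                   # maximal subgraph with min degree >= k
    print(k, sorted(core.nodes()))

print(nx.core_number(G))                       # the highest-order core each node belongs to
```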

24 Concept of a k-core graph. Graph G and its 1-core graph: every node has degree one or more.

25 Concept of a k-core graph. 1-core graph: every node has degree one or more.

26 Concept of a k-core graph. 2-core graph: every node has degree two or more.

27 Concept of a k-core graph. 1-core graph: every node has degree one or more.

28 Concept of a k-core graph. 3-core graph: every node has degree three or more. The 3-core is the highest-order k-core subgraph of graph G.

29 Applications of k-core graphs
Notice that the concept of a k-core graph has been applied to very different areas of research:
- Analyzing protein-protein interaction data obtained from different sources, G. D. Bader and C. W. V. Hogue, Nature Biotechnology, Vol. 20, 2002
- A model of Internet topology using k-shell decomposition, Shai Carmi, Shlomo Havlin, Scott Kirkpatrick, Yuval Shavitt, and Eran Shir, PNAS, vol. 104, no. 27, 2007

30

31 Protein function prediction using k-core graphs

32 Introduction: Function prediction
Schwikowski, B., Uetz, P. and Fields, S. A network of protein-protein interactions in yeast. Nature Biotech. 18, 1257-1261 (2000) deals with a network of 2039 proteins and 2709 interactions; 65% of interactions occurred between protein pairs with at least one common function.
Hishigaki, H., Nakai, K., Ono, T., Tanigami, A., and Tagaki, T. Assessment of prediction accuracy of protein function from protein-protein interaction data. Yeast 18, 523-531 (2001) reported similar results.

33 Introduction: Function prediction
Hypothesis: Function-unknown proteins that form a densely connected subgraph with proteins of a particular function may belong to that functional group. We utilize this concept by determining k-cores of strategically constructed sub-networks.

34 Prediction of Protein Functions Based on K-cores of Protein-Protein Interaction Networks “Prediction of Protein Functions Based on K-cores of Protein-Protein Interaction Networks and Amino Acid Sequences”, Md. Altaf-Ul-Amin, Kensaku Nishikata, Toshihiro Koma, Teppei Miyasato, Yoko Shinbo, Md. Arifuzzaman, Chieko Wada, Maki Maeda, Taku Oshima, Hirotada Mori, Shigehiko Kanaya The 14th International Conference on Genome Informatics December 14-17, 2003, Yokohama Japan.

35 E. coli PPI network: 3007 proteins and 11531 interactions in total; around 2000 are function-unknown proteins. The highest k-core of this whole graph is not very helpful.

36 The 10-core graph, the highest k-core of the E. coli PPI network.

37 We separate the 1072 interactions (out of 11531) that involve protein-synthesis proteins and function-unknown proteins.

38 The function-unknown proteins of this 6-core graph are likely to be involved in protein synthesis.

39 Extending the k-core based function prediction method and applying it to PPI data of Arabidopsis thaliana: "Protein Function Prediction based on k-cores of Interaction Networks", Norihiko Kamakura, Hiroki Takahashi, Kensuke Nakamura, Shigehiko Kanaya and Md. Altaf-Ul-Amin, Proceedings of the 2010 International Conference on Bioinformatics and Biomedical Technology (ICBBT 2010)

40 Materials and Methods: Dataset
All PPI data of Arabidopsis thaliana: 3118 interactions involving 1302 proteins, collected from databases and scientific literature by our laboratory.
Green = unknown proteins (289 proteins); Pink = known proteins (1013 proteins)

41 Materials and Methods: Dataset
Functional groups in the network: the PPI dataset contains proteins of 19 different functions according to the first-level categories of the KNApSAcK database.

42 Materials and Methods: Dataset
The trends of interactions in the context of functional similarity: the diagonal elements show the number of interactions between proteins of similar function.

43 Materials and Methods: Flowchart of the method

44 Results: Subnetworks
In this work we do not consider sub-networks that contain fewer than 100 interactions, and finally we consider the subnetworks corresponding to 9 functional classes. (Table: subnetwork name and number of interactions.)

45 Results: Subnetwork corresponding to cellular communication
As an example, here we show the subnetworks and k-cores corresponding to cellular communication. For the subnetwork extraction we extracted the following 3 types of interactions: cellular communication-cellular communication, cellular communication-unknown, and unknown-unknown, giving a total of 603 interactions.
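A hedged sketch of this sub-network construction in Python/networkx is shown below: keep only the edges whose endpoints are either annotated with the target function or are function-unknown, then take k-cores. The attribute key "function" and the label "unknown" are assumptions made for illustration, not part of the original pipeline.

```python
# Sketch of the sub-network construction described above, under the assumption
# that each node carries a "function" attribute ("unknown" for unannotated proteins).
import networkx as nx

def class_subnetwork(ppi, target_function):
    """Subgraph induced by {target-function proteins} union {unknown proteins}."""
    def keep(node):
        f = ppi.nodes[node].get("function", "unknown")
        return f == target_function or f == "unknown"
    return ppi.subgraph([n for n in ppi if keep(n)]).copy()

def unknowns_in_kcore(ppi, target_function, k):
    """Unknown proteins in the k-core of the class sub-network (candidate predictions)."""
    core = nx.k_core(class_subnetwork(ppi, target_function), k=k)
    return [n for n in core if core.nodes[n].get("function", "unknown") == "unknown"]

# Toy usage: unknowns in the 2-core of the "cellular communication" sub-network
# would be predicted to have that function.
G = nx.Graph()
G.add_node("K1", function="cellular communication")
G.add_node("K2", function="cellular communication")
G.add_node("U1", function="unknown")
G.add_node("U2", function="unknown")
G.add_edges_from([("K1", "K2"), ("K1", "U1"), ("K2", "U1"), ("U1", "U2"), ("K2", "U2")])
print(unknowns_in_kcore(G, "cellular communication", k=2))
```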

46 Results: Subnetwork corresponding to cellular communication
1-core. The red nodes are known proteins; the green nodes are unknown proteins.

47 Results: k-cores corresponding to cellular communication
2-core and 3-core. The red nodes are known proteins; the green nodes are function-unknown proteins.

48 Results: k-cores corresponding to cellular communication
4-core, 5-core, 6-core and 7-core. The red nodes are known proteins; the green nodes are unknown proteins. This figure implies that determining k-cores in strategically constructed sub-networks can reveal which unknown proteins are densely connected to proteins of a particular functional class.

49 Results: Function Predictions
The number of unknown genes included in different k-cores (k-core 2 to k-core 8) corresponding to different functional groups:
cell_cycle: 11, 7
cell_rescue: 4
cellular_communication: 37, 33, 23, 15, 12, 8
energy: 5, 2, 2, 2, 2, 2, 2
metabo: 5, 1, 1
protein_fate: 69, 35, 25, 15, 10
protein_synthesis: 2
transcription: 33, 24, 14, 11, 8, 8
transport_facilitation: 2
total: 129, 88, 64, 52, 36, 27, 2

50 Results: Function Predictions
Prediction based on 2-cores, 3-cores and 4-cores (figure panels: 2-core, 3-core, 4-core). Most proteins have been assigned unique functions and some have been assigned multiple functions.

51 Assessment of Predictions
As most of the function-predicted proteins are still unknown, their annotations do not contain clear information on their functions. To assess the predictions statistically, we constructed 1000 random graphs consisting of the same 1302 proteins but with 3118 edges inserted randomly, and constructed sub-networks in the same way. When k is much larger than one, the effect of false positives is greatly reduced.

52 Assessment of Predictions
The box plots show the distribution of k-core sizes in the 1000 random graphs corresponding to each sub-network, and the filled triangles show the sizes of the k-cores in the real PPI sub-networks.

53 Assessment of Predictions
It can be theoretically concluded that the existence of higher-order k-core graphs in the PPI sub-networks, compared to the random graphs of the same size, is likely to be due to interactions between proteins of similar function. Therefore we assume that function predictions based on k-cores, for values of k greater than the highest value of k found in the corresponding random graphs, are statistically significant predictions. Based on this, we predicted the functions of 67 proteins (the list is available online at http://kanaya.naist.jp/Kcore/supplementary/Function_prediction.xls).

54 “Prediction of Protein Functions Based on Protein- Protein Interaction Networks: A Min-Cut Approach”, Md. Altaf-Ul-Amin, Toshihiro Koma, Ken Kurokawa, Shigehiko Kanaya, Proceedings of the Workshop on Biomedical Data Engineering (BMDE), Tokyo, Japan, pp. 37-43, April 3-4, 2005.

55 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

56 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

57 Introduction
After the complete sequencing of several genomes, the challenging problem now is to determine the functions of proteins.
1) Determining protein functions experimentally
2) Using various computational methods: a) sequence, b) structure, c) gene neighborhood, d) gene fusions, e) cellular localization, f) protein-protein interactions

58 Introduction
The present work predicts protein functions based on a protein-protein interaction network. For the purpose of prediction, we consider the interactions of function-unknown proteins with function-known proteins, and of function-unknown proteins with function-unknown proteins, in the context of the whole network.

59 Introduction
The majority of protein-protein interactions are between protein pairs of similar function. Therefore, we assign function-unknown proteins to different functional groups in such a way that the number of inter-group interactions becomes minimum. Hence we call the proposed approach a Min-Cut approach.

60 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

61 The concept of Min-Cut
A typical small network of known proteins (grouped into G1 and G2) and unknown proteins U1-U4 (figure).

62 The concept of Min-Cut
Unknown proteins assigned to the known groups G1 and G2 based on majority interactions (figure).

63 The concept of Min-Cut
Number of CUT = 4 (figure).

64 The concept of Min-Cut
An alternative assignment of the unknown proteins (figure).

65 The concept of Min-Cut
Number of CUT = 2. For every assignment of the unknown proteins there is a value of CUT; the Min-Cut approach looks for an assignment for which the value of CUT is minimum.
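The CUT value itself is easy to compute for any candidate assignment; a small Python sketch is given below. The toy edge list is illustrative and not the exact network shown in the figure.

```python
# CUT of an assignment: the number of edges whose endpoints end up in different groups.
import networkx as nx

def cut_value(graph, known_groups, assignment):
    """known_groups and assignment both map node -> group name (e.g. 'G1', 'G2')."""
    group = {**known_groups, **assignment}
    return sum(1 for u, v in graph.edges() if group[u] != group[v])

# Toy network loosely in the spirit of the figure (edges are illustrative).
G = nx.Graph([("K1", "K2"), ("K2", "K3"), ("K4", "K5"), ("K5", "K6"),
              ("U1", "K2"), ("U1", "K5"), ("U2", "K3"), ("U2", "U1")])
known = {"K1": "G1", "K2": "G1", "K3": "G1", "K4": "G2", "K5": "G2", "K6": "G2"}

print(cut_value(G, known, {"U1": "G1", "U2": "G1"}))  # one assignment
print(cut_value(G, known, {"U1": "G2", "U2": "G1"}))  # an alternative assignment
```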

66 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

67 Problem Formulation Here we explain some points with a typical example.

68 Problem Formulation
V = set of all nodes, E = set of all edges
G = {K1, K2, K3, K4, K5, K6, K7, K8, K9, K10}
U = {U1, U2, U3, U4, U5, U6, U7, U8}

69 Problem Formulation
We generate U′ ⊆ U such that each protein of U′ is connected in N with at least one protein of group G by a path of length 1 or length 2.
U′ = {U1, U2, U3, U4, U5, U6, U7}

70 Problem Formulation
We can assign the proteins of U′ to different groups and calculate the CUT. For this assignment of unknown proteins, CUT = 6. Interactions between known protein pairs can never be part of the CUT.

71 Problem Formulation
The problem we are trying to solve is to assign the proteins of the set U′ to the known groups G1, G2, …, G3 in such a way that the CUT becomes minimum.

72 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

73 A Heuristic Method
The problem at hand is a variant of the network partitioning problem, and network partitioning problems are known to be NP-hard. Therefore, we resort to heuristics to find as good a solution as possible.

74 A Heuristic Method
Priority table to be filled for the unknown proteins U1, U2, U3, U4, U5, U6, U7.

75 A Heuristic Method
Priority table: U1: G2, G1, x
U1 has one path of length 1 with G2 and two paths of length 2 with G1.

76 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3
U4 has two paths of length 1 with G1, one path of length 1 with G2 and one path of length 2 with G3.

77 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3; U5: G1, G2, G3; U6: G1, G3, G2; U7: G3, G2, x

78 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3; U5: G1, G2, G3; U6: G1, G3, G2; U7: G3, G2, x

79 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3; U5: G1, G2, G3; U6: G1, G3, G2; U7: G3, G2, x
By assigning all the unknown proteins to their respective highest-priority groups, CUT = 6.

80 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3; U5: G1, G2, G3; U6: G1, G3, G2; U7: G3, G2, x
For this assignment of unknown proteins, CUT = 7.

81 A Heuristic Method
Priority table: U1: G2, G1, x; U2: G2, G1, x; U3: G2, G1, x; U4: G1, G2, G3; U5: G1, G2, G3; U6: G1, G3, G2; U7: G3, G2, x
For this assignment of unknown proteins, CUT = 4.
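The sketch below illustrates the priority idea behind this heuristic in Python: rank the known groups for each unknown protein by its numbers of length-1 and length-2 paths to them, then assign greedily. The published method is more elaborate (for example, it explores many candidate assignments up to a Max_value limit), so this is only a simplified illustration; counting length-2 paths only through unknown intermediates is also an assumption.

```python
# Simplified sketch of the priority ranking used by the heuristic.
import networkx as nx
from collections import Counter

def group_priorities(graph, node, known_groups):
    """Rank known groups by (#length-1 paths, #length-2 paths) from `node`."""
    len1, len2 = Counter(), Counter()
    for nb in graph.neighbors(node):
        if nb in known_groups:
            len1[known_groups[nb]] += 1
        else:                                   # step through another unknown protein
            for nb2 in graph.neighbors(nb):
                if nb2 in known_groups:
                    len2[known_groups[nb2]] += 1
    groups = set(len1) | set(len2)
    return sorted(groups, key=lambda g: (len1[g], len2[g]), reverse=True)

def greedy_assignment(graph, known_groups, unknowns):
    """Assign every unknown protein to its highest-priority group."""
    return {u: group_priorities(graph, u, known_groups)[0]
            for u in unknowns
            if group_priorities(graph, u, known_groups)}

# Usage on the toy network from the CUT example above (names are illustrative).
G = nx.Graph([("K1", "K2"), ("K2", "K3"), ("K4", "K5"), ("K5", "K6"),
              ("U1", "K2"), ("U1", "K5"), ("U2", "K3"), ("U2", "U1")])
known = {"K1": "G1", "K2": "G1", "K3": "G1", "K4": "G2", "K5": "G2", "K6": "G2"}
print(greedy_assignment(G, known, ["U1", "U2"]))
```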

82 Outline Introduction The concept of Min-Cut Problem Formulation A Heuristic Method Evaluation of the Proposed Method Conclusions

83 Evaluation of the Proposed Approach
The proposed method is general and can be applied to any organism and any type of functional classification. Here we apply it to the protein-protein interaction network of the yeast Saccharomyces cerevisiae. We obtained the protein-protein interaction data from ftp://ftpmips.gsf.de/yeast/PPI/, which contains 15613 genetic and physical interactions.

84 Evaluation of the Proposed Approach
YAR019c YMR001c
YAR019c YNL098c
YAR019c YOR101w
YAR019c YPR111w
YAR027w YAR030c
YAR027w YBR135w
YAR031w YBR217w
...
Total 12487 pairs. We discard self-interactions and extract a set of 12487 unique binary interactions involving 4648 proteins.

85 A network of 12487 interactions and 4648 proteins is reasonably big Evaluation of the Proposed Approach

86 Evaluation of the Proposed Approach
We collect the functional classification data from http://mips.gsf.de/genre/proj/yeast/index.jsp.

87 Evaluation of the Proposed Approach
The proposed approach is intended to predict the functions of function-unknown proteins. However, when the functions of truly unknown proteins are predicted, it is not possible to check whether the predictions are correct. We therefore consider around 10% randomly selected proteins of each group of Table 1 as function-unknown proteins.

88 Evaluation of the Proposed Approach
The union of the 10% selections from all groups consists of 604 proteins; this is the unknown group U. The union of the remaining 90% of each functional group constitutes the set of known proteins G, with a total of 3783 proteins. We generate U′ ⊆ U such that each protein of U′ is connected in N with at least one protein of group G by a path of length 1 or length 2; there are 470 proteins in U′. We predicted the functions of these 470 proteins using the proposed method.
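A sketch of this hold-out evaluation protocol is given below; `predict_functions` stands for whatever prediction routine is plugged in (for example the Min-Cut heuristic) and is assumed, not implemented here.

```python
# Hold-out evaluation sketch: hide about 10% of the proteins in each functional
# group, predict their functions, and report the fraction recovered correctly.
import random
from collections import defaultdict

def evaluate(graph, annotations, predict_functions, holdout=0.10, seed=0):
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for protein, group in annotations.items():
        by_group[group].append(protein)

    hidden = []                                      # the artificial "unknown" set U
    for group, members in by_group.items():
        k = max(1, round(holdout * len(members)))
        hidden.extend(rng.sample(members, k))

    known = {p: g for p, g in annotations.items() if p not in set(hidden)}
    predicted = predict_functions(graph, known, hidden)   # assumed prediction routine

    correct = sum(1 for p in hidden if predicted.get(p) == annotations[p])
    return correct / len(hidden)                     # success rate of prediction
```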

89 Evaluation of the Proposed Approach
We applied this algorithm using Max_value = 50000 to predict the functions of the 470 proteins.

90 Evaluation of the Proposed Approach
We cannot guarantee that the minimum CUT corresponds to the maximum number of successful predictions. However, the trend in the figure above shows that it is very likely that the lower the value of CUT, the greater the number of successful predictions.

91 Evaluation of the Proposed Approach
We then examine the relation between successful predictions and the degree of the proteins in the network. For example, the degree of U4 is 7 and the degree of U7 is 3.

92 Evaluation of the Proposed Approach
We then examine the relation between successful predictions and the degree of the proteins in the network.

93 Evaluation of the Proposed Approach
The success rate of prediction is as low as 30.46% for proteins that have degree one in the interaction network, but it is 67.61% for proteins of degree 8 or more. This implies that the reliability of the prediction can be improved by providing a reasonable amount of interaction information.

94 Centrality measures of nodes

95 Centrality measures
Within graph theory and network analysis, there are various measures of the centrality of a vertex within a graph that determine the relative importance of that vertex within the graph. We will discuss the following centrality measures:
Degree centrality
Betweenness centrality
Closeness centrality
Eigenvector centrality
Subgraph centrality

96 Degree centrality
Degree centrality is defined as the number of links incident upon a node, i.e. the degree of the node. Degree centrality is often interpreted in terms of the immediate risk of the node catching whatever is flowing through the network (such as a virus, or some information). The degree centrality of the blue nodes is higher.
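With networkx, degree centrality is a one-liner; the star graph below is a toy example, not the figure on the slide.

```python
# Degree centrality with networkx on a toy star graph.
# nx.degree_centrality returns degree / (n - 1); raw degrees come from G.degree.
import networkx as nx

G = nx.star_graph(4)               # node 0 is the hub of a 5-node star
print(dict(G.degree()))            # raw degrees: hub has degree 4, leaves 1
print(nx.degree_centrality(G))     # hub: 1.0, leaves: 0.25
```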

97 Betweenness centrality
The vertex betweenness centrality BC(v) of a vertex v is defined as follows:
BC(v) = Σ_{u ≠ v ≠ w} σ_uw(v) / σ_uw
Here σ_uw is the total number of shortest paths between nodes u and w, and σ_uw(v) is the number of shortest paths between u and w that pass through node v. Vertices that occur on many shortest paths between other vertices have higher betweenness than those that do not.

98 Betweenness centrality
Example graph with nodes a, b, c, d, e, f. Calculation for node c:

Pair    σ_uw   σ_uw(c)   σ_uw(c)/σ_uw
(a,b)   1      0         0
(a,d)   1      1         1
(a,e)   1      1         1
(a,f)   1      1         1
(b,d)   1      1         1
(b,e)   1      1         1
(b,f)   1      1         1
(d,e)   1      0         0
(d,f)   1      0         0
(e,f)   1      0         0

Betweenness centrality of node c = 6; betweenness centrality of node a = 0.
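The worked example can be checked with networkx. The edge list below is one graph consistent with the table (the slide's figure is not reproduced here, so this exact topology is an assumption): a, b and c form a triangle, d, e and f form a triangle, and c is additionally connected to d, e and f.

```python
# Verifying the worked example: unnormalized betweenness of c should be 6, of a should be 0.
import networkx as nx

G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"),
              ("c", "d"), ("c", "e"), ("c", "f"),
              ("d", "e"), ("d", "f"), ("e", "f")])

bc = nx.betweenness_centrality(G, normalized=False)
print(bc["c"], bc["a"])   # expected: 6.0 and 0.0, matching the table
```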

99 Betweenness centrality
Hue (from red = 0 to blue = max) shows the node betweenness. Nodes of high betweenness centrality are important for transport: if they are blocked, transport becomes less efficient, and if their capacity is improved, transport becomes more efficient. Edge betweenness is calculated using a similar concept. http://en.wikipedia.org/wiki/Betweenness_centrality#betweenness

100 Closeness centrality
The farness of a vertex is the sum of the shortest-path distances from the vertex to every other vertex in the graph. The reciprocal of the farness is the closeness centrality (CC):
CC(v) = 1 / Σ_t d(v,t)
Here d(v,t) is the shortest distance between vertex v and vertex t. Closeness centrality can be viewed as the efficiency of a vertex in spreading information to all other vertices.

101 Eigenvector centrality
Let A be the adjacency matrix of a graph (N×N), λ the largest eigenvalue of A, and x the corresponding eigenvector (N×1); then
A x = λ x -----(1)
(λ is obtained from |A − λI| = 0, where I is an identity matrix.)
The i-th component of the eigenvector x gives the eigenvector centrality score of the i-th node in the network. From (1),
x_i = (1/λ) Σ_j A_ij x_j
Therefore, for any node, the eigenvector centrality score is proportional to the sum of the scores of all nodes connected to it. Consequently, a node has a high value of EC either if it is connected to many other nodes or if it is connected to others that themselves have high EC.
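A short sketch showing both the networkx call and the underlying linear-algebra view (the leading eigenvector of the adjacency matrix); the toy graph is arbitrary.

```python
# Eigenvector centrality: networkx call vs. the leading eigenvector of A.
import numpy as np
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])
print(nx.eigenvector_centrality(G, max_iter=1000))

A = nx.to_numpy_array(G)
eigval, eigvec = np.linalg.eigh(A)        # symmetric adjacency matrix
x = np.abs(eigvec[:, -1])                 # eigenvector for the largest eigenvalue
print(x / np.linalg.norm(x))              # matches the networkx result up to scaling
```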

102 Subgraph centrality
The number of closed walks of length k starting and ending on vertex i in the network is given by the local spectral moment μ_k(i), which is simply defined as the i-th diagonal entry of the k-th power of the adjacency matrix A:
μ_k(i) = (A^k)_ii
Closed walks can be trivial or nontrivial and are directly related to the subgraphs of the network.
Source: Subgraph Centrality in Complex Networks, Physical Review E 71, 056103 (2005)

103 Subgraph centrality
Adjacency matrix: M_uv = 1 if there is an edge between nodes u and v, and 0 otherwise.

M =
0 1 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 0 1 0 0 0 0 0 0 0 0
0 1 0 1 1 1 0 0 0 0 0 0 0 0
0 1 1 0 1 1 0 1 0 0 0 0 0 0
0 0 1 1 0 1 0 0 0 0 0 0 0 0
0 1 1 1 1 0 1 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 1 0 0 1 1
0 0 0 0 0 0 0 0 1 0 1 0 1 1
0 0 0 0 0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 1 1 0 1 0 1
0 0 0 0 0 0 0 0 1 1 0 0 1 0

104 Subgraph centrality
M² =
1 0 1 1 0 1 0 0 0 0 0 0 0 0
0 4 2 2 3 2 1 1 0 0 0 0 0 0
1 2 4 3 2 3 1 1 0 0 0 0 0 0
1 2 3 5 2 3 1 0 1 0 0 0 0 0
0 3 2 2 3 2 1 1 0 0 0 0 0 0
1 2 3 3 2 5 0 1 0 0 1 0 0 0
0 1 1 1 1 0 2 0 0 1 0 0 0 0
0 1 1 0 1 1 0 2 0 1 0 0 1 1
0 0 0 1 0 0 0 0 4 2 1 1 2 2
0 0 0 0 0 0 1 1 2 4 0 1 2 2
0 0 0 0 0 1 0 0 1 0 2 0 1 1
0 0 0 0 0 0 0 0 1 1 0 1 0 1
0 0 0 0 0 0 0 1 2 2 1 0 4 2
0 0 0 0 0 0 0 1 2 2 1 1 2 3
(M²)_uv for u ≠ v represents the number of common neighbors of nodes u and v; the diagonal entries are the local spectral moments μ_2(u).

105 Subgraph centrality
The subgraph centrality of node i is given by
SC(i) = Σ_{k=0}^{∞} μ_k(i) / k! = (e^A)_ii
Let λ be the main (largest) eigenvalue of the adjacency matrix A. It can be shown, by expressing SC(i) through the eigenvalues and eigenvectors of A, that the subgraph centrality of any vertex i is bounded above by e^λ.
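Numerically, the subgraph centrality is just the diagonal of the matrix exponential of the adjacency matrix. The sketch below applies this to the 14-node adjacency matrix of slide 103 (as reconstructed above) and cross-checks against networkx's built-in routine.

```python
# Subgraph centrality of the 14-node example: SC(i) = [e^M]_ii.
import numpy as np
import networkx as nx
from scipy.linalg import expm

M = np.array([
    [0,1,0,0,0,0,0,0,0,0,0,0,0,0],
    [1,0,1,1,0,1,0,0,0,0,0,0,0,0],
    [0,1,0,1,1,1,0,0,0,0,0,0,0,0],
    [0,1,1,0,1,1,0,1,0,0,0,0,0,0],
    [0,0,1,1,0,1,0,0,0,0,0,0,0,0],
    [0,1,1,1,1,0,1,0,0,0,0,0,0,0],
    [0,0,0,0,0,1,0,0,0,0,1,0,0,0],
    [0,0,0,1,0,0,0,0,1,0,0,0,0,0],
    [0,0,0,0,0,0,0,1,0,1,0,0,1,1],
    [0,0,0,0,0,0,0,0,1,0,1,0,1,1],
    [0,0,0,0,0,0,1,0,0,1,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0,0,0,1,0],
    [0,0,0,0,0,0,0,0,1,1,0,1,0,1],
    [0,0,0,0,0,0,0,0,1,1,0,0,1,0],
], dtype=float)

sc = np.diag(expm(M))                                   # SC(i) = sum_k mu_k(i) / k!
print(np.round(sc, 3))
print(nx.subgraph_centrality(nx.from_numpy_array(M)))   # same values, per node
```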

106 Table 2. Summary of results of eight real-world complex networks.

107 Software, Open Access: Exploration of biological network centralities with CentiBiN
Björn H Junker, Dirk Koschützki and Falk Schreiber
Department of Molecular Genetics, Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Corrensstr. 3, 06466 Gatersleben, Germany
Email: Björn H Junker - junker@ipk-gatersleben.de; Dirk Koschützki - koschuet@ipk-gatersleben.de; Falk Schreiber - schreibe@ipk-gatersleben.de
BMC Bioinformatics 2006, 7:219, doi:10.1186/1471-2105-7-219

108 Centrality values of the nodes of an example graph calculated using CentiBiN

Node  Degree  Closeness  Eccentricity  Betweenness  Eigenvector
n1    1       0.04       0.2           0            0.215411
n2    1       0.04       0.2           0            0.215411
n3    1       0.04       0.2           0            0.215411
n4    5       0.058824   0.25          21           0.5717
n5    2       0.05       0.25          0            0.394844
n6    3       0.0625     0.333333      20           0.476217
n7    3       0.055556   0.333333      18.5         0.297338
n8    2       0.041667   0.25          3.5          0.156459
n9    2       0.033333   0.2           0.5          0.117904
n10   2       0.041667   0.25          3.5          0.156459

