Gene expression & Clustering (Chapter 10)

Determining gene function

- Sequence comparison tells us if a gene is similar to another gene, e.g., in a new species
  - Dynamic programming
  - Approximate pattern matching
- Genes with similar sequences are likely to have similar function
- This doesn't always work: "homologous" genes may not be similar enough at the sequence level to be detected this way
- A newer method to determine gene function: directly measure gene activity (DNA arrays)

DNA Arrays: Technical Foundations

- An array works by exploiting the ability of a given mRNA molecule to hybridize to the DNA template
- Using an array containing many DNA samples in an experiment, the expression levels of hundreds or thousands of genes within a cell can be measured via the amount of mRNA bound to each site on the array
- With the aid of a computer, the amount of mRNA bound to the spots on the microarray is precisely measured, generating a profile of gene expression in the cell

(Source: http://www.ncbi.nih.gov/About/primer/microarrays.html)

An experiment on a microarray

In this schematic:
- GREEN represents Control DNA
- RED represents Sample DNA
- YELLOW represents a combination of Control and Sample DNA
- BLACK represents areas where neither the Control nor the Sample DNA hybridized

Each color in an array represents either healthy (control) or diseased (sample) tissue. The location and intensity of a color tell us whether the gene is present in the control and/or sample DNA.

(Source: http://www.ncbi.nih.gov/About/primer/microarrays.html)

DNA Microarray

- Millions of DNA strands are built up at each location
- Tagged probes become hybridized to the DNA chip's microarray

(Source: http://www.affymetrix.com/corporate/media/image_library/image_library_1.affx)

Gene expression

- A microarray experiment gives us an n x m expression matrix I
  - Each of the n rows corresponds to a gene
  - Each of the m columns corresponds to a condition or time point
  - Each column comes from one microarray
- I(j,k) is the expression level of gene j in condition/experiment k
- If two genes (rows) have similar "expression profiles", then
  - they may be related in function
  - they may be "co-regulated"

Clustering

- Find groups of genes that have similar expression profiles to one another
- Such groups may be functionally related, and/or co-regulated
- Compute a pairwise distance metric d(i,j) for every pair of genes i and j
- This gives an n x n "distance matrix" d (see the sketch below)
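As a concrete illustration (not from the slides; the toy expression values and the choice of Euclidean distance are mine), here is how the n x n distance matrix d can be computed from an n x m expression matrix I:

import numpy as np

# Toy 4 x 3 expression matrix I: 4 genes measured under 3 conditions.
I = np.array([
    [2.0, 3.1, 0.5],
    [1.9, 3.0, 0.6],
    [0.1, 0.2, 4.0],
    [0.0, 0.3, 4.2],
])

n = I.shape[0]
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # d(i,j): Euclidean distance between the expression profiles of genes i and j
        d[i, j] = np.linalg.norm(I[i] - I[j])

print(np.round(d, 2))  # genes 0 and 1 are close, as are genes 2 and 3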

Goal of clustering

To group genes into clusters such that:
- Genes within a cluster have highly similar expression profiles (small d(i,j)): "homogeneity"
- Genes in different clusters have very different expression profiles (large d(i,j)): "separation"

A "good" clustering is one that adheres to these goals. A really "good" clustering is decided by biological interpretation of the clusters.
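One simple way to quantify the two criteria (my own reading of the slide: homogeneity as the average within-cluster distance, separation as the average between-cluster distance; smaller and larger are better, respectively):

import numpy as np
from itertools import combinations

def homogeneity(d, clusters):
    # average distance over all pairs that share a cluster (smaller is better)
    pairs = [(i, j) for C in clusters for i, j in combinations(C, 2)]
    return np.mean([d[i, j] for i, j in pairs]) if pairs else 0.0

def separation(d, clusters):
    # average distance over all pairs in different clusters (larger is better)
    pairs = [(i, j) for A, B in combinations(clusters, 2) for i in A for j in B]
    return np.mean([d[i, j] for i, j in pairs]) if pairs else 0.0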

Clustering of Microarray Data

[figure: expression profiles grouped into clusters]

Clustering problems

- How to measure distance/similarity?
- How many clusters?
- Very large data sets (~10,000 genes, ~100 conditions) create computational difficulties

Hierarchical clustering

- One approach to clustering
- Does not explicitly partition genes into groups
- Organizes genes into a tree; genes are at the leaves of the tree
- Edges have lengths
- The total path length between two genes (leaves) correlates with the distance between the genes

Hierarchical Clustering: Example

[figures: a dendrogram is built step by step as the closest clusters are merged]

Hierarchical Clustering Algorithm

HierarchicalClustering(d, n)
  Form n clusters, each with one element
  Construct a graph T by assigning one vertex to each cluster
  while there is more than one cluster
    Find the two closest clusters C1 and C2
    Merge C1 and C2 into a new cluster C with |C1| + |C2| elements
    Compute the distance from C to all other clusters
    Add a new vertex C to T and connect it to vertices C1 and C2
    Remove the rows and columns of d corresponding to C1 and C2
    Add a row and column to d corresponding to the new cluster C
  return T

Different ways to define distances between clusters may lead to different clusterings.
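A minimal from-scratch sketch of the loop above in Python (names are mine; single linkage, i.e. d_min, is used to recompute distances):

import numpy as np

def hierarchical_clustering(d):
    """d: n x n distance matrix; returns the list of merges (the tree T)."""
    n = d.shape[0]
    clusters = {i: [i] for i in range(n)}   # cluster id -> member genes
    dist = {(i, j): d[i, j] for i in range(n) for j in range(i + 1, n)}
    tree, next_id = [], n
    while len(clusters) > 1:
        a, b = min(dist, key=dist.get)      # the two closest clusters
        merged = clusters.pop(a) + clusters.pop(b)
        # drop rows/columns for a and b, then add distances for the new cluster
        dist = {k: v for k, v in dist.items() if a not in k and b not in k}
        for c, members in clusters.items():
            dist[(c, next_id)] = min(d[x, y] for x in merged for y in members)
        clusters[next_id] = merged
        tree.append((a, b, next_id))
        next_id += 1
    return tree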

Hierarchical Clustering: Recomputing Distances

- d_min(C, C*) = min d(x,y) over all elements x in C and y in C*
  - The distance between two clusters is the smallest distance between any pair of their elements
- d_avg(C, C*) = (1 / (|C| |C*|)) ∑ d(x,y) over all elements x in C and y in C*
  - The distance between two clusters is the average distance between all pairs of their elements
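In practice a library does this; a small SciPy sketch (assuming SciPy is available; the toy data are mine) shows that the choice between d_min ("single" linkage) and d_avg ("average" linkage) can change the resulting clusters:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
profiles = rng.normal(size=(10, 5))              # 10 genes x 5 conditions (toy data)

Z_single = linkage(profiles, method='single')    # recompute with d_min
Z_average = linkage(profiles, method='average')  # recompute with d_avg

# cut each tree into 3 clusters and compare the labelings
print(fcluster(Z_single, t=3, criterion='maxclust'))
print(fcluster(Z_average, t=3, criterion='maxclust'))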

K-means clustering

- Another popular solution to the clustering problem
- Guess a number k, which is the number of clusters that will be reported
- Finds explicit clusters (unlike hierarchical clustering)

Objective function

- Let V = (v1, v2, ..., vn) be the n data points
- Each data point is a vector in an m-dimensional space
- Let X be a set of k points in the same vector space
- For each vi, find the x ∈ X that is closest to it, i.e., the Euclidean distance d(vi, x) is least
- Sum the square of this Euclidean distance, over all vi
- Formally, d(V, X) = ∑_{i=1..n} min_{x ∈ X} d(vi, x)²

Objective function (cont'd)

Find X such that d(V,X) is minimized.
- Input: a set V consisting of n points, and a parameter k
- Output: a set X consisting of k points (cluster centers) that minimizes d(V,X) over all possible choices of X
- An NP-hard problem for k > 1; easy for k = 1!
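A direct transcription of the objective into Python (the data points and candidate centers below are made up for illustration):

import numpy as np

def squared_error(V, X):
    """d(V, X): sum over all points of the squared distance to the nearest center."""
    total = 0.0
    for v in V:
        nearest = min(np.linalg.norm(v - x) for x in X)  # distance to the closest x in X
        total += nearest ** 2
    return total

V = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(squared_error(V, np.array([[0.0, 0.0], [5.0, 5.0]])))  # well-placed centers: small cost
print(squared_error(V, np.array([[2.5, 2.5]])))              # k = 1: much larger cost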

K-Means Clustering: Lloyd Algorithm

Arbitrarily assign the k cluster centers
while the cluster centers keep changing
  Assign each data point to the cluster Ci corresponding to the closest cluster representative (center) (1 ≤ i ≤ k)
  After the assignment of all data points, compute new cluster representatives according to the center of gravity of each cluster; that is, the new representative of cluster C is (∑ v) / |C| over all v in C

*This may lead to merely a locally optimal clustering.
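A compact Python sketch of Lloyd's algorithm (the initialization from random data points and the empty-cluster guard are my own choices):

import numpy as np

def lloyd_kmeans(V, k, seed=0):
    rng = np.random.default_rng(seed)
    X = V[rng.choice(len(V), size=k, replace=False)]  # arbitrary initial centers
    while True:
        # assignment step: each point joins the cluster of its closest center
        dists = np.linalg.norm(V[:, None, :] - X[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to its cluster's center of gravity
        new_X = np.array([V[labels == i].mean(axis=0) if np.any(labels == i) else X[i]
                          for i in range(k)])
        if np.allclose(new_X, X):  # the centers stopped changing
            return new_X, labels
        X = new_X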

Conservative K-Means Algorithm

- The Lloyd algorithm is fast, but in each iteration it moves many data points, not necessarily causing better convergence
- A more conservative method would be to move one point at a time, and only if it improves the overall clustering cost
- The smaller the clustering cost of a partition of the data points, the better that clustering is
- Different measures (e.g., the d(V,X) we saw earlier) can be used for this clustering cost

K-Means "Greedy" Algorithm

ProgressiveGreedyK-Means(k)
  Select an arbitrary partition P into k clusters
  while forever
    bestChange ← 0
    for every cluster C
      for every element i not in C
        if moving i to cluster C reduces its clustering cost
          if cost(P) − cost(P_i→C) > bestChange
            bestChange ← cost(P) − cost(P_i→C)
            i* ← i
            C* ← C
    if bestChange > 0
      Change partition P by moving i* to C*
    else
      return P

(P_i→C denotes the partition obtained from P by moving element i to cluster C.)
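A Python sketch of one greedy pass (the clustering cost here, squared distance to each cluster's center of gravity, is my own choice of measure):

import numpy as np

def cost(V, clusters):
    """clusters: list of lists of indices into V."""
    total = 0.0
    for members in clusters:
        if members:
            center = V[members].mean(axis=0)
            total += sum(np.linalg.norm(V[i] - center) ** 2 for i in members)
    return total

def greedy_pass(V, clusters):
    """Try every (element, cluster) move; apply the best improving one."""
    base, best_change, best_move = cost(V, clusters), 0.0, None
    for dst in range(len(clusters)):
        for src in range(len(clusters)):
            if src == dst:
                continue
            for i in clusters[src]:
                trial = [list(m) for m in clusters]
                trial[src].remove(i)
                trial[dst].append(i)
                change = base - cost(V, trial)   # cost(P) - cost(P_i->C)
                if change > best_change:
                    best_change, best_move = change, (i, src, dst)
    if best_move is None:
        return False                             # locally optimal: stop
    i, src, dst = best_move
    clusters[src].remove(i)
    clusters[dst].append(i)
    return True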

Corrupted Cliques problem

- Another approach to clustering
- Based on the concept of "cliques"
- Uses a threshold on pairwise distance to decide which genes are "close" and which genes are "distant"
- Builds a graph with an edge between every pair of "close" genes

Clique Graphs

- A clique is a graph with every vertex connected to every other vertex
- A clique graph is a graph where each connected component is a clique

Clique Graphs (cont'd)

- A graph can be transformed into a clique graph by adding or removing edges
- Example (figure): removing two edges to make a clique graph

Corrupted Cliques Problem

Input: a graph G
Output: the smallest number of edge additions and removals that will transform G into a clique graph

Corrupted Cliques & Clustering: Idea

- Build a distance graph G: vertices are genes, with an edge between genes i and j if d(i,j) < θ for some threshold θ
- A clique graph satisfies the homogeneity and separation criteria
- However, G is typically not a clique graph, presumably due to (a) noise in measurement and (b) the absence of a "universally" good threshold θ
- Therefore, find the fewest changes to G that make it a clique graph; the cliques are the desired clusters
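A sketch of the distance-graph construction and the clique-graph test (function names are mine):

import numpy as np
from itertools import combinations

def distance_graph(d, theta):
    """Adjacency matrix: edge (i,j) iff d(i,j) < theta (no self-loops)."""
    n = d.shape[0]
    return (d < theta) & ~np.eye(n, dtype=bool)

def is_clique_graph(adj):
    """True iff every connected component is a clique."""
    n = adj.shape[0]
    seen = set()
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]          # collect start's component by DFS
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(v for v in range(n) if adj[u, v])
        seen |= comp
        # inside a component, every pair must be directly connected
        if any(not adj[u, v] for u, v in combinations(comp, 2)):
            return False
    return True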

Heuristics for the Corrupted Cliques Problem

The Corrupted Cliques problem is NP-hard, but some heuristics exist to solve it approximately:
- PCC (Parallel Classification with Cores): a time-consuming algorithm
- CAST (Cluster Affinity Search Technique): a practical and fast algorithm

PCC

- Given a set of genes S and its distance graph G, suppose we knew the "correct clustering" for a subset S' of S. Could we extend this to a clustering of S?
- For every gene j ∈ S \ S', and for every cluster Ci, calculate the "affinity" of j to Ci: N(j, Ci) / |Ci|, where N(j, Ci) is the number of edges (in G) between j and genes in Ci
- Add every unclustered gene j to its maximum-affinity cluster
- If S' is sufficiently large, and the clustering of S' is correct, then the above approach is likely to be correct
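A sketch of the extension step in Python (the affinity is N(j, Ci)/|Ci| as above; names are mine):

def affinity(adj, j, cluster):
    # fraction of the cluster's genes adjacent to gene j in the distance graph
    return sum(1 for c in cluster if adj[j, c]) / len(cluster)

def extend_clustering(adj, clusters, unclustered):
    """clusters: list of lists of gene indices; unclustered: genes in S but not in S'."""
    for j in unclustered:
        best = max(clusters, key=lambda C: affinity(adj, j, C))
        best.append(j)                  # add j to its maximum-affinity cluster
    return clusters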

PCC (cont'd)

- We assumed we knew the "correct clustering" for the subset S' of S. How do we get around this? Generate all possible clusterings of S'! This takes time O(k^|S'|)
- Choose a small subset S'. But that leads to a less accurate solution (we also assumed that S' was "large enough")
- Tradeoff between speed and accuracy
- The authors of this algorithm used |S'| = log log n, which gives k^|S'| = (log n)^c for some constant c. Extending each such clustering takes O(n^2) time, for O(n^2 (log n)^c) overall. Too slow.

CAST

CAST (Cluster Affinity Search Technique): a practical and fast algorithm.
- CAST is based on the notion of genes close to cluster C or distant from cluster C
- Distance between gene i and cluster C: d(i,C) = average distance between gene i and all genes in C
- Gene i is close to cluster C if d(i,C) < θ, and distant otherwise

CAST Algorithm

CAST(S, G, θ)
  P ← Ø
  while S ≠ Ø
    v ← vertex of maximal degree in the distance graph G
    C ← {v}
    while a close gene i not in C, or a distant gene i in C, exists
      Find the nearest close gene i not in C and add it to C
      Remove the farthest distant gene i in C
    Add cluster C to partition P
    S ← S \ C
    Remove the vertices of cluster C from the distance graph G
  return P

S – set of elements, G – distance graph, θ – distance threshold
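A Python sketch of CAST (computing d(i,C) as the plain average over C, and the seeding and stopping details, follow my reading of the pseudocode above):

import numpy as np

def cast(d, theta):
    """d: full n x n distance matrix; theta: distance threshold."""
    n = d.shape[0]
    adj = d < theta                      # the distance graph
    remaining = set(range(n))
    partition = []
    while remaining:
        # seed C with the vertex of maximal degree among the remaining genes
        v = max(remaining, key=lambda u: sum(adj[u, w] for w in remaining if w != u))
        C = {v}
        changed = True
        while changed:
            changed = False
            avg = lambda i: np.mean([d[i, j] for j in C])    # d(i, C)
            close = [i for i in remaining - C if avg(i) < theta]
            if close:
                C.add(min(close, key=avg))       # add the nearest close gene
                changed = True
            distant = [i for i in C if avg(i) >= theta] if len(C) > 1 else []
            if distant:
                C.remove(max(distant, key=avg))  # remove the farthest distant gene
                changed = True
        partition.append(C)
        remaining -= C
    return partition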