
1 SAURABH KIROLIKAR

2 BIOINFORMATICS: DEFINITIONS http://www.ittc.ku.edu/bioinfo_seminar/images/wheel.gif

3 WHY DID WE SELECT THESE PAPERS?

4 What is Gene Expression?  It is the process by which information from a gene is used in the synthesis of a functional gene product.  These products are often proteins, but for non-protein-coding genes such as rRNA or tRNA genes, the product is a functional RNA.

5 Steps of Gene Expression  Transcription  RNA splicing  Translation  Post-translational modification

6 What is Gene Expression Profiling?  It is the measurement of the expression of thousands of genes at once, to create a global picture of cellular function.  It can distinguish cells that are actively dividing from those that are not, or show how cells react to a particular treatment.

7 Gene Expression Profiling  Expression profiling is a logical next step after sequencing a genome: the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing now.

8 http://www.accessexcellence.org/RC/VL/GG/images/microarray_technology.gif

9 BIOINFORMATICS OF GENE EXPRESSION PROFILING. http://www.wormbook.org/chapters/www_germlinegenomics/germlinegenomicsfig1.jpg

10  Normalization  Filtering Data  Statistical analysis  Clustering  Gene Ontology  Pathway analysis

11 SNP  Single nucleotide polymorphism, also termed simple nucleotide polymorphism.  SNPs are single-nucleotide variations observed in the human genome.  E.g., AAGCCTA to AAGCTTA: presence of two alleles.  For a variation to be considered a SNP, it must occur in at least 1% of the population. SNPs, which make up about 90% of all human genetic variation, occur every 100 to 300 bases along the 3-billion-base human genome.

12 SNP (Contd….)  SNPs can occur in coding (gene) and noncoding regions of the genome.  Although more than 99% of human DNA sequences are the same, variations in DNA sequence can have a major impact on how humans respond to disease; environmental factors such as bacteria, viruses, toxins, and chemicals; and drugs and other therapies. This makes SNPs valuable for biomedical research and for developing pharmaceutical products or medical diagnostics.

13 SNP (Contd….)  SNPs are also evolutionarily stable.  Scientists believe SNP maps will help them identify the multiple genes associated with complex ailments such as cancer, diabetes, vascular disease, and some forms of mental illness.  SNPs do not cause disease, but they can help determine the likelihood that someone will develop a particular illness.  E.g., ApoE contains two SNPs that result in three possible alleles for this gene: E2, E3, and E4. Each allele differs by one DNA base, and the protein product of each allele differs by one amino acid.

14 FINAL WORDS

15 By Sundaresan Rajasekaran

16 Outline  Problem definition  Current status of the problem  Methods and materials to assess the problem  Current research in progress  Mathematical explanation of the present work  My views  Conclusion

17

18

19

20 Problems posed  Contributors are no longer anonymous.  They can be traced back very easily.  The potential medical issues of the contributors can be identified.  Data is no longer available to researchers.

21 Background  All humans are 99.9% exactly the same.  The 0.1% difference is called a ‘Single Nucleotide Polymorphism’ or SNP.  Alleles at a particular locus can be classified as AA, AB, or BB.  To represent this mathematically, the genotype values can be coded as 0, 0.5, and 1, corresponding to AA, AB, and BB respectively.
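A minimal sketch of this coding in Python (the SNP IDs and genotype calls below are made-up illustrative values, not data from the study):

GENOTYPE_CODE = {"AA": 0.0, "AB": 0.5, "BB": 1.0}

def encode_genotypes(calls):
    # Map {snp_id: genotype string} to {snp_id: numeric allele value}.
    return {snp: GENOTYPE_CODE[g] for snp, g in calls.items()}

individual = {"rs0001": "AA", "rs0002": "AB", "rs0003": "BB"}
print(encode_genotypes(individual))  # {'rs0001': 0.0, 'rs0002': 0.5, 'rs0003': 1.0}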

22 How?  A mixture of various concentrations was constructed.  Pick any random individual – of any race (mostly from the HapMap database).  Find the appropriate reference population by matching the mixture against the ancestral data.

23 Sample Data  Example table of allele-frequency values (0, 0.5, or 1) at a set of SNPs, with one column each for the SNP #, the mixture, the individual, and the reference population.

24 The main picture

25 Calculations  We calculate D(Y_i,j) = |Y_i,j − Pop_j| − |Y_i,j − M_j|  Y_i,j is the allele frequency estimate for individual i at SNP j; M_j and Pop_j are the allele frequencies of the mixture and of the reference population at SNP j, calculated with the same formula.  The difference |Y_i,j − M_j| measures how far the mixture's allele frequency M_j at SNP j is from the individual's allele frequency Y_i,j.  The difference |Y_i,j − Pop_j| measures how far the reference population's allele frequency Pop_j is from the individual's allele frequency Y_i,j at SNP j.
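A minimal NumPy sketch of this distance measure (the allele frequency arrays are made-up toy values, not data from the study):

import numpy as np

def distance_measure(y, mix, pop):
    # D(Y_i,j) = |Y_i,j - Pop_j| - |Y_i,j - M_j| for every SNP j.
    # Positive values mean the individual is closer to the mixture than
    # to the reference population at that SNP.
    y, mix, pop = (np.asarray(a, dtype=float) for a in (y, mix, pop))
    return np.abs(y - pop) - np.abs(y - mix)

# Toy values for three SNPs:
d = distance_measure(y=[0.0, 0.5, 1.0], mix=[0.1, 0.6, 0.9], pop=[0.3, 0.4, 0.5])
print(d)  # [0.2 0.  0.4]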

26 Test Statistics  By sampling 500K+ SNPs, D(Y_i,j) approaches a normal distribution, and the test statistic is T(Y_i) = (mean of D(Y_i,j) over all SNPs − U0) / (SD(D(Y_i)) / √s),  where U0 is the mean of D(Y_k) over individuals Y_k not in the mixture,  SD(D(Y_i)) is the standard deviation of D(Y_i,j) over all SNPs j for individual Y_i, and s is the number of SNPs.  We assume U0 is zero, since a random individual Y_k should be equally distant from the mixture and from the mixture's reference population.
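A hedged sketch of this test, assuming the one-sample form T = (mean(D) − U0) / (SD(D) / √s) with U0 = 0 as stated above; the simulated D values below are purely illustrative:

import numpy as np
from scipy import stats

def mixture_test_statistic(d, u0=0.0):
    # T = (mean(D) - U0) / (SD(D) / sqrt(s)), s = number of SNPs.
    # Under the null (individual not in the mixture), T is approximately
    # standard normal when s is large.
    d = np.asarray(d, dtype=float)
    s = d.size
    t = (d.mean() - u0) / (d.std(ddof=1) / np.sqrt(s))
    p = stats.norm.sf(t)  # one-sided p-value: large T suggests presence in the mixture
    return t, p

rng = np.random.default_rng(0)
d_out = rng.normal(0.0, 0.01, size=500_000)   # individual not in the mixture
d_in = rng.normal(0.001, 0.01, size=500_000)  # slight shift toward the mixture
print(mixture_test_statistic(d_out))  # T near 0, p around 0.5
print(mixture_test_statistic(d_in))   # large T, very small p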

27 The Normal Example: Testing  Test a hypothesis about the mean with a t-test: t = (x̄ − μ0) / (s / √n), where x̄ is the sample mean, μ0 the hypothesized mean, s the sample standard deviation, and n the sample size.  Under the null hypothesis, t follows a t-distribution with n − 1 degrees of freedom, from which the p-value is obtained.

28 Experimental Validation

29 Can we improve?  Yes. How can we do that?  By increasing the accuracy of the existing system, i.e., by reducing the false positives.  Or we can improve this method by reducing the number of SNPs, i.e., by doing feature reduction.

30  What is feature reduction?  Why feature reduction?  Feature reduction algorithms  Principal Component Analysis (PCA)

31 What is feature reduction?  Feature reduction refers to the mapping of the original high-dimensional data onto a lower-dimensional space. The criterion for feature reduction differs with the problem setting: ○ Unsupervised setting: minimize the information loss ○ Supervised setting: maximize the class discrimination

32 High-dimensional data  Gene expression  Face images  Handwritten digits

33  What is feature reduction?  Why feature reduction?  Feature reduction algorithms  Principal Component Analysis

34 Why feature reduction?  Most machine learning and data mining techniques may not be effective for high-dimensional data: query accuracy and efficiency degrade rapidly as the dimension increases.  The intrinsic dimension may be small. For example, the number of genes responsible for a certain type of disease may be small.

35  What is feature reduction?  Why feature reduction?  Feature reduction algorithms  Principal Component Analysis

36 Feature reduction algorithms  Unsupervised: Latent Semantic Indexing (LSI, i.e. truncated SVD), Independent Component Analysis (ICA), Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA)  Supervised: Linear Discriminant Analysis (LDA)

37 Application to microarrays  Dimension reduction (simplify a dataset): clustering (too many samples), discriminant analysis (find a group of genes)  Exploratory data analysis tool: find the most important signal in the data, 2-D projections (clusters?)

38 Outline  What is feature reduction?  Why feature reduction?  Feature reduction algorithms  Principal Component Analysis

39 What is Principal Component Analysis?  Principal component analysis (PCA) reduces the dimensionality of a data set by finding a new set of variables, smaller than the original set, that retains most of the sample's information. It is useful for the compression and classification of data.  By information we mean the variation present in the sample, given by the correlations between the original variables. The new variables, called principal components (PCs), are uncorrelated.

40 Principal Component Analysis (PCA): Information loss − Dimensionality reduction implies information loss! − PCA preserves as much information as possible: what is the “best” lower-dimensional subspace? − The “best” low-dimensional space is centered at the sample mean and has directions determined by the “best” eigenvectors of the covariance matrix of the data x. − By “best” eigenvectors we mean those corresponding to the largest eigenvalues (i.e., “principal components”).

41 Principal Component Analysis (PCA): Geometric interpretation − PCA projects the data along the directions where the data varies the most. − These directions are determined by the eigenvectors of the covariance matrix corresponding to the largest eigenvalues. − The magnitude of the eigenvalues corresponds to the variance of the data along the eigenvector directions.

42 Singular Value Decomposition (SVD)  Given any m × n matrix A, the SVD algorithm finds matrices U, W, and V such that A = U W V^T, where  U is m × n and orthonormal,  W is n × n and diagonal,  V is n × n and orthonormal.
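A quick NumPy check of this factorization and of the stated shapes, using a random matrix as a stand-in for real data:

import numpy as np

m, n = 6, 4
A = np.random.default_rng(1).normal(size=(m, n))

# Thin SVD: U is m x n with orthonormal columns, W is n x n diagonal,
# V is n x n orthonormal, and A = U W V^T.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
W = np.diag(w)

print(U.shape, W.shape, Vt.shape)       # (6, 4) (4, 4) (4, 4)
print(np.allclose(A, U @ W @ Vt))       # True
print(np.allclose(U.T @ U, np.eye(n)))  # True: columns of U are orthonormal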

43 SVD

44 Quick Summary of PCA  1. Organize the data as an m × n matrix X, where m is the number of measurement types and n is the number of samples.  2. Subtract off the mean for each measurement type.  3. Calculate the SVD or the eigenvectors of the covariance. To perform SVD, first calculate the new matrix Y ≡ (1/√n) X^T, so that Y is normalized along its dimensions. Performing SVD on Y yields the principal components of X.
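A minimal sketch of these three steps in NumPy, keeping the slide's 1/√n scaling (some references use 1/√(n−1) instead); the toy data matrix is made up:

import numpy as np

def pca_via_svd(X):
    # X: m x n data matrix (m measurement types, n samples).
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)  # step 2: subtract the per-row mean
    Y = Xc.T / np.sqrt(n)                   # step 3: Y = (1/sqrt(n)) X^T
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt, s  # rows of Vt are the principal components of X

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 10))  # 3 measurement types, 10 samples
components, singular_values = pca_via_svd(X)
print(components.shape)  # (3, 3): one principal direction per row
print(singular_values)   # spread along each principal direction (1/n convention)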

45 Questions?

46 Bianca Lott Teng Li CS 144

47 Characteristics of Clustering Algorithms  Hierarchical - algorithms that find successive clusters using previously established clusters.  Hierarchical algorithms can be agglomerative (bottom-up) or divisive (top-down).  Agglomerative algorithms begin with each element as a separate cluster and merge them into larger clusters.  Divisive algorithms begin with the whole set and divide it into smaller clusters.
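A small example of the agglomerative (bottom-up) case using SciPy's hierarchical clustering on synthetic 2-D points:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic data: two loose groups of points (made-up values).
rng = np.random.default_rng(3)
points = np.vstack([rng.normal(0.0, 0.5, size=(10, 2)),
                    rng.normal(5.0, 0.5, size=(10, 2))])

# Agglomerative clustering: start with every point as its own cluster
# and repeatedly merge the closest pair (average linkage here).
Z = linkage(points, method="average")

# Cut the resulting dendrogram into 2 flat clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # first ten points in one cluster, last ten in the other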

48 Characteristics of Clustering Algorithms (Cont’d)  Partitional algorithms determine all clusters at once.  In density-based clustering, a cluster is regarded as a region in which the density of data objects exceeds a threshold. DBSCAN and OPTICS are two typical algorithms of this kind.  Two-way clustering, co-clustering, or biclustering are clustering methods in which not only the objects are clustered but also the features of the objects, i.e., if the data is represented in a data matrix, the rows and columns are clustered simultaneously.

49 Characteristics of Clustering Algorithms  Many clustering algorithms require the number of clusters to produce from the input data set to be specified prior to execution of the algorithm.

50 Types of Clustering Algorithms that Optimize Some Quantities  CLIQUE [3] fixes the minimum density of each dense unit by a user parameter and searches for clusters that maximize the number of selected attributes.  PROCLUS [1] requires a user parameter, l, to determine the number of attributes to be selected.  ORCLUS [2] is close to the PROCLUS algorithm, except that it adds a process of merging clusters and asks each cluster to select principal components instead of attributes.

51 Types of Clustering Algorithms that Optimize Some Quantities (Cont’d)  The DOC and FastDOC algorithms fix the maximum distance between attribute values by a user parameter.  In all these algorithms, the user parameter determines the attributes to be selected and the clusters to be formed.  If the wrong parameter value is used, clustering accuracy is diminished.

52 What You Want in a Projected Clustering Algorithm  One with minimal input or use of parameters, since the parameters are usually unknown.  One that is deterministic, i.e., follows one path to produce accurate results in a timely manner.  One with excellent recall values, so that the selection of relevant attributes is accurate.

53 Relevance Index  Local variance vs. global variance: low local variance alone does not imply high relevance.  Relevance index: a baseline for determining the relevance of an attribute to a cluster.  It is used in two parts of the algorithm: ○ Attribute selection - each cluster selects all attributes above a dynamic threshold ○ Similarity calculation - determines the hierarchical clustering merging order
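A hedged sketch of one plausible relevance index of this kind, assuming a "one minus local variance over global variance" form suggested by the slide; this is an assumption for illustration, not necessarily HARP's exact formula, and the attribute values are made up:

import numpy as np

def relevance_index(cluster_values, all_values):
    # Compare an attribute's variance inside a cluster (local) to its
    # variance over the whole dataset (global). Values near 1 mean the
    # attribute is tight within the cluster relative to the data overall.
    local_var = np.var(cluster_values)
    global_var = np.var(all_values)
    if global_var == 0:
        return 0.0  # the attribute carries no information at all
    return 1.0 - local_var / global_var

data = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 9.8, 10.1])
cluster = data[:3]
print(relevance_index(cluster, data))  # close to 1: a highly relevant attribute

Normalizing by the global variance is what keeps a low local variance from automatically implying high relevance: if the attribute barely varies anywhere, the ratio stays near 1 and the index stays low.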

54 Mutual Disagreement  A merge chosen using RSim is guaranteed to be of the highest quality.  Mutual disagreement (MD): the two merging clusters are not similar ○ a big cluster merges with a small one ○ a cluster with substantial attributes vs. one with few  The relative relevance of attribute a to cluster C with respect to the potential new cluster Cn gives a measure of the severity of MD, and merges with heavy MD are rejected.

55 HARP Algorithm  HARP is a hierarchical approach with automatic relevant attribute selection for projected clustering.  It does not rely on user parameters to determine the relevant attributes in a cluster.  It tries to maximize the relevance index of each selected attribute and the number of selected attributes of each cluster at the same time.  Two thresholds (Amin, Rmin) restrict the minimum number of selected attributes for each cluster and their minimum relevance index values.

56 HARP Algorithm
// d: dataset dimensionality
// |A|min: min. no. of selected attributes per cluster
// Rmin: min. relevance index of a selected attribute
Algorithm HARP(k: target no. of clusters)
Begin
1   // Initially, each record is a cluster
2   For step := 0 to d-1 do {
3       |A|min := d - step
4       Rmin := 1 - step/(d-1)
5       For each cluster C
6           SelectAttrs(C, Rmin)
7       BuildScoreHeaps(|A|min, Rmin)
8       While global score heap is not empty {
9           // Cb1 and Cb2 are the clusters involved in the
10          // best merge, which forms the new cluster Cn
11          Cn := Cb1 U Cb2
12          SelectAttrs(Cn, Rmin)
13          Update score heaps
14          If clusters remaining = k
15              Goto 18
16      }
17  }
18  Output result
End

57 Attribute Selection Procedure

58 Experiments  Metrics  Synthetic datasets  Real datasets ○ Dataset used in studying large B-cell lymphoma: 96 samples, 4026 expression values each. The expression values of the genes are the attributes, and the samples have been categorized into 9 classes.

59 Metrics - Continued  Similarity functions: for the non-projected algorithms ○ Euclidean distance ○ Pearson correlation; for the two hierarchical algorithms ○ RSim  Performance metric: Adjusted Rand Index ○ 1 for a 100% match and 0 for a random partition
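As a quick reference for how the Adjusted Rand Index behaves, here is a small check using scikit-learn's adjusted_rand_score (the label vectors are made up):

from sklearn.metrics import adjusted_rand_score

true_labels      = [0, 0, 0, 1, 1, 1, 2, 2, 2]  # made-up ground-truth classes
perfect_clusters = [1, 1, 1, 0, 0, 0, 2, 2, 2]  # same partition, different names
random_clusters  = [0, 1, 2, 0, 1, 2, 0, 1, 2]  # unrelated to the truth

print(adjusted_rand_score(true_labels, perfect_clusters))  # 1.0: perfect match
print(adjusted_rand_score(true_labels, random_clusters))   # at or below 0: no better than chance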

60 Performance Results

61 Analyzing Real Data  PROCLUS and HARP show the best performance.  For PROCLUS, the average is very low – 0.27.

62 HARP Algorithm Analysis  Worst-case time complexity: O(N² · d · (d + log N)) – a loose upper bound.  No repeated runs are needed.

63 Summary – Comparisons of Projected Clustering Algorithms  The HARP algorithm was the best out of all the algorithms compared (PROCLUS, ORCLUS, FastDOC).  The other algorithms produced sufficient results only when the correct parameters were used.  HARP produced accurate results in a single run.  HARP had excellent recall values, so it could accurately select relevant attributes.

64 QUESTIONS?

