1 Design & Analysis of Microarray Studies for Diagnostic & Prognostic Classification Richard Simon, D.Sc. Chief, Biometric Research Branch National Cancer Institute http://linus.nci.nih.gov/brb

2 Available at http://linus.nci.nih.gov/brb: –PowerPoint presentations –Reprints & Technical Reports –BRB-ArrayTools software –BRB-ArrayTools Data Archive –Sample Size Planning for Targeted Clinical Trials

3 Simon R, Korn E, McShane L, Radmacher M, Wright G, Zhao Y. Design and Analysis of DNA Microarray Investigations. Springer-Verlag, 2003. Radmacher MD, McShane LM, Simon R. A paradigm for class prediction using gene expression profiles. Journal of Computational Biology 9:505-511, 2002. Simon R, Radmacher MD, Dobbin K, McShane LM. Pitfalls in the analysis of DNA microarray data. Journal of the National Cancer Institute 95:14-18, 2003. Dobbin K, Simon R. Comparison of microarray designs for class comparison and class discovery. Bioinformatics 18:1462-69, 2002; 19:803-810, 2003; 21:2430-37, 2005; 21:2803-4, 2005. Dobbin K, Simon R. Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics 6:27-38, 2005. Dobbin K, Shih J, Simon R. Questions and answers on design of dual-label microarrays for identifying differentially expressed genes. Journal of the National Cancer Institute 95:1362-69, 2003. Wright G, Simon R. A random variance model for detection of differential gene expression in small microarray experiments. Bioinformatics 19:2448-55, 2003. Korn EL, Troendle JF, McShane LM, Simon R. Controlling the number of false discoveries. Journal of Statistical Planning and Inference 124:379-398, 2004. Molinaro A, Simon R, Pfeiffer R. Prediction error estimation: a comparison of resampling methods. Bioinformatics 21:3301-7, 2005.

4 Simon R. Using DNA microarrays for diagnostic and prognostic prediction. Expert Review of Molecular Diagnostics 3(5):587-595, 2003. Simon R. Diagnostic and prognostic prediction using gene expression profiles in high dimensional microarray data. British Journal of Cancer 89:1599-1604, 2003. Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004. Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005. Simon R. When is a genomic classifier ready for prime time? Nature Clinical Practice – Oncology 1:4-5, 2004. Simon R. An agenda for clinical trials: clinical trials in the genomic era. Clinical Trials 1:468-470, 2004. Simon R. Development and validation of therapeutically relevant multi-gene biomarker classifiers. Journal of the National Cancer Institute 97:866-867, 2005. Simon R. A roadmap for developing and validating therapeutically relevant genomic classifiers. Journal of Clinical Oncology (In Press). Freidlin B and Simon R. Adaptive signature design. Clinical Cancer Research (In Press). Simon R. Validation of pharmacogenomic biomarker classifiers for treatment selection. Disease Markers (In Press). Simon R. Guidelines for the design of clinical studies for development and validation of therapeutically relevant biomarkers and biomarker classification systems. In: Biomarkers in Breast Cancer, Hayes DF and Gasparini G (eds), Humana Press (In Press).

5 Myth: That microarray investigations should be unstructured data-mining adventures without clear objectives.

6 Good microarray studies have clear objectives, but not generally gene-specific mechanistic hypotheses. Design and analysis methods should be tailored to the study objectives.

7 Good Microarray Studies Have Clear Objectives Class Comparison –Find genes whose expression differs among predetermined classes Class Prediction –Prediction of predetermined class (phenotype) using information from gene expression profile Class Discovery –Discover clusters of specimens having similar expression profiles –Discover clusters of genes having similar expression profiles

8 Class Comparison and Class Prediction Not clustering problems –Global similarity measures generally used for clustering arrays may not distinguish classes –Clustering does not control multiplicity or distinguish the data used for classifier development from the data used for classifier evaluation Supervised methods that require multiple biological samples from each class

9 Levels of Replication Technical replicates –RNA sample divided into multiple aliquots and re-arrayed Biological replicates –Multiple subjects –Replication of the tissue culture experiment

10 Biological conclusions generally require independent biological replicates. The power of statistical methods for microarray data depends on the number of biological replicates. Technical replicates are useful insurance to ensure that at least one good quality array of each specimen will be obtained.

11 Class Prediction Predict which tumors will respond to a particular treatment Predict which patients will relapse after a particular treatment

12 Class prediction methods usually have gene selection as a component –The criteria for gene selection for class prediction and for class comparison are different –For class comparison, the false discovery rate is important –For class prediction, predictive accuracy is important

13 Clarity of Objectives is Important Patient selection –Many microarray studies developing classifiers are not “therapeutically relevant” Analysis methods –Many microarray studies use cluster analysis inappropriately or misleadingly

14 Microarray Platforms for Developing Predictive Classifiers Single label arrays –Affymetrix GeneChips Dual label arrays using common reference design –Dye swaps are unnecessary

15 Common Reference Design

        Array 1  Array 2  Array 3  Array 4
RED     A1       A2       B1       B2
GREEN   R        R        R        R

Ai = ith specimen from class A; Bi = ith specimen from class B; R = aliquot from reference pool

16 The reference generally serves to control variation in the size of corresponding spots on different arrays and variation in sample distribution over the slide. The reference provides a relative measure of expression for a given gene in a given sample that is less variable than an absolute measure. The reference is not the object of comparison. The relative measure of expression will be compared among biologically independent samples from different classes.

18 Myth: For two-color microarrays, each sample of interest should be labeled once with Cy3 and once with Cy5 in dye-swap pairs of arrays.

19 Dye Bias Average differences among dyes in label concentration, labeling efficiency, photon emission efficiency, and photon detection are corrected by normalization procedures Gene-specific dye bias may not be corrected by normalization

20 Dye-swap technical replicates of the same two RNA samples are rarely necessary. Using a common reference design, dye-swap arrays are not necessary for valid comparisons of classes since specimens labeled with different dyes are never compared. For two-label direct comparison designs for comparing two classes, it is more efficient to balance the dye-class assignments for independent biological specimens than to do dye-swap technical replicates.

21 Balanced Block Design

        Array 1  Array 2  Array 3  Array 4
RED     A1       B2       A3       B4
GREEN   B1       A2       B3       A4

Ai = ith specimen from class A; Bi = ith specimen from class B

22 Detailed comparisons of the effectiveness of designs: –Dobbin K, Simon R. Comparison of microarray designs for class comparison and class discovery. Bioinformatics 18:1462-9, 2002 –Dobbin K, Shih J, Simon R. Statistical design of reverse dye microarrays. Bioinformatics 19:803-10, 2003 –Dobbin K, Simon R. Questions and answers on the design of dual-label microarrays for identifying differentially expressed genes, JNCI 95:1362-1369, 2003

23 Common reference designs are very effective for many microarray studies. They are robust, permit comparisons among separate experiments, and permit many types of comparisons and analyses to be performed. For simple two-class comparison problems, balanced block designs require many fewer arrays than common reference designs. –Their efficiency advantage decreases for more than two classes –They are more difficult to apply to more complicated class comparison problems –They are not appropriate for class discovery or class prediction Loop designs are less robust, are dominated by either common reference designs or balanced block designs, and are not suitable for class prediction or class discovery.

24 What We Will Not Discuss Today Image analysis Normalization Clustering methods Class comparison –SAM and other methods of gene finding –FDR (false discovery rate) and methods for controlling the number of false positive genes

25 Class Prediction Model Given a sample with an expression profile vector x of log-ratios or log signals and unknown class, predict which class the sample belongs to. The class prediction model is a function f that maps from the set of vectors x to the set of class labels {1,2} (if there are two classes). f generally utilizes only some of the components of x (i.e., only some of the genes). Specifying the model f involves specifying some parameters (e.g., regression coefficients) by fitting the model to the training data.
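
A minimal sketch of such a model f in code (illustrative only; the gene indices, weights, and threshold are hypothetical, and a real classifier would fit them to training data):

```python
import numpy as np

def make_classifier(selected_genes, weights, threshold):
    """Return a class prediction model f mapping an expression
    profile vector x to a class label in {1, 2}, using only the
    selected components (genes) of x."""
    def f(x):
        score = np.dot(weights, x[selected_genes])  # linear score
        return 1 if score > threshold else 2
    return f

# Hypothetical classifier built on 3 of 6000 genes
f = make_classifier(selected_genes=[5, 42, 977],
                    weights=np.array([1.2, -0.8, 0.5]),
                    threshold=0.0)
x = np.random.randn(6000)  # an expression profile of log-ratios
print(f(x))                # predicted class label: 1 or 2
```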

26 Problems With Many Diagnostic/Prognostic Marker Studies Are not reproducible –Retrospective non-focused analysis –Multiplicity problems –Inter-laboratory assay variation Have no impact –Not therapeutically relevant questions –Not therapeutically relevant group of patients –Black box predictors

27 Components of Class Prediction Feature (gene) selection –Which genes will be included in the model Select model type –E.g. Diagonal linear discriminant analysis, Nearest-Neighbor, … Fitting parameters (regression coefficients) for model –Selecting value of tuning parameters

28 Do Not Confuse Statistical Methods Appropriate for Class Comparison with Those Appropriate for Class Prediction Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy. Demonstrating goodness of fit of a model to the data used to develop it is not a demonstration of predictive accuracy. Statisticians are accustomed to inference, not prediction. Most statistical methods were not developed for p >> n prediction problems.

29 Feature Selection Genes that are differentially expressed among the classes at a significance level α (e.g. 0.01) –The α level is selected only to control the number of genes in the model

30 t-test Comparisons of Gene Expression For gene j: $x_j \sim N(\mu_{j1}, \sigma_j^2)$ for class 1, $x_j \sim N(\mu_{j2}, \sigma_j^2)$ for class 2, testing $H_{0j}: \mu_{j1} = \mu_{j2}$
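
A sketch of this selection step (illustrative; assumes a genes-by-samples matrix per class and uses an ordinary two-sample t-test per gene):

```python
import numpy as np
from scipy import stats

def select_genes(X1, X2, alpha=0.01):
    """X1, X2: genes x samples matrices for classes 1 and 2.
    Returns indices of genes significant at level alpha; alpha is
    chosen only to control the number of genes in the model."""
    t_stat, p_val = stats.ttest_ind(X1, X2, axis=1)
    return np.where(p_val < alpha)[0]

rng = np.random.default_rng(0)
X1 = rng.normal(size=(6000, 10))  # 10 class-1 samples, 6000 genes
X2 = rng.normal(size=(6000, 12))  # 12 class-2 samples
print(select_genes(X1, X2).size)  # ~60 expected by chance under the null
```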

31 Estimation of Within-Class Variance Estimate separately for each gene –Limited degrees of freedom –Gene list dominated by genes with small fold changes and small variances Assume all genes have the same variance –Poor assumption Random (hierarchical) variance model –Wright GW and Simon R, Bioinformatics 19:2448-2455, 2003 –Inverse gamma distribution of residual variances –Results in exact F (or t) distribution of test statistics with increased degrees of freedom for error variance –Applies to any normal linear model

32 Feature Selection Small subset of genes which together give most accurate predictions –Combinatorial optimization algorithms Genetic algorithms Little evidence that complex feature selection is useful in microarray problems –Failure to compare to simpler methods –Some published complex methods for selecting combinations of features do not appear to have been properly evaluated

33 Linear Classifiers for Two Classes
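
In standard notation, a linear classifier for two classes computes a weighted sum of the selected genes and compares it to a threshold:

$$
f(x) = \begin{cases} 1, & \sum_i w_i x_i > c \\ 2, & \text{otherwise,} \end{cases}
$$

and the methods on the next slide differ only in how the weights $w_i$ are chosen.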

34 Fisher linear discriminant analysis –Requires estimating correlations among all genes selected for model –y = vector of class labels Diagonal linear discriminant analysis (DLDA) assumes features are uncorrelated Naïve Bayes classifier Compound covariate predictor (Radmacher) and Golub’s method are similar to DLDA in that they can be viewed as weighted voting of univariate classifiers

35 Linear Classifiers for Two Classes Compound covariate predictor: weight each selected gene by its t statistic, $w_i = t_i$, instead of the DLDA weight $w_i = (\bar{x}_{i1} - \bar{x}_{i2}) / \hat{\sigma}_i^2$

36 Linear Classifiers for Two Classes Support vector machines with inner product kernel are linear classifiers with weights determined to separate the classes with a hyperplane that minimizes the length of the weight vector
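
In standard notation (not taken from the slide), the hard-margin linear SVM is

$$
\min_{w,\,b}\ \tfrac{1}{2}\lVert w\rVert^2 \quad \text{subject to} \quad y_i \,(w^\top x_i + b) \ge 1 \ \text{for all training cases } i,
$$

so minimizing the length of the weight vector maximizes the margin $2/\lVert w\rVert$ between the two classes.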

37 Support Vector Machine

38 Perceptrons Perceptrons are neural networks with no hidden layer and linear transfer functions between input and output –Number of input nodes equals number of genes selected –Number of output nodes equals number of classes minus 1 –Inputs may be major principal components of all genes or of informative genes Perceptrons are linear classifiers

39 Naïve Bayes Classifier Expression profiles for class j assumed normal with mean vector $m_j$ and diagonal covariance matrix D Likelihood of expression profile vector x is $l(x; m_j, D)$ Posterior probability of class j for a case with expression profile vector x is proportional to $\pi_j \, l(x; m_j, D)$
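
Written out in standard notation (with $\pi_j$ the prior probability of class $j$, $d_i$ the $i$th diagonal element of $D$, and $\varphi$ the univariate normal density):

$$
\Pr(\text{class } j \mid x) \;\propto\; \pi_j \prod_i \varphi(x_i;\, m_{ji},\, d_i).
$$

The diagonal covariance matrix means the genes are treated as conditionally independent given the class.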

40 Compound Covariate Bayes Classifier Compound covariate $y = \sum_i t_i x_i$ –Sum over the genes selected as differentially expressed –$x_i$ = expression level of the ith selected gene for the case whose class is to be predicted –$t_i$ = t statistic for testing differential expression of the ith gene Proceed as for the naïve Bayes classifier but using the single compound covariate as the predictive variable –GW Wright et al., PNAS 2005
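
A minimal sketch of the compound covariate computation (illustrative code, not the BRB-ArrayTools implementation; gene selection and the t weights come from the training data only):

```python
import numpy as np
from scipy import stats

def compound_covariate(X1, X2, x, alpha=0.001):
    """y = sum of t_i * x_i over genes selected as differentially
    expressed at level alpha. X1, X2: genes x samples training
    matrices; x: profile of the case whose class is predicted."""
    t_stat, p_val = stats.ttest_ind(X1, X2, axis=1)
    sel = p_val < alpha
    return np.sum(t_stat[sel] * x[sel])

rng = np.random.default_rng(1)
X1 = rng.normal(0.5, 1.0, size=(6000, 8))  # class 1 training samples
X2 = rng.normal(0.0, 1.0, size=(6000, 8))  # class 2 training samples
x_new = rng.normal(0.5, 1.0, size=6000)    # case to be classified
print(compound_covariate(X1, X2, x_new))   # compared to a class cutoff
```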

41 When p>>n The Linear Model is Too Complex It is always possible to find a set of features and a weight vector for which the classification error on the training set is zero. Why consider more complex models?

42 Myth: Complex classification algorithms such as neural networks perform better than simpler methods for class prediction.

43 Artificial intelligence sells to journal reviewers and peers who cannot distinguish hype from substance when it comes to microarray data analysis. Comparative studies have shown that simpler methods work as well or better for microarray problems because they avoid overfitting the data.

44 Other Simple Methods Nearest neighbor classification Nearest k-neighbors Nearest centroid classification Shrunken centroid classification

45 Nearest Neighbor Classifier To classify a sample in the validation set as being in outcome class 1 or outcome class 2, determine which sample in the training set its gene expression profile is most similar to –The similarity measure used is based on genes selected as being univariately differentially expressed between the classes –Correlation similarity or Euclidean distance is generally used Classify the sample as being in the same class as its nearest neighbor in the training set

46 Other Methods Neural networks Top-scoring pairs CART Random Forest Genetic algorithm based classification

47 Apparent Dimension Reduction Based Methods Principal component regression Supervised principal component regression Partial least squares Stepwise logistic regression

48 When There Are More Than 2 Classes Nearest neighbor type methods Decision tree of binary classifiers

49 Decision Tree of Binary Classifiers Partition the set of classes {1,2,…,K} into two disjoint subsets S1 and S2 Develop a binary classifier for distinguishing the composite classes S1 and S2 Compute the cross-validated classification error for distinguishing S1 and S2 Repeat the above steps for all possible partitions to find the partition S1, S2 for which the cross-validated classification error is minimized If S1 and S2 are not singleton sets, repeat all of the above steps separately for the classes in S1 and S2 to optimally partition each of them
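
A sketch of the top-level partition search (illustrative; `cv_error` stands in for whatever cross-validated error estimate is used for the binary classifier, and non-singleton subsets would be partitioned recursively in the same way):

```python
from itertools import combinations

def best_partition(classes, cv_error):
    """Split `classes` into disjoint subsets S1, S2 minimizing the
    cross-validated error cv_error(S1, S2) of a binary classifier
    for the two composite classes."""
    classes = tuple(classes)
    best = None
    # Fixing classes[0] in S1 enumerates each partition exactly once
    for r in range(0, len(classes) - 1):
        for rest in combinations(classes[1:], r):
            S1 = frozenset((classes[0],) + rest)
            S2 = frozenset(classes) - S1
            err = cv_error(S1, S2)
            if best is None or err < best[0]:
                best = (err, S1, S2)
    return best  # (error, S1, S2)

# Toy usage with a made-up error function favoring balanced splits:
print(best_partition({1, 2, 3, 4}, lambda a, b: abs(len(a) - len(b))))
```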

51 Evaluating a Classifier “Prediction is difficult, especially the future.” –Niels Bohr Fit of a model to the same data used to develop it is no evidence of prediction accuracy for independent data.

52 Evaluating a Classifier Fit of a model to the same data used to develop it is no evidence of prediction accuracy for independent data –Goodness of fit vs prediction accuracy Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy Demonstrating stability of identification of gene predictors is not necessary for demonstrating predictive accuracy

53 Evaluating a Classifier The classification algorithm includes the following parts: –Determining what type of classifier to use –Gene selection –Fitting parameters –Optimizing with regard to tuning parameters If a re-sampling method such as cross-validation is to be used to estimate predictive error of a classifier, all aspects of the classification algorithm must be repeated for each training set and the accuracy of the resulting classifier scored on the corresponding validation set

54 Split-Sample Evaluation Training-set –Used to select features, select model type, determine parameters and cut-off thresholds Test-set –Withheld until a single model is fully specified using the training-set. –Fully specified model is applied to the expression profiles in the test-set to predict class labels. –Number of errors is counted –Ideally test set data is from different centers than the training data and assayed at a different time

55 Leave-one-out Cross Validation Omit sample 1 –Develop multivariate classifier from scratch on training set with sample 1 omitted –Predict class for sample 1 and record whether prediction is correct

56 Leave-one-out Cross Validation Repeat analysis for training sets with each single sample omitted one at a time e = number of misclassifications determined by cross-validation Subdivide e for estimation of sensitivity and specificity

57 Cross-validation is only valid if the test set is not used in any way in the development of the model. Using the complete set of samples to select genes violates this assumption and invalidates cross-validation. With proper cross-validation, the model must be developed from scratch for each leave-one-out training set. This means that feature selection must be repeated for each leave-one-out training set. The cross-validated estimate of misclassification error is an estimate of the prediction error of the model obtained by applying the specified algorithm to the full dataset. If you use cross-validation estimates of prediction error for a set of algorithms indexed by a tuning parameter and select the algorithm with the smallest CV error estimate, you do not have a valid estimate of the prediction error for the selected model.
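
A sketch of LOOCV done correctly, with gene selection repeated inside every leave-one-out loop (illustrative; the classifier here is a nearest-centroid rule on the selected genes, chosen for brevity):

```python
import numpy as np
from scipy import stats

def loocv_error(X, y, alpha=0.001):
    """X: genes x samples matrix; y: labels in {1, 2}. The left-out
    case is never used for gene selection or model fitting."""
    n = X.shape[1]
    errors = 0
    for i in range(n):
        train = np.delete(np.arange(n), i)
        Xt, yt = X[:, train], y[train]
        # Feature selection from the training set ONLY
        t_stat, p_val = stats.ttest_ind(Xt[:, yt == 1], Xt[:, yt == 2], axis=1)
        sel = p_val < alpha
        if not sel.any():
            sel = p_val <= p_val.min()  # fall back to the single best gene
        # Nearest-centroid classifier on the selected genes
        c1 = Xt[np.ix_(sel, yt == 1)].mean(axis=1)
        c2 = Xt[np.ix_(sel, yt == 2)].mean(axis=1)
        x = X[sel, i]
        pred = 1 if np.sum((x - c1) ** 2) < np.sum((x - c2) ** 2) else 2
        errors += int(pred != y[i])
    return errors / n

# Null data as on the next slide: no true class difference
rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 14))
y = np.array([1] * 7 + [2] * 7)
print(loocv_error(X, y))  # should hover around 0.5, not 0
```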

58 Prediction on Simulated Null Data Generation of Gene Expression Profiles 14 specimens ($P_i$ is the expression profile for specimen i) Log-ratio measurements on 6000 genes $P_i \sim MVN(0, I_{6000})$ Can we distinguish between the first 7 specimens (Class 1) and the last 7 (Class 2)? Prediction Method Compound covariate prediction (discussed above) Compound covariate built from the log-ratios of the 10 most differentially expressed genes.

60 Invalid Criticisms of Cross-Validation “You can always find a set of features that will provide perfect prediction for the training and test sets.” –For complex models, there may be many sets of features that provide zero training errors. –A modeling strategy that either selects among those sets or aggregates among those models will have a generalization error that is validly estimated by cross-validation.

63 Simulated Data: 40 cases, 10 genes selected from 5000

Method            Estimate  Std Deviation
True              .078      —
Resubstitution    .007      .016
LOOCV             .092      .115
10-fold CV        .118      .120
5-fold CV         .161      .127
Split sample 1-1  .345      .185
Split sample 2-1  .205      .184
.632+ bootstrap   .274      .084

64 DLBCL Data

Method           Bias   Std Deviation  MSE
LOOCV            -.019  .072           .008
10-fold CV       -.007  .063           .006
5-fold CV        .004   .07            .007
Split 1-1        .037   .117           .018
Split 2-1        .001   .119           .017
.632+ bootstrap  -.006  .049           .004

65 Simulated Data: 40 cases

Method              Estimate  Std Deviation
True                .078      —
10-fold             .118      .120
Repeated 10-fold    .116      .109
5-fold              .161      .127
Repeated 5-fold     .159      .114
Split 1-1           .345      .185
Repeated split 1-1  .371      .065

66 Permutation Distribution of Cross-Validated Misclassification Rate of a Multivariate Classifier Randomly permute the class labels and repeat the entire cross-validation Re-do for all (or 1000) random permutations of the class labels The permutation p value is the fraction of random permutations that gave as few misclassifications as e in the real data
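
A sketch of the permutation test (illustrative; it reuses the `loocv_error` function from the cross-validation sketch above):

```python
import numpy as np

def permutation_p_value(X, y, loocv_error, n_perm=1000, seed=0):
    """Fraction of random label permutations whose fully
    cross-validated error is as small as that of the real labels."""
    rng = np.random.default_rng(seed)
    e_real = loocv_error(X, y)
    as_good = sum(
        loocv_error(X, rng.permutation(y)) <= e_real
        for _ in range(n_perm)
    )
    return as_good / n_perm
```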

67 Gene-Expression Profiles in Hereditary Breast Cancer Breast tumors studied: 7 BRCA1+ tumors, 8 BRCA2+ tumors, 7 sporadic tumors Log-ratio measurements of 3226 genes for each tumor after initial data filtering cDNA Microarrays Parallel Gene Expression Analysis RESEARCH QUESTION: Can we distinguish BRCA1+ from BRCA1– cancers and BRCA2+ from BRCA2– cancers based solely on their gene expression profiles?

68 BRCA1

69 BRCA2

70 Classification of BRCA2 Germline Mutations

Classification Method                   LOOCV Prediction Error
Compound Covariate Predictor            14%
Fisher LDA                              36%
Diagonal LDA                            14%
1-Nearest Neighbor                      9%
3-Nearest Neighbor                      23%
Support Vector Machine (linear kernel)  18%
Classification Tree                     45%

71 Common Problems With Internal Classifier Validation Pre-selection of genes using entire dataset Failure to consider optimization of tuning parameter part of classification algorithm –Varma & Simon, BMC Bioinformatics 2006 Erroneous use of predicted class in regression model

72 Incomplete (Incorrect) Cross-Validation Publications are using all the data to select genes and then cross-validating only the parameter-estimation component of model development –Highly biased –Many published complex methods make strong claims based on incorrect cross-validation –Frequently seen in complex feature set selection algorithms –Some software encourages inappropriate cross-validation

73 Incomplete (Incorrect) Cross-Validation Let M(b,D) denote a classification model developed on a set of data D, where the model is of a particular type parameterized by a scalar b. Use cross-validation to estimate the classification error of M(b,D) for a grid of values of b: Err(b). Select the value b* that minimizes Err(b). Caution: Err(b*) is a biased estimate of the prediction error of M(b*,D). This error is made in some commonly used methods.

74 Complete (Correct) Cross-Validation Construct a learning set D as a subset of the full set S of cases. Use cross-validation restricted to D to estimate the classification error of M(b,D) for a grid of values of b: Err(b). Select the value b* that minimizes Err(b). Use the model M(b*,D) to predict for the cases in S but not in D (S−D) and compute the error rate in S−D. Repeat this full procedure for different learning sets D1, D2, … and average the error rates of the models M(bi*, Di) over the corresponding validation sets S−Di.
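
A sketch of the complete procedure (illustrative; `inner_cv_error`, `fit`, and `score` are placeholders for the inner cross-validation, model fitting, and per-case error scoring of whatever classifier family is being tuned):

```python
import numpy as np

def complete_cv_error(n_cases, b_grid, inner_cv_error, fit, score,
                      k=5, seed=0):
    """Nested cross-validation: b* is selected by inner CV restricted
    to each learning set D, and M(b*, D) is scored only on the
    held-out cases S - D, so tuning does not bias the estimate.
    inner_cv_error(b, D): CV error of model type b on case indices D
    fit(b, D):            fitted model M(b, D)
    score(M, i):          0/1 error of model M on case i"""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_cases), k)
    fold_errors = []
    for held_out in folds:
        D = np.setdiff1d(np.arange(n_cases), held_out)
        b_star = min(b_grid, key=lambda b: inner_cv_error(b, D))
        M = fit(b_star, D)
        fold_errors.append(np.mean([score(M, i) for i in held_out]))
    return float(np.mean(fold_errors))
```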

75 Does an Expression Profile Classifier Predict More Accurately Than Standard Prognostic Variables? Not an issue of which variables are significant after adjusting for which others or which are independent predictors –Predictive accuracy and inference are different The two classifiers can be compared by ROC analysis as functions of the threshold for classification The predictiveness of the expression profile classifier can be evaluated within levels of the classifier based on standard prognostic variables

76 Does an Expression Profile Classifier Predict More Accurately Than Standard Prognostic Variables? Some publications fit a logistic model to standard covariates and the cross-validated predictions of expression profile classifiers. This is valid only with split-sample analysis because the cross-validated predictions are not independent.

77 External Validation Should address clinical utility, not just predictive accuracy –Therapeutic relevance Should incorporate all sources of variability likely to be seen in broad clinical application –Expression profile assay distributed over time and space –Real world tissue handling –Patients selected from different centers than those used for developing the classifier

78 Survival Risk Group Prediction Evaluate individual genes by fitting single-variable proportional hazards regression models to the log signal or log ratio for each gene Select genes based on a p-value threshold for the single-gene PH regressions Compute the first k principal components of the selected genes Fit a PH regression model with the k PCs as predictors; let $b_1, \ldots, b_k$ denote the estimated regression coefficients To predict for a case with expression profile vector x, compute the k supervised PCs $y_1, \ldots, y_k$ and the predictive index $b_1 y_1 + \cdots + b_k y_k$
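
A sketch of the predictive-index construction (illustrative; the per-gene screen and the k-component PH fit use the lifelines package here, which is an assumption about tooling rather than the BRB-ArrayTools implementation):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def supervised_pc_index(X, time, event, alpha=0.001, k=2):
    """X: genes x samples log signals; time, event: survival data.
    Returns the predictive index b1*y1 + ... + bk*yk per sample."""
    # Steps 1-2: single-gene PH regressions, keep genes with p < alpha
    pvals = np.ones(X.shape[0])
    for g in range(X.shape[0]):
        df = pd.DataFrame({"x": X[g], "time": time, "event": event})
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        pvals[g] = cph.summary.loc["x", "p"]
    sel = pvals < alpha
    # Step 3: first k principal components of the selected genes
    Z = X[sel].T                       # samples x selected genes
    Z = Z - Z.mean(axis=0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    pcs = U[:, :k] * s[:k]             # supervised PCs y_1 ... y_k
    # Step 4: PH regression on the k PCs gives coefficients b_1 ... b_k
    df = pd.DataFrame(pcs, columns=[f"pc{j+1}" for j in range(k)])
    df["time"], df["event"] = time, event
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    b = cph.params_[[f"pc{j+1}" for j in range(k)]].to_numpy()
    # Step 5: predictive index for each sample
    return pcs @ b
```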

79 Survival Risk Group Prediction LOOCV loop: –Create a training set by omitting the ith case –Develop the supervised PC PH model for the training set –Compute the cross-validated predictive index for the ith case using the model developed on the training set –Compute the predictive risk percentile of the ith case's predictive index among the predictive indices for the cases in the training set

80 Survival Risk Group Prediction Plot Kaplan-Meier survival curves for cases with cross-validated risk percentiles above 50% and for cases with cross-validated risk percentiles below 50% –Or for however many risk groups and thresholds are desired Compute the log-rank statistic comparing the cross-validated Kaplan-Meier curves

81 Survival Risk Group Prediction Repeat the entire procedure for all (or a large number) of permutations of the survival times and censoring indicators to generate the null distribution of the log-rank statistic –The usual chi-square null distribution is not valid because the cross-validated risk percentiles are correlated among cases Evaluate the statistical significance of the association of survival and expression profiles by referring the log-rank statistic for the unpermuted data to the permutation null distribution

82 Survival Risk Group Prediction Other approaches to survival risk group prediction have been published The supervised pc method is implemented in BRB-ArrayTools BRB-ArrayTools also provides for comparing the risk group classifier based on expression profiles to one based on standard covariates and one based on a combination of both types of variables

83 Sample Size Planning References K Dobbin, R Simon. Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics 6:27-38, 2005 K Dobbin, R Simon. Sample size planning for developing classifiers using high dimensional DNA microarray data. Biostatistics (In Press)

84 Sample Size Planning GOAL: Identify genes differentially expressed in a comparison of two pre-defined classes of specimens on dual-label arrays using a reference design or on single-label arrays Compare classes separately by gene, with adjustment for multiple comparisons Approximate expression levels (log ratio or log signal) as normally distributed Determine the number of samples n/2 per class to give power $1-\beta$ for detecting mean difference $\delta$ at significance level $\alpha$

85 Comparing two equal-size classes: $n = 4\sigma^2 (z_{\alpha/2} + z_\beta)^2 / \delta^2$ where δ = mean log-ratio difference between classes, σ = within-class standard deviation of biological replicates, and $z_{\alpha/2}, z_\beta$ = standard normal percentiles. Choose α small, e.g. α = 0.001. Use percentiles of the t distribution for improved accuracy.
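
As a quick check of the formula against the next slide's human-tissue row (a back-of-the-envelope calculation, reading n as the total for two equal classes):

```python
from scipy.stats import norm

alpha, beta = 0.001, 0.05
delta, sigma = 1.0, 0.5   # 2-fold difference on the log2 scale, human tissue
z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
n_total = 4 * sigma**2 * z**2 / delta**2
print(n_total, n_total / 2)  # ~24.4 total, ~12.2 -> 13 per class after
                             # rounding up and the t-distribution correction
```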

86 Total Number of Samples for Two-Class Comparison (α = 0.001, β = 0.05, δ = 1, i.e. 2-fold)

σ                       Samples Per Class
0.5  (human tissue)     13
0.25 (transgenic mice)  6 (t approximation)

87 Dual Label Arrays With Reference Design Pools of k Biological Samples

88 m = number of technical reps per sample; k = number of samples per pool; n = total number of arrays; δ = mean difference between classes in log signal; σ² = biological variance within class; τ² = technical variance; α = significance level, e.g. 0.001; 1−β = power; z = normal percentiles (use t percentiles for better accuracy)

89 α = 0.001, β = 0.05, δ = 1, σ² + 2τ² = 0.25, σ²/τ² = 4

Samples pooled per array (k)  Arrays required  Samples required
1                             25               25
2                             17               34
3                             14               42
4                             13               52

90 α = 0.001, β = 0.05, δ = 1, σ² + 2τ² = 0.25, σ²/τ² = 4

Technical reps per sample (m)  Arrays required  Samples required
1                              25               25
2                              42               21
3                              60               20
4                              76               19

91 Number of Events Needed to Detect Gene-Specific Effects on Survival σ = standard deviation of log2 ratios for each gene; λ = hazard ratio (>1) corresponding to a 2-fold change in gene expression; α = 1/N for 1 expected false positive gene identified per N genes examined; β = 0.05 for a 5% false negative rate

92 Number of Events Required to Detect Gene-Specific Effects on Survival (α = 0.001, β = 0.05)

Hazard Ratio  σ    Events Required
2             0.5  26
1.5           0.5  76

93 Sample Size Planning for Classifier Development The expected value (over training sets) of the probability of correct classification, PCC(n), should be within γ of the maximum achievable PCC(∞)

94 Probability Model Two classes Log expression or log ratio MVN in each class with common covariance matrix m differentially expressed genes, p−m noise genes Expression of differentially expressed genes is independent of expression of noise genes All differentially expressed genes have the same inter-class mean difference 2δ Common variance for differentially expressed genes and for noise genes

95 Classifier Feature selection based on univariate t-tests for differential expression at significance level α Simple linear classifier with equal weights (except for sign) for all selected genes Power for selecting each of the informative genes that are differentially expressed by mean difference 2δ is 1−β(n)

96 For 2 classes of equal prevalence, let $\lambda_1$ denote the largest eigenvalue of the covariance matrix of the informative genes; the expected probability of correct classification PCC(n) can then be bounded in terms of $\lambda_1$, the effect size 2δ, and the power of the gene selection.

98 Sample size as a function of effect size (log-base-2 fold change between classes divided by standard deviation). Two different tolerances shown. Each class is equally represented in the population. 22,000 genes on an array.

100 Optimal significance level cutoffs for gene selection (50 differentially expressed genes out of 22,000 genes on the microarrays)

2δ/σ  n=10   n=30     n=50
1     0.167  0.003    0.00068
1.25  0.085  0.0011   0.00035
1.5   0.045  0.00063  0.00016
1.75  0.026  0.00036  0.00006
2     0.015  0.0002   0.00002

101 γ = 0.05, p = 22,000 genes, gene standard deviation σ = 0.75

Distance 2δ  Fold change  Sample size n  PCC(n), m=1  PCCworst(n), m=1, pmin=0.25  PCC(n), m=5  PCCworst(n), m=5, pmin=0.25  Adjusted sample size
0.75         1.68         106            0.64         0.34                         0.82         0.70                         212
1.125        2.18         58             0.72         0.45                         0.93         0.87                         116
1.5          2.83         38             0.80         0.52                         0.98         0.95                         76
1.875        3.67         28             0.85         0.63                         0.99         0.96                         56

102 a) Power example with 60 samples, effect size 2δ/σ = 1/0.71 for differentially expressed genes, and α = 0.001 cutoff for gene selection. As the proportion in the under-represented class gets smaller, the power to identify differentially expressed genes decreases.

103 b) PCC(60) as a function of the proportion in the under-represented class. Parameter settings same as a), with 10 differentially expressed genes among 22,000 total genes. If the proportion in the under-represented class is small (e.g., <20%), then PCC(60) can decline significantly.

109 BRB-ArrayTools Integrated software package using an Excel-based user interface but state-of-the-art analysis methods programmed in R, Java & Fortran Publicly available for non-commercial use http://linus.nci.nih.gov/brb

110 Selected Features of BRB-ArrayTools Multivariate permutation tests for class comparison to control the number and proportion of false discoveries with a specified confidence level –Permits blocking by another variable, pairing of data, averaging of technical replicates SAM –Fortran implementation 7X faster than R versions Extensive annotation for identified genes –Internal annotation of NetAffx, Source, Gene Ontology, Pathway information –Links to annotations in genomic databases Find genes correlated with a quantitative factor while controlling the number or proportion of false discoveries Find genes correlated with censored survival while controlling the number or proportion of false discoveries Analysis of variance –Time course analysis –Log intensities for non-reference designs –Mixed models

111 Selected Features of BRB-ArrayTools Gene set enrichment analysis –Find Gene Ontology groups and signaling pathways that are differentially expressed among classes Class prediction –DLDA, CCP, Nearest Neighbor, Nearest Centroid, Shrunken Centroids, SVM, Random Forests –Complete LOOCV, k-fold CV, repeated k-fold, .632+ bootstrap –Permutation significance of cross-validated error rate

112 Selected Features of BRB-ArrayTools Clustering tools for class discovery with reproducibility statistics on clusters –Internal access to Eisen's Cluster and TreeView Visualization tools including a rotating 3D principal components plot exportable to PowerPoint with rotation controls Extensible via the R plug-in feature Tutorials and datasets

113 Acknowledgements Kevin Dobbin Michael Radmacher Sudhir Varma Yingdong Zhao BRB-ArrayTools Development Team –Amy Lam –Ming-Chung Li –Supriya Menezes

