
1 CZ5225: Modeling and Simulation in Biology
Lecture 8: Microarray disease predictor-gene selection by feature selection methods
Prof. Chen Yu Zong
Tel: 6874-6877
Email: phacyz@nus.edu.sg
http://bidd.nus.edu.sg
Room 07-24, level 8, S17, National University of Singapore

2 Gene selection?
All classification methods we have studied so far use all genes/features
Molecular biologists and oncologists are convinced that only a small subset of genes is responsible for particular biological properties, so they want the genes most important in discriminating disease types and treatment outcomes
There are also practical reasons: a clinical device measuring thousands of genes is not financially practical

3 Disease Example: Childhood Leukemia
Cancer in the cells of the immune system
Approx. 35 new cases in Denmark every year
50 years ago – all patients died
Today – approx. 78% are cured
Risk groups:
– Standard
– Intermediate
– High
– Very high
– Extra high
Treatment:
– Chemotherapy
– Bone marrow transplantation
– Radiation

4 Risk Classification Today
Prognostic factors:
– Immunophenotype
– Age
– Leukocyte count
– Number of chromosomes
– Translocations
– Treatment response
Patient:
– Clinical data
– Immunophenotyping
– Morphology
– Genetic measurements
– Microarray technology
Risk group: standard, intermediate, high, very high, extra high

5 Study and Diagnosis of Childhood Leukemia
Diagnostic bone marrow samples from leukemia patients
Platform: Affymetrix Focus Array – 8793 human genes
Immunophenotype:
– 18 patients with precursor B immunophenotype
– 17 patients with T immunophenotype
Outcome 5 years from diagnosis:
– 11 patients with relapse
– 18 patients in complete remission

6 Problem: Too much data!
[Table: raw expression values for about 30 Affymetrix probe sets (rows such as 209619_at, 32541_at, 206398_s_at, …) across nine patients, Pat1 to Pat9.]

7 So, what do we do?
Reduction of dimensions:
– Principal Component Analysis (PCA)
Feature selection (gene selection):
– Significant genes: t-test
– Selection of a limited number of genes

8 Principal Component Analysis (PCA)
Used for visualization of complex data
Developed to capture as much of the variation in data as possible
Generic features of principal components:
– Summary variables
– Linear combinations of the original variables
– Uncorrelated with each other
– Capture as much of the original variance as possible

9 Principal components
1st principal component (PC1): the direction along which there is greatest variation
2nd principal component (PC2): the direction with maximum variation left in the data, orthogonal to the direction (i.e. vector) of PC1
3rd principal component (PC3): the direction with maximal variation left in the data, orthogonal to the plane of PC1 and PC2 (less frequently used)

10 Example: 3 dimensions => 2 dimensions

11 PCA - Example

12 PCA on all Genes
Leukemia data, precursor B and T
Plot of 34 patients, 8793 dimensions (genes) reduced to 2
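A minimal sketch of this kind of reduction, assuming the expression data sit in a patients-by-genes NumPy matrix; the random X below is only a stand-in for the real Focus Array values:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical expression matrix: 34 patients x 8793 genes
# (real values would come from the Affymetrix Focus Array data).
rng = np.random.default_rng(0)
X = rng.normal(size=(34, 8793))

pca = PCA(n_components=2)      # keep the two leading principal components
X2 = pca.fit_transform(X)      # 34 x 2 matrix of PC1/PC2 scores per patient

print(X2.shape)                          # (34, 2)
print(pca.explained_variance_ratio_)     # variance captured by PC1, PC2
```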

13 Ranking of PCs and Gene Selection

14 The t-test method
Compares the means (μ1 and μ2) of two data sets
– Tells us if they can be assumed to be equal
Can be used to identify significant genes
– i.e. those that change their expression a lot!
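For reference, the two-sample (Welch) t statistic for one gene, with per-class means $\bar{x}_1, \bar{x}_2$, variances $s_1^2, s_2^2$ and sample sizes $n_1, n_2$, is

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} $$

Genes with large $|t|$ differ strongly between the two classes.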

15 PCA on 100 top significant genes based on t-test
Plot of 34 patients, 100 dimensions (genes) reduced to 2
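A sketch of this filter-then-project step, again with hypothetical stand-in data (X is patients x genes; y holds illustrative immunophenotype labels, 0 = precursor B, 1 = T):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(34, 8793))          # stand-in expression matrix
y = np.array([0] * 17 + [1] * 17)        # illustrative class labels

# Welch t statistic per gene (axis=0 runs the test gene by gene).
t, p = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
top100 = np.argsort(np.abs(t))[::-1][:100]   # 100 most significant genes

# PCA on the filtered matrix, as in the plot on this slide.
X2 = PCA(n_components=2).fit_transform(X[:, top100])
print(X2.shape)   # (34, 2)
```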

16 The next question: Can we classify new patients?
Plot of 34 patients, 100 dimensions (genes) reduced to 2, with a new patient (P99) of unknown class marked on the plot

17 Feature Selection Problem Statement
A process of selecting a minimum subset of features that is sufficient to construct a hypothesis consistent with the training examples (Almuallim and Dietterich, 1991)
Selecting a minimum subset G such that P(C|G) is equal or as close as possible to P(C|F) (Koller and Sahami, 1996)

18 Feature Selection Strategies
Wrapper methods
– Rely on a predetermined classification algorithm
– Use predictive accuracy as the goodness measure
– High accuracy, computationally expensive
Filter methods
– Separate feature selection from classifier learning
– Rely on general characteristics of the data (distance, correlation, consistency)
– No bias towards any learning algorithm, fast
Embedded methods
– Jointly or simultaneously train both a classifier and a feature subset by optimizing an objective function that rewards classification accuracy and penalizes the use of more features

19 Feature Selection Strategies
Filter methods
– Features (genes) are scored according to the evidence of predictive power and then ranked; the top s genes with the highest scores are selected and used by the classifier
– Scores: t-statistics, F-statistics, signal-to-noise ratio, …
– The number of features selected, s, is then determined by cross-validation (as sketched below)
– Advantage: fast and easy to interpret
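A minimal sketch of choosing s by cross-validation, using scikit-learn's F-statistic filter and a linear SVM; the data and the candidate values of s are illustrative:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(34, 2000))          # stand-in patients x genes matrix
y = rng.integers(0, 2, size=34)          # stand-in class labels

for s in (10, 50, 100, 500):
    # Score genes by F-statistic, keep the top s, then classify.
    # The pipeline refits the filter inside each CV fold, which avoids
    # leaking the test fold into the gene selection.
    model = make_pipeline(SelectKBest(f_classif, k=s), LinearSVC())
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"s={s}: CV accuracy {acc:.3f}")
```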

20 Feature Selection Strategies
Problems of filter methods
– Genes are considered independently
– Redundant genes may be included
– Genes that jointly have strong discriminant power but individually contribute weakly will be ignored
– The filtering procedure is independent of the classification method

21 Feature Selection
Step-wise variable selection: n* < N effective variables modeling the classification function
[Diagram: N features, N steps; Step 1 evaluates one feature, …, Step N evaluates N features]

22 Feature Selection
Step-wise selection of the features
[Diagram: at each step, the features are split into ranked features and discarded features]

23 Feature Selection Strategies
Wrapper methods
– Iterative search: many feature subsets are scored based on classification performance and the best is used
– Subset selection: forward selection, backward selection, and their combinations (a forward-selection sketch follows this list)
– The problem is very similar to variable selection in regression
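A sketch of greedy forward selection using scikit-learn's generic SequentialFeatureSelector; this is one off-the-shelf wrapper implementation, not necessarily the one the slides have in mind, and the data are stand-ins:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
X = rng.normal(size=(34, 50))    # deliberately few genes: greedy search is costly
y = rng.integers(0, 2, size=34)

# Forward selection: start from the empty set and repeatedly add the
# single gene that most improves cross-validated accuracy.
sfs = SequentialFeatureSelector(LinearSVC(), n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)
print(np.flatnonzero(sfs.get_support()))   # indices of the selected genes
```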

24 Feature Selection Strategies
Wrapper methods
– Analogous to variable selection in regression
– Exhaustive search is impossible, so greedy algorithms are used instead
– Confounding can occur in both scenarios; in regression it is usually recommended not to include highly correlated covariates in the analysis, but confounding is impossible to avoid in feature selection for microarray classification

25 Feature Selection Strategies
Problems of wrapper methods
– Computationally expensive: for each feature subset considered, the classifier is built and evaluated
– Exhaustive search is impossible; greedy search only
– Easy to overfit

26 Feature Selection Strategies
Embedded methods
– Attempt to jointly or simultaneously train both a classifier and a feature subset
– Often optimize an objective function that rewards classification accuracy and penalizes the use of more features
– Intuitively appealing
– Examples: nearest shrunken centroids, CART and other tree-based algorithms (a nearest-shrunken-centroids sketch follows this list)
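A minimal sketch of the nearest shrunken centroids idea via scikit-learn's NearestCentroid with a shrink threshold; the data and threshold value are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(4)
X = rng.normal(size=(34, 500))   # stand-in patients x genes matrix
y = rng.integers(0, 2, size=34)  # stand-in class labels

# shrink_threshold shrinks each class centroid toward the overall centroid;
# genes whose shrunken class centroids coincide stop influencing the
# prediction, so classification and gene selection happen jointly.
clf = NearestCentroid(shrink_threshold=0.5)
clf.fit(X, y)
print(clf.predict(X[:3]))        # predicted classes for the first 3 patients
```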

27 Feature Selection Strategies
Example of wrapper methods: Recursive Feature Elimination (RFE), sketched below
1. Train the classifier with an SVM (or LDA)
2. Compute the ranking criterion for all features
3. Remove the feature with the smallest ranking criterion
4. Repeat steps 1-3
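A sketch of RFE with a linear SVM, using scikit-learn's implementation; data are stand-ins:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(34, 200))   # stand-in patients x genes matrix
y = rng.integers(0, 2, size=34)  # stand-in class labels

# The linear SVM's weights serve as the ranking criterion; at each round
# the lowest-weighted feature is removed and the SVM is refit (steps 1-4).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1)
rfe.fit(X, y)
print(np.flatnonzero(rfe.support_))   # indices of the 10 surviving genes
```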

28 Feature Ranking
Weighting and ranking individual features
Selecting top-ranked ones for feature selection
Advantages
– Efficient: O(N) in terms of dimensionality N
– Easy to implement
Disadvantages
– Hard to determine the threshold
– Unable to consider correlation between features

29 Leave-one-out method

30 Basic idea
Use the leave-one-out (LOO) criterion, or an upper bound on LOO, to select features by searching over all possible subsets of n features for the one that minimizes the criterion.
When such a search is impossible because there are too many possibilities, scale each feature by a real-valued variable and compute this scaling via gradient descent on the leave-one-out bound. One can then keep the features corresponding to the largest scaling variables.
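For concreteness, a sketch of the exact LOO error that these bounds approximate: train on all patients but one, test on the held-out patient, and repeat once per patient (data are stand-ins):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(34, 20))    # stand-in matrix for a candidate gene subset
y = rng.integers(0, 2, size=34)

# 34 fits: each patient is held out once. The bounds on the following
# slides estimate this quantity without retraining the SVM 34 times.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print("LOO error:", 1 - acc)
```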

31 Illustration
Rescale features to minimize the LOO bound R²/M².
[Figure: before rescaling, R²/M² > 1 (margin M smaller than the enclosing radius R); after rescaling, R²/M² = 1 (M = R).]

32 Three upper bounds on LOO
Radius margin bound: simple to compute, continuous; very loose, but often tracks LOO well
Jaakkola-Haussler bound: somewhat tighter; simple to compute; discontinuous, so it needs smoothing; valid only for SVMs with no b term
Span bound: tight as a Britney Spears outfit; complicated to compute; discontinuous, so it needs smoothing

33 Radius margin bound
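The slide's equation was an image; a standard statement of this bound (Vapnik), for a hard-margin SVM, is

$$ \#\{\text{LOO errors}\} \;\le\; \frac{R^2}{M^2} \;=\; R^2\,\lVert w \rVert^2, $$

where R is the radius of the smallest sphere enclosing the training points in feature space and M = 1/||w|| is the margin.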

34 Jaakkola-Haussler bound
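Again the equation was an image; the usual form of this bound, for an SVM with no b term and ℓ training points, counts the points whose dual coefficient can flip the decision:

$$ \text{LOO error} \;\le\; \frac{1}{\ell}\,\bigl|\{\, p : \alpha_p K(x_p, x_p) \ge y_p f(x_p) \,\}\bigr|, $$

where α_p are the SVM dual coefficients and f is the trained decision function.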

35 Span bound
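In Chapelle and Vapnik's formulation (assumed here, since the slide's equation was an image), point p is misclassified by the LOO procedure when α_p S_p² ≥ y_p f(x_p), where S_p is the span of support vector x_p, giving

$$ \text{LOO error} \;\le\; \frac{1}{\ell}\,\bigl|\{\, p : \alpha_p S_p^2 \ge y_p f(x_p) \,\}\bigr|. $$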

36 Classification function with gene selection
We add a scaling parameter σ to the SVM, which scales the genes; genes corresponding to small σj are removed. The SVM function has the form:
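The formula itself was an image; a common form of the scaled decision function, as in Weston et al.'s feature selection for SVMs (assumed to match the slide), is

$$ f(x) \;=\; \sum_{i} \alpha_i\, y_i\, K(\sigma * x,\; \sigma * x_i) + b, $$

where σ * x denotes the componentwise product (σ1 x1, …, σn xn), so that driving σj to zero removes gene j from the kernel.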

