
1 Feature Selection of DNA Microarray Data
Presented by: Mohammed Liakat Ali
Course: 60-520, Fall 2005, University of Windsor
December 2, 2005

2 Outline
- Introduction
- Deployment of Feature Selection Methods
- Feature Selection Methods
- Class Separability Measures
- Review of Minimum Redundancy Feature Selection Methods
- Comparison with our Experimental Results
- Conclusions
- Q & A

3 Introduction
- Microarray Data
- Representation of Objects
- Classifiers
- Feature Selection vs. Feature Extraction
- Optimal Feature Set for Classification

4 Microarray Data
- Microarray technology is one of the most promising tools available to life science researchers.
- Two technologies are used to produce DNA microarrays: cDNA arrays and the Affymetrix technology (also known as the DNA chip).
- The final result of a microarray experiment is a set of numbers representing the expression levels of DNA fragments, i.e., genes.

5 Representation of Objects
- Objects are represented by their characteristic features.
- Three main reasons to keep dimensionality low: measurement cost, classification accuracy, and the need to identify and monitor the target disease or function types.
- It is very important to represent an object with features that have high discriminating ability.

6 Classifiers
- A classifier uses the features of an object and a discriminant function to assign the object to a category, i.e., a class.
- The domain-independent theory of classification is based on the abstraction provided by the features of the input data.
- Classifiers can be divided into linear and non-linear.

7 Feature Selection vs. Feature Extraction
- In feature selection we try to find the best subset of the input feature set.
- In feature extraction we create new features by transforming or combining the original feature set.

8 Optimal Feature Subset for Classification
- To find the optimal feature subset we have to evaluate an objective function over candidate subsets.
- This has exponential complexity.

9 Deployment of Feature Selection Methods
Based on their relation to the induction algorithm, feature selection methods can be grouped as:
- Embedded: part of the induction algorithm itself.
- Filter: a process separate from the induction algorithm.
- Wrapper: also a process separate from the induction algorithm, but one that uses the induction algorithm as a subroutine.

10 Deployment of Feature Selection Methods (figure)

11 Feature Selection Methods
Based on the optimality of the solution, feature selection methods can be divided into:
- Optimal selection methods
- Suboptimal selection methods

12 Feature Selection Methods (figure)

13 Optimal Selection Methods
- Exhaustive search
- Branch and bound search

14 Exhaustive Search
- Evaluate all possible subsets consisting of $m$ features out of $d$ total features, i.e., $\binom{d}{m}$ subsets.
- Guaranteed to find the optimal subset.
- An exponential problem.
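
A minimal sketch of exhaustive search in Python; the criterion function J and the feature list are placeholders to be supplied by the caller:

```python
from itertools import combinations

def exhaustive_search(features, m, J):
    """Evaluate all C(d, m) subsets of size m and return the best one."""
    best_subset, best_score = None, float("-inf")
    for subset in combinations(features, m):
        score = J(subset)  # user-supplied criterion function
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```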

15 Branch and Bound Search
- Only a fraction of all possible feature subsets is evaluated.
- Guaranteed to find the optimal subset.
- The criterion function must satisfy the monotonicity property, i.e., for nested subsets $\chi_1 \subset \chi_2 \subset \dots \subset \chi_k$, $J(\chi_1) \le J(\chi_2) \le \dots \le J(\chi_k)$.
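
A sketch of a Narendra-Fukunaga style branch and bound, assuming a monotone criterion J; it starts from the full set and deletes features, pruning any branch that cannot beat the current bound:

```python
def branch_and_bound(features, m, J):
    """Return the best subset of size m, assuming J is monotone
    (removing features never increases J)."""
    best = {"subset": None, "score": float("-inf")}

    def search(subset, start):
        score = J(subset)
        # Monotonicity: every descendant of this node is a smaller subset
        # with J no larger than `score`, so prune if the bound is not beaten.
        if score <= best["score"]:
            return
        if len(subset) == m:
            best["subset"], best["score"] = subset, score
            return
        # delete one feature at a time; `start` avoids duplicate subsets
        for i in range(start, len(subset)):
            search(subset[:i] + subset[i + 1:], i)

    search(list(features), 0)
    return best["subset"], best["score"]
```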

16 Suboptimal Selection Methods
- Best Individual Features
- Sequential Forward Selection (SFS)
- Sequential Backward Selection (SBS)
- "Plus l take away r" Selection
- Sequential Forward Floating Search (SFFS)
- Sequential Backward Floating Search (SBFS)

17 Best Individual Features
- Evaluate all $d$ features individually using a scalar criterion function.
- Select the $m$ best features.
- Clearly a suboptimal method.
- Complexity is O(d).
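
A minimal sketch, assuming a scalar criterion J_scalar that scores a single feature:

```python
def best_individual_features(features, m, J_scalar):
    """Rank features by an individual (scalar) criterion and keep the top m."""
    return sorted(features, key=J_scalar, reverse=True)[:m]
```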

18 Sequential Forward Selection (SFS)
- Start by selecting the best single feature using a scalar criterion function.
- Then add one feature at a time: the feature that, together with the already selected ones, maximizes the criterion function J(.).
- A greedy algorithm; it cannot retract.
- Selecting m features requires O(d·m) subset evaluations.
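
A sketch of SFS, assuming a set-valued criterion J (the initial single-feature step falls out of the same loop):

```python
def sequential_forward_selection(features, m, J):
    """Greedily grow a subset: at each step add the feature that
    maximizes the criterion J over the enlarged subset."""
    selected, remaining = [], list(features)
    while len(selected) < m and remaining:
        best_f = max(remaining, key=lambda f: J(selected + [f]))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```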

19 Sequential Backward Selection (SBS)
- Start with all $d$ features selected.
- Delete one feature at a time, keeping the subset that maximizes the criterion function J(.).
- Also a greedy algorithm; it cannot retract.
- Reducing to m features requires O(d·(d−m)) subset evaluations.
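
The mirror-image sketch for SBS, under the same assumptions:

```python
def sequential_backward_selection(features, m, J):
    """Greedily shrink the full set: at each step drop the feature whose
    removal leaves the subset with the highest criterion value."""
    selected = list(features)
    while len(selected) > m:
        drop = max(selected, key=lambda f: J([g for g in selected if g != f]))
        selected.remove(drop)
    return selected
```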

20 "Plus l take away r" Selection
- First add l features by forward selection, then discard r features by backward selection.
- The optimal l and r must be decided in advance.
- Unlike SFS and SBS, it avoids the subset-nesting problem.

21 Sequential Forward Floating Search (SFFS)
- A generalized "plus l take away r" algorithm in which the values of l and r are determined automatically.
- Close to the optimal solution.
- Affordable computational cost.
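
A simplified SFFS sketch, assuming a set-valued criterion J; after each forward step it floats backward as long as removal improves on the best score previously recorded at the smaller subset size:

```python
def sffs(features, m, J):
    """Sequential forward floating search (simplified)."""
    selected, best_at_size = [], {}
    while len(selected) < m:
        # forward step: add the single best feature
        f = max((g for g in features if g not in selected),
                key=lambda g: J(selected + [g]))
        selected.append(f)
        k = len(selected)
        best_at_size[k] = max(best_at_size.get(k, float("-inf")), J(selected))
        # floating step: drop features while that beats the recorded best
        while len(selected) > 2:
            drop = max(selected, key=lambda g: J([h for h in selected if h != g]))
            reduced = [h for h in selected if h != drop]
            if J(reduced) > best_at_size.get(len(reduced), float("-inf")):
                selected = reduced
                best_at_size[len(reduced)] = J(reduced)
            else:
                break
    return selected
```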

22 Sequential Backward Floating Search (SBFS)
- Also a generalized "plus l take away r" algorithm, like SFFS.
- The values of l and r are likewise determined automatically.
- Close to the optimal solution, as with SFFS.
- More efficient than SFFS when m is closer to d than to 1.

23 Class Separability Measures
- Divergence
- Scatter matrices

24 Divergence
- By the Bayes rule, given two classes $\omega_1$ and $\omega_2$ and a feature vector $x$, we select $\omega_1$ if $P(\omega_1 \mid x) > P(\omega_2 \mid x)$.
- Hence the ratio $P(\omega_1 \mid x) / P(\omega_2 \mid x)$ has discriminating capability.

25 Divergence
- For given $P(\omega_1)$ and $P(\omega_2)$, the same information resides in the log-likelihood ratio $D_{12}(x) = \ln \dfrac{p(x \mid \omega_1)}{p(x \mid \omega_2)}$.
- For completely overlapping classes, $D_{12}(x) = 0$.

26 Divergence
- Since $x$ takes different values, it is natural to consider the mean value over class $\omega_1$: $D_{12} = \int p(x \mid \omega_1) \ln \dfrac{p(x \mid \omega_1)}{p(x \mid \omega_2)} \, dx$.
- Similarly, for $\omega_2$: $D_{21} = \int p(x \mid \omega_2) \ln \dfrac{p(x \mid \omega_2)}{p(x \mid \omega_1)} \, dx$.
- The sum $d_{12} = D_{12} + D_{21}$ is the divergence.
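
The slide stops at the general definition; for Gaussian class densities (the standard special case given in the cited Theodoridis and Koutroumbas text) the divergence has the closed form

$$
d_{12} = \tfrac{1}{2}\,\operatorname{trace}\!\left(\Sigma_1^{-1}\Sigma_2 + \Sigma_2^{-1}\Sigma_1 - 2I\right)
       + \tfrac{1}{2}\,(\mu_1 - \mu_2)'\left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)(\mu_1 - \mu_2),
$$

which for one-dimensional, equal-variance classes reduces to $d_{12} = (\mu_1 - \mu_2)^2 / \sigma^2$.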

27 Scatter Matrices
- Computation of the divergence is not easy for non-Gaussian distributions.
- The within-class scatter matrix is defined as $S_w = \sum_{i=1}^{M} P_i S_i$, where $P_i$ is the a priori probability of class $\omega_i$ and $S_i$ is the covariance matrix of class $\omega_i$: $S_i = E\left[(x - \mu_i)(x - \mu_i)'\right]$.

28 Scatter Matrices
- The between-class scatter matrix is defined as $S_b = \sum_{i=1}^{M} P_i (\mu_i - \mu_0)(\mu_i - \mu_0)'$, where $\mu_0 = \sum_{i=1}^{M} P_i \mu_i$ is the global mean vector.

29 Scatter Matrices
- The total mixture scatter matrix is defined as $S_m = E\left[(x - \mu_0)(x - \mu_0)'\right]$.
- It can be shown that $S_m = S_w + S_b$.

30 Scatter Matrices
The following criterion functions, among others, can be defined:
- $J_1 = \dfrac{\operatorname{trace}(S_m)}{\operatorname{trace}(S_w)}$
- $J_2 = \dfrac{|S_m|}{|S_w|}$
- $J_3 = \operatorname{trace}\left(S_w^{-1} S_m\right)$
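
A numpy sketch that computes the scatter matrices and the three criteria from a labeled data matrix; X (samples x features) and y are illustrative inputs:

```python
import numpy as np

def scatter_criteria(X, y):
    """Within-class (Sw), between-class (Sb) and mixture (Sm) scatter
    matrices, plus the J1-J3 separability criteria."""
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)
    mu0 = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c, p in zip(classes, priors):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += p * np.cov(Xc, rowvar=False, bias=True)  # class covariance S_i
        Sb += p * np.outer(mu_c - mu0, mu_c - mu0)
    Sm = Sw + Sb
    J1 = np.trace(Sm) / np.trace(Sw)
    J2 = np.linalg.det(Sm) / np.linalg.det(Sw)
    J3 = np.trace(np.linalg.solve(Sw, Sm))
    return J1, J2, J3
```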

31 Scatter Matrices
For an equally probable two-class problem in one dimension:
- $|S_w|$ is proportional to $\sigma_1^2 + \sigma_2^2$
- $|S_b|$ is proportional to $(\mu_1 - \mu_2)^2$

32 Review of Minimum Redundancy Feature Selection Methods
We now discuss two minimum-redundancy feature selection methods, given in the following two papers:
- Ding and Peng (2003)
- Yu and Liu (2004)

33 Review of Minimum Redundancy Feature Selection Methods
In Ding and Peng (2003):
- A filter method is used.
- The algorithm is SFS.
- The first feature is selected by maximizing the relevance criterion $V_I$ (mutual information with the class) over all genes in the set S.

34 Review of Minimum Redundancy Feature Selection Methods
- Suppose $m$ features have already been selected into the set X.
- Additional features are selected from the set Y = S − X.
- The following two conditions are optimized simultaneously:
1. Maximum relevance: $\max_{y \in Y} I(h, y)$, where $h$ is the target class.
2. Minimum redundancy: $\min_{y \in Y} \dfrac{1}{|X|} \sum_{x \in X} I(x, y)$.

35 Review of Minimum Redundancy Feature Selection Methods
- The mutual information $I$ of two variables $x$ and $y$ is defined as $I(x, y) = \sum_{x, y} p(x, y) \log \dfrac{p(x, y)}{p(x)\,p(y)}$.
- The importance of minimum redundancy is highlighted in the paper.
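
A sketch of the two pieces in Python: an empirical mutual-information estimate for discretized expression vectors, and a greedy selection step using the difference form of the relevance/redundancy trade-off (the paper also considers a quotient form). Here `genes` is an assumed dict mapping gene names to discretized value sequences and `target` is the class label sequence:

```python
from collections import Counter
import math

def mutual_information(x, y):
    """Empirical I(x, y) for two discrete (or pre-binned) sequences."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    # p(a,b) = c/n, p(a) = px[a]/n, p(b) = py[b]/n
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def mrmr_select(genes, target, m):
    """Greedy minimum-redundancy selection (difference form)."""
    selected = [max(genes, key=lambda g: mutual_information(genes[g], target))]
    while len(selected) < m:
        def score(g):
            relevance = mutual_information(genes[g], target)
            redundancy = sum(mutual_information(genes[g], genes[s])
                             for s in selected) / len(selected)
            return relevance - redundancy
        selected.append(max((g for g in genes if g not in selected), key=score))
    return selected
```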

36 Review of Minimum Redundancy Feature Selection Methods
In Yu and Liu (2004):
- A filter method is used.
- The algorithm is:
Relevance analysis
1. Order features by decreasing ISU values.
Redundancy analysis
2. Initialize Fi with the first feature in the list.
3. Find and remove all features for which Fi forms an approximate redundant cover.
4. Set Fi to the next remaining feature in the list and repeat step 3 until the end of the list.

37 Review of Minimum Redundancy Feature Selection Methods
- Combines SFS with elimination.
- The entropy of a variable X is defined as $H(X) = -\sum_i P(x_i) \log_2 P(x_i)$.
- The entropy of X after observing the values of another variable Y is defined as $H(X \mid Y) = -\sum_j P(y_j) \sum_i P(x_i \mid y_j) \log_2 P(x_i \mid y_j)$.
- The amount by which the entropy of X decreases reflects the additional information about X provided by Y and is called the information gain: $IG(X \mid Y) = H(X) - H(X \mid Y)$.
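
These quantities are easy to estimate empirically; a minimal sketch over discrete sequences, with illustrative names:

```python
from collections import Counter
import math

def entropy(x):
    """H(X) = -sum_i p(x_i) log2 p(x_i), from a discrete sequence."""
    n = len(x)
    return -sum((c / n) * math.log2(c / n) for c in Counter(x).values())

def info_gain(x, y):
    """IG(X|Y) = H(X) - H(X|Y), with H(X|Y) estimated empirically."""
    n = len(x)
    h_cond = sum((cy / n) * entropy([xi for xi, yi in zip(x, y) if yi == yv])
                 for yv, cy in Counter(y).items())
    return entropy(x) - h_cond
```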

38 Review of Minimum Redundancy Feature Selection Methods
- Symmetrical uncertainty is defined as $SU(X, Y) = 2\,\dfrac{IG(X \mid Y)}{H(X) + H(Y)}$.
- Individual C-correlation ($ISU_i$): the correlation between a feature $F_i$ and the class C.
- Combined C-correlation ($CSU_{i,j}$): the correlation between a pair of features $F_i$ and $F_j$ ($i \ne j$) and the class C.
- Approximate redundant cover: for two features $F_i$ and $F_j$, $F_i$ forms an approximate redundant cover for $F_j$ iff $ISU_i \ge ISU_j$ and $ISU_i \ge CSU_{i,j}$.
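
Building on the entropy helpers in the previous sketch, here is a sketch of the redundancy-analysis loop, assuming the ISU and combined CSU values have been precomputed into lookup tables (all names illustrative):

```python
def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X|Y) / (H(X) + H(Y));
    uses entropy() and info_gain() from the previous sketch."""
    return 2.0 * info_gain(x, y) / (entropy(x) + entropy(y))

def rbf_select(features, isu, csu):
    """Redundancy-based filter: isu[f] is a feature's C-correlation,
    csu[(f, g)] the combined C-correlation of the pair (f, g)."""
    remaining = sorted(features, key=lambda f: isu[f], reverse=True)
    selected = []
    while remaining:
        fi = remaining.pop(0)
        selected.append(fi)
        # remove every feature for which fi forms an approximate redundant cover
        remaining = [fj for fj in remaining
                     if not (isu[fi] >= isu[fj] and isu[fi] >= csu[(fi, fj)])]
    return selected
```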

39 Comparison with our Experimental Results
- To investigate the problem of feature selection, we implemented a filter method.
- We used FDR (Fisher's discriminant ratio) as the criterion function.
- Initial gene selection was based on gene ranking.
- The Fisher and Loog-Duin discriminant techniques were then applied to transform the feature space.
- Linear and quadratic classifiers were then used.
- 10-fold cross-validation was applied.
- We used Leukemia, Lung cancer, and Breast cancer data from the UCI repository.
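
A numpy sketch of FDR ranking for a two-class problem, assuming X is a samples x genes matrix and y a binary label vector:

```python
import numpy as np

def fdr_scores(X, y):
    """Fisher's discriminant ratio per gene:
    FDR = (mu1 - mu2)^2 / (sigma1^2 + sigma2^2)."""
    c1, c2 = np.unique(y)
    X1, X2 = X[y == c1], X[y == c2]
    return ((X1.mean(axis=0) - X2.mean(axis=0)) ** 2
            / (X1.var(axis=0) + X2.var(axis=0)))

# keep the k top-ranked genes, e.g.:
# top_k = np.argsort(fdr_scores(X, y))[::-1][:k]
```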

40 Comparison with our Experimental Results
Table 1. Comparison of gene selection results on the Leukemia, Lung cancer, and Breast cancer datasets (columns: #G, #S, and #SG per method; the numeric entries are not preserved in this transcript).
RBF = Redundancy-Based Filter
FQ = Fisher's Discriminant + Quadratic classifier
FL = Fisher's Discriminant + Linear classifier
LDQ = Loog-Duin's Discriminant + Quadratic classifier
LDL = Loog-Duin's Discriminant + Linear classifier

41 Comparison with our Experimental Results
From the table we can observe that:
- RBF selected very compact gene sets in all cases.
- FQ and FL outperform LDQ and LDL on all 3 datasets.
- RBF outperforms all other methods on 1 dataset, by a big margin.
- FQ and FL jointly outperform the others on 1 dataset, also by a big margin.
- RBF, FQ, and FL have comparable results on 1 dataset.

42 Conclusions
- We can conclude that minimum-redundancy methods select very compact gene sets.
- This can help to identify and monitor the target disease or function types.

43 Conclusions
- In our experience, the performance of LDQ is on average better than FQ, because Fisher discriminant analysis is linear in nature.
- Here genes were selected by FDR ranking; because of this, the performance of FQ and FL may have been enhanced.
- From the results we can also conclude that gene selection by ranking alone has some merit.

44 References
1. Blum, A. and Langley, P. (1997). Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245-271.
2. Cover, T. M. (1974). The Best Two Independent Measurements Are Not the Two Best. IEEE Trans. Systems, Man, and Cybernetics, vol. 4, pp. 116-117.
3. Ding, C. and Peng, H. C. (2003). Minimum Redundancy Feature Selection from Microarray Gene Expression Data. Proc. Second IEEE Computational Systems Bioinformatics Conf., pp. 523-528.
4. Duda, R., Hart, P., and Stork, D. (2001). Pattern Classification. John Wiley and Sons, Inc., New York, NY, 2nd edition.
5. Van Horn, K. S. and Martinez, T. (1994). The Minimum Feature Set Problem. Neural Networks, 7(3):491-494.

45 References
6. Jain, A. K., Duin, R. P. W., and Mao, J. (2000). Statistical Pattern Recognition: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):4-37.
7. Loog, M. and Duin, R. P. W. (2004). Linear Dimensionality Reduction via a Heteroscedastic Extension of LDA: The Chernoff Criterion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):732-739.
8. Theodoridis, S. and Koutroumbas, K. (2003). Pattern Recognition. Elsevier Academic Press, second edition.
9. Yu, L. and Liu, H. (2004). Redundancy Based Feature Selection for Microarray Data. Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 737-742.

46 Q & A
Thank you.

