Feature Selection: Algorithms and Challenges

Presentation transcript:

Feature Selection: Algorithms and Challenges
Joint work with Yanglan Gang, Hao Wang & Xuegang Hu
Xindong Wu, University of Vermont, USA; Hefei University of Technology, China (Changjiang Scholar Chair Professor in Computer Applications, Hefei University of Technology)

My Research Background: From Deduction to Induction (timeline slide, roughly 1988–2004, including work on expert systems)

Outline
- Why feature selection
- What is feature selection
- Components of feature selection
- Some of my own research efforts
- Challenges in feature selection

1. Why Feature Selection?
High-dimensional data often contain irrelevant or redundant features, which:
- reduce the accuracy of data mining algorithms
- slow down the mining process
- cause problems in storage and retrieval
- make the results hard to interpret

2. What Is Feature Selection? Select the most “relevant” subset of attributes according to some selection criteria.

Outline
- Why feature selection
- What is feature selection
- Components of feature selection
- Some of my own research efforts
- Challenges in feature selection

Traditional Taxonomy
- Wrapper approach: features are selected as part of the mining algorithm itself.
- Filter approach: features are selected before a mining algorithm runs, using heuristics based on general characteristics of the data, rather than a learning algorithm, to evaluate the merit of feature subsets.
The wrapper approach is generally more accurate but also more computationally expensive.
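To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (make_classification, the decision-tree learner, and the 4-feature/3-feature budgets are all my own illustrative choices, not part of the slides): the filter step ranks features by mutual information without consulting any learner, while the naive wrapper step scores every small candidate subset by cross-validating the target learner itself.

```python
# Filter vs. wrapper, minimal illustrative sketch (assumes scikit-learn is installed).
from itertools import combinations

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)

# Filter: rank features by a general data characteristic (mutual information with the class).
filter_selector = SelectKBest(score_func=mutual_info_classif, k=4).fit(X, y)
print("filter keeps features:", filter_selector.get_support(indices=True))

# Wrapper: evaluate each candidate subset by the accuracy of the mining algorithm itself.
def wrapper_score(feature_idx):
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, list(feature_idx)], y, cv=5).mean()

best_subset = max(combinations(range(X.shape[1]), 3), key=wrapper_score)
print("best 3-feature wrapper subset:", best_subset)
```

The wrapper loop above already runs 5-fold cross-validation on 120 candidate subsets, which illustrates why wrappers are the more expensive option.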

Components of Feature Selection
Feature selection is actually a search problem, with four basic components:
- an initial subset
- one or more selection criteria (*)
- a search strategy (*)
- some given stopping conditions
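As a rough illustration of how those components fit together (this skeleton is mine, not the deck's), the four pieces map onto a generic search loop: evaluate stands for any selection criterion and propose for any search strategy, both passed in as functions.

```python
# Generic feature-selection search skeleton (illustrative only).
def select_features(evaluate, propose, initial=frozenset(), max_iters=100):
    """evaluate(subset) -> score; propose(subset) -> iterable of candidate subsets."""
    current, best_score = initial, evaluate(initial)   # initial subset + criterion
    for _ in range(max_iters):                         # stopping condition: budget
        improved = False
        for candidate in propose(current):             # search strategy
            score = evaluate(candidate)                # selection criterion
            if score > best_score:
                current, best_score, improved = candidate, score, True
        if not improved:                               # stopping condition: no gain
            break
    return current, best_score
```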

Feature Selection Criteria
Selection criteria generally use "relevance" to estimate, in one way or another, the goodness of a selected feature subset:
- distance measures
- information measures
- inconsistency measures
- relevance estimation
- selection criteria tied to learning algorithms (the wrapper approach)
A unified framework for relevance has been proposed recently.
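As one concrete example of an information measure (my own sketch; the slides do not prescribe this particular formula), the information gain of a discrete feature with respect to the class can be computed directly from value counts:

```python
# Information gain of a discrete feature with respect to the class (illustrative).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    base, n, remainder = entropy(labels), len(labels), 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        remainder += (len(subset) / n) * entropy(subset)
    return base - remainder

# A feature that perfectly splits the class has maximal gain.
print(information_gain(["a", "a", "b", "b"], [0, 0, 1, 1]))  # 1.0
```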

Search Strategy
Exhaustive search:
- every possible subset is evaluated and the best one is chosen
- guarantees the optimal solution
- low efficiency
- a modified approach: branch and bound (B&B)
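A minimal sketch of the exhaustive strategy (illustrative; evaluate is any of the criteria above). With N features there are 2^N - 1 non-empty subsets, which is exactly why branch and bound or heuristic strategies are needed in practice:

```python
# Exhaustive feature-subset search (illustrative; infeasible beyond small N).
from itertools import chain, combinations

def exhaustive_search(features, evaluate):
    all_subsets = chain.from_iterable(
        combinations(features, r) for r in range(1, len(features) + 1)
    )
    return max(all_subsets, key=evaluate)   # evaluates all 2^N - 1 non-empty subsets
```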

Search Strategy (2)
Heuristic search: sequential search, including SFS, SFFS, SBS and SBFS.
- SFS: start with an empty attribute set; add the "best" attribute, then the "best" of the remaining attributes; repeat until the maximum performance is reached.
- SBS: start with the entire attribute set; remove the "worst" attribute; repeat until the maximum performance is reached.
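A minimal SFS sketch under the same assumptions (evaluate is any subset criterion; this greedy loop is my illustration, not code from the deck):

```python
# Sequential forward selection (SFS), illustrative sketch.
def sfs(features, evaluate):
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining:
        # Score the addition of each remaining feature and keep the single best one.
        scores = {f: evaluate(selected + [f]) for f in remaining}
        best_f = max(scores, key=scores.get)
        if scores[best_f] <= best_score:   # stop once no addition improves performance
            break
        selected.append(best_f)
        remaining.remove(best_f)
        best_score = scores[best_f]
    return selected, best_score
```

SBS is the mirror image: start from the full attribute set and repeatedly remove the feature whose removal hurts the criterion least.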

Search Strategy (3)
Random search proceeds in two different ways:
- inject randomness into classical sequential approaches (simulated annealing, beam search, genetic algorithms, and random-start hill-climbing), or
- generate the next subset randomly.
The use of randomness can help the search escape local optima, and the optimality of the selected subset depends on the available resources.
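A sketch of the second flavour, purely random subset generation (again my illustration; evaluate is any subset criterion, and the budget parameter stands in for the "available resources"):

```python
# Random-subset search (illustrative); result quality depends on the evaluation budget.
import random

def random_search(features, evaluate, budget=1000, seed=0):
    rng = random.Random(seed)
    features = list(features)
    best_subset, best_score = None, float("-inf")
    for _ in range(budget):                       # available resources
        subset = rng.sample(features, rng.randint(1, len(features)))
        score = evaluate(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```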

Outline
- Why feature selection
- What is feature selection
- Components of feature selection
- Some of my own research efforts
- Challenges in feature selection

RITIO: Rule Induction Two In One
- Feature selection using information gain, applied in reverse order: delete the features that are least informative.
- Results are significant compared to forward selection [Wu et al. 1999, TKDE].
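In the spirit of that reverse ordering (a rough illustration only, not the published RITIO algorithm), one could repeatedly drop the feature whose relevance score is lowest, for instance using the information_gain helper sketched earlier:

```python
# Reverse-order elimination of the least-informative features (not RITIO itself).
def reverse_eliminate(columns, labels, gain, keep=10):
    """columns: dict mapping feature name -> list of values;
    gain(values, labels) -> score, e.g. the information_gain function above."""
    remaining = list(columns)
    while len(remaining) > keep:
        scores = {f: gain(columns[f], labels) for f in remaining}
        remaining.remove(min(scores, key=scores.get))   # drop the least informative
    return remaining
```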

Induction as Pre-processing
- Use one induction algorithm to select attributes for another induction algorithm; this can be a decision-tree method selecting attributes for rule induction, or vice versa.
- Accuracy results are not as good as expected. Reason: feature selection normally causes information loss.
- Details: [Wu 1999, PAKDD].
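A hedged sketch of the general idea (a scikit-learn illustration of mine, not the method from the PAKDD paper): fit a decision tree, keep only the attributes it actually splits on, and train a different learner on that reduced attribute set.

```python
# One learner (a decision tree) selecting attributes for another (illustrative sketch).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def tree_then_nb(X, y):
    """Select the attributes a decision tree uses, then fit naive Bayes on them."""
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    used = np.unique(tree.tree_.feature)
    selected = used[used >= 0]            # -2 marks leaf nodes in sklearn's tree arrays
    return selected, GaussianNB().fit(X[:, selected], y)
```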

Subspacing with Asymmetric Bagging
Useful when:
- the number of examples is less than the number of attributes, or
- the number of positive examples is smaller than the number of negative examples.
An example application: content-based information retrieval.
Details: [Tao et al., 2006, TPAMI].
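A rough sketch of the recipe (an illustration of the general idea only, not the exact method in the TPAMI paper): each base classifier sees all of the scarce positive examples, a bootstrap sample of the negatives of matching size, and a random subspace of the attributes; predictions are then aggregated by voting.

```python
# Asymmetric bagging plus random subspacing, illustrative sketch only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def asymmetric_bagging_subspace(X, y, n_estimators=25, subspace_size=20, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    models = []
    for _ in range(n_estimators):
        neg_sample = rng.choice(neg, size=len(pos), replace=True)   # balance the classes
        rows = np.concatenate([pos, neg_sample])
        cols = rng.choice(X.shape[1], size=min(subspace_size, X.shape[1]), replace=False)
        clf = DecisionTreeClassifier(random_state=0).fit(X[np.ix_(rows, cols)], y[rows])
        models.append((cols, clf))
    return models

def vote(models, X):
    preds = np.array([clf.predict(X[:, cols]) for cols, clf in models])
    return (preds.mean(axis=0) >= 0.5).astype(int)                  # majority vote
```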

Outline
- Why feature selection
- What is feature selection
- Components of feature selection
- Some of my own research efforts
- Challenges in feature selection

Challenges in Feature Selection (1): Dealing with ultra-high dimensional data and feature interactions
Traditional feature selection encounters two major problems when the dimensionality runs into tens or hundreds of thousands:
1. The curse of dimensionality: most existing feature selection algorithms have quadratic or higher time complexity in the number of features N, so they are difficult to scale up to such dimensionality.
2. The relative shortage of instances: the dimensionality N can sometimes greatly exceed the number of instances I.

Challenges in Feature Selection (2): Dealing with active instances (Liu et al., 2005)
When the dataset is huge, feature selection performed on the whole dataset is inefficient, so instance selection is necessary:
- Random sampling: pure random sampling, without exploiting any data characteristics.
- Active feature selection: selective sampling that uses data characteristics achieves better or equally good results with a significantly smaller number of instances.

Challenges in Feature Selection (3): Dealing with new data types (Liu et al., 2005)
The traditional data type is an N*M data matrix. Owing to the growth of computer and Internet/Web techniques, new data types are emerging:
- text-based data (e.g., e-mails, online news, newsgroups)
- semi-structured data (e.g., HTML, XML)
- data streams

Challenges in Feature Selection (4): Unsupervised feature selection
- Feature selection vs. classification: almost every classification algorithm
- The subspace method and the curse of dimensionality in classification
- Subspace clustering

Challenges in Feature Selection (5): Dealing with predictive-but-unpredictable attributes in noisy data
- Attribute noise is difficult to process, and removing noisy instances is dangerous.
- Predictive attributes: essential to classification.
- Unpredictable attributes: cannot be predicted by the class and the other attributes.
- Noise identification, cleansing, and measurement need special attention [Yang et al., 2004].

Challenges in Feature Selection (6): Dealing with inconsistent and redundant features
- Redundancy can indicate reliability, and inconsistency can also indicate a problem that needs handling.
- A question for researchers in Rough Set Theory: what is the purpose of feature selection? Can the usefulness of reduction really be demonstrated, for example in data mining accuracy?
- Removing attributes can well result in information loss; when the data is very noisy, removals can cause a very different data distribution.
- Discretization can possibly bring new issues.

Concluding Remarks
- Feature selection is and will remain an important issue in data mining, machine learning, and related disciplines.
- Feature selection pays a price in accuracy for efficiency.
- Researchers need to keep the bigger picture in mind, not just perform selection for the sake of feature selection.