
1 Lab 1 Getting started with CLOP and the Spider package

2 CLOP Tutorial
CLOP = Challenge Learning Object Package, based on the Spider package developed at the Max Planck Institute. Two basic abstractions:
– Data object
– Model object
http://clopinet.com/isabelle/Projects/modelselect/MFAQ.html

3 CLOP Data Objects
At the Matlab prompt:
cd                              % cd to the CLOP directory, then:
use_spider_clop;
X = rand(10,8);
Y = [1 1 1 1 1 -1 -1 -1 -1 -1]';
D = data(X,Y);                  % constructor
[p,n] = get_dim(D)
get_x(D)
get_y(D)

4 CLOP Model Objects
D is the data object previously defined.
model = kridge;                 % constructor
[resu, model] = train(model, D);
resu, model.W, model.b0
Yhat = D.X*model.W' + model.b0
testD = data(rand(3,8), [-1 -1 1]');
tresu = test(model, testD);
balanced_errate(tresu.X, tresu.Y)

5 Hyperparameters and Chains
A model often has hyperparameters:
default(kridge)
hyper = {'degree=3', 'shrinkage=0.1'};
model = kridge(hyper);
Models can be chained:
model = chain({standardize, kridge(hyper)});
[resu, model] = train(model, D);
tresu = test(model, testD);
balanced_errate(tresu.X, tresu.Y)

6 Hyper-parameters
http://clopinet.com/isabelle/Projects/modelselect/MFAQ.html
Kernel methods (kridge and svc): k(x, y) = (coef0 + x·y)^degree * exp(-gamma ||x - y||^2), with k_ij = k(x_i, x_j) and k_ii ← k_ii + shrinkage.
Naïve Bayes (naive): none.
Neural network (neural): units, shrinkage, maxiter.
Random Forest (rf, Windows only): mtry.
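As a concrete illustration of the kernel formula above, here is a minimal plain-Matlab sketch that builds the kernel matrix for a small random data set; the hyperparameter values are arbitrary examples and the code does not use the CLOP kernel objects themselves.

% Kernel matrix k(x,y) = (coef0 + x.y)^degree * exp(-gamma ||x-y||^2),
% with shrinkage added to the diagonal (k_ii <- k_ii + shrinkage).
X = rand(10, 8);                       % 10 patterns, 8 features
coef0 = 1; degree = 3; gamma = 0; shrinkage = 0.1;   % example settings
G  = X * X';                           % dot products x_i . x_j
sq = sum(X.^2, 2);                     % squared norms
D2 = bsxfun(@plus, sq, sq') - 2*G;     % pairwise squared distances
K  = (coef0 + G).^degree .* exp(-gamma * D2);
K  = K + shrinkage * eye(size(K,1));   % ridge on the diagonal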

7 Lab 2 Getting started with the NIPS 2003 FS challenge

8 The Datasets
Arcene: cancer vs. normal, from mass-spectrometry analysis of blood serum.
Dexter: filter texts about corporate acquisition, from the Reuters collection.
Dorothea: predict which compounds bind to Thrombin, from the KDD Cup 2001.
Gisette: OCR, digit “4” vs. digit “9”, from NIST.
Madelon: artificial data.
http://clopinet.com/isabelle/Projects/NIPS2003/Slides/NIPS2003-Datasets.pdf

9 Data Preparation
Preprocessing and scaling to the numerical range 0 to 999 for continuous data and 0/1 for binary data.
Probes: addition of “random” features distributed similarly to the real features.
Shuffling: randomization of the order of the patterns and the features.
Baseline error rates (errate): training and testing on various data splits with simple methods.
Test set size: number of test examples needed, using the rule of thumb n_test = 100/errate (e.g., a 10% baseline error rate calls for about 1000 test examples).

10 Data Statistics
Dataset    Size      Type            Features   Training ex.   Validation ex.   Test ex.
Arcene     8.7 MB    Dense           10000      100            100              700
Gisette    22.5 MB   Dense           5000       6000           1000             6500
Dexter     0.9 MB    Sparse integer  20000      300            300              2000
Dorothea   4.7 MB    Sparse binary   100000     800            350              800
Madelon    2.9 MB    Dense           500        2000           600              1800

11 ARCENE (the cancer dataset)
Sources: National Cancer Institute (NCI) and Eastern Virginia Medical School (EVMS). Three datasets (1 ovarian cancer, 2 prostate cancer), all preprocessed similarly.
Task: separate cancer vs. normal.

12 DEXTER (the text filtering dataset)
Sources: Carnegie Group, Inc. and Reuters, Ltd. Preprocessing: Thorsten Joachims.
Task: filter “corporate acquisition” texts.
Example: NEW YORK, October 2, 2001 – Instinet Group Incorporated (Nasdaq: INET), the world’s largest electronic agency securities broker, today announced that it has completed the acquisition of ProTrader Group, LP, a provider of advanced trading technologies and electronic brokerage services primarily for retail active traders and hedge funds. The acquisition excludes ProTrader’s proprietary trading business. ProTrader’s 2000 annual revenues exceeded $83 million.

13 DOROTHEA (the Thrombin dataset)
Sources: DuPont Pharmaceuticals Research Laboratories and KDD Cup 2001.
Task: predict compounds that bind to Thrombin.

14 GISETTE (handwritten digits)
Source: National Institute of Standards and Technology (NIST).
Preprocessing: Yann LeCun and collaborators.
Task: separate digits “4” and “9”.

15 MADELON (artificial data)
Source: Isabelle Guyon, inspired by Simon Perkins et al.
Type of data: clusters placed on the vertices of a hypercube.

16 Performance Measures
Confusion matrix (a = true positives, b = false negatives, c = false positives, d = true negatives).
Balanced Error Rate (BER): the average of the error rates of each class, BER = 0.5*(b/(a+b) + c/(c+d)).
Area Under Curve (AUC): the area under the ROC curve obtained by plotting sensitivity a/(a+b) against specificity d/(c+d) for each confidence value, starting at (0,1) and ending at (1,0).
Fraction of Features (FF): the ratio of the number of features selected to the total number of features in the dataset.
Fraction of Probes (FP): the ratio of the number of “garbage features” (probes) selected to the total number of features selected.
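To make the BER formula concrete, here is a minimal Matlab sketch that computes the confusion matrix entries and the BER from ±1 labels; the variable names (Y, Yhat) are illustrative, not part of the CLOP interface.

% Balanced Error Rate from true labels Y and predicted labels Yhat (+1/-1).
Y    = [ 1  1  1 -1 -1 -1]';         % true labels (example)
Yhat = [ 1 -1  1 -1  1 -1]';         % predicted labels (example)
a = sum(Yhat ==  1 & Y ==  1);       % true positives
b = sum(Yhat == -1 & Y ==  1);       % false negatives
c = sum(Yhat ==  1 & Y == -1);       % false positives
d = sum(Yhat == -1 & Y == -1);       % true negatives
BER = 0.5 * (b/(a+b) + c/(c+d))      % here BER = 1/3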

17 BER distribution

18 Power of Feature Selection
Dataset    Best frac. features   Actual frac. probes
ARCENE     5%                    30%
DEXTER     1.5%                  50%
DOROTHEA   0.3%                  50%
GISETTE    18%                   50%
MADELON    1.6%                  96%

19 Visualization
1) Create a heat map of the data matrix: show(D.train);
2) Look at individual patterns: browse(D.train);
3) Make a scatter plot of the first 2 features: show(D.train);
4) Visualize the results:
[Dat, Model] = train(model, D.train);
Dat = test(Model, D.valid);
roc(Dat);

20 BER = f(threshold) (DOROTHEA)
[Figure: BER as a function of the decision threshold; training set theta = -37.40, test set theta = -38.14.]
Without bias adjustment, test BER = 22.54%; with bias adjustment, test BER = 12.37%.
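The bias adjustment illustrated above can be sketched in a few lines of plain Matlab: scan candidate thresholds over the training discriminant values and keep the one that minimizes the training BER. The variables scores and Y are illustrative stand-ins for real classifier outputs and labels; this is not the code of the CLOP bias object.

% Pick the threshold theta minimizing BER on the training set.
scores = randn(100,1);               % stand-in discriminant values
Y = sign(randn(100,1));              % stand-in +1/-1 labels
thetas = sort(scores);
best_ber = inf; best_theta = 0;
for t = 1:numel(thetas)
    Yhat = 2*(scores > thetas(t)) - 1;          % +1 above threshold
    ber  = 0.5*(mean(Yhat(Y==1) == -1) + mean(Yhat(Y==-1) == 1));
    if ber < best_ber, best_ber = ber; best_theta = thetas(t); end
end
% best_theta is then applied unchanged to the test set scores.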

21 ROC curve (DOROTHEA)
[Figure: ROC curve, sensitivity vs. specificity, AUC = 0.91.]

22 Feature Selection (MADELON, pval_max=0.1)
[Figure: feature weight W vs. rank, and FDR vs. rank.]

23 Heat map ARCENE

24 Scatter plots (ARCENE)
chain({standardize, s2n('f_max=2'), normalize, my_svc})                     Test BER = 49%
chain({standardize, s2n('f_max=1100'), normalize, gs('f_max=2'), my_svc})   Test BER = 29.37%

25 Lab 3 Playing with FS filters and classifiers on Madelon and Dexter

26 Lab 3 software
1) Try the examples in the Lab 3 README.m.
2) Taking inspiration from the examples, write a new feature ranking filter object; choose one from Chapter 3 or invent your own (a plain-Matlab sketch of a ranking criterion follows below).
3) Provide the p-value and FDR (using a tabulated distribution or the probe method).
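Here is a minimal plain-Matlab sketch of a feature ranking criterion (the signal-to-noise ratio used by s2n); it only computes the ranking and is not wrapped as a CLOP filter object, which is the part left as the lab exercise.

function [idx, crit] = s2n_rank(X, Y)
% Rank features by signal-to-noise ratio: |mu+ - mu-| / (sigma+ + sigma-).
% X: (patterns x features) data matrix; Y: +1/-1 labels.
mu_pos = mean(X(Y==1,:), 1);   sd_pos = std(X(Y==1,:), 0, 1);
mu_neg = mean(X(Y==-1,:), 1);  sd_neg = std(X(Y==-1,:), 0, 1);
crit = abs(mu_pos - mu_neg) ./ (sd_pos + sd_neg + eps);
[crit, idx] = sort(crit, 'descend');   % best features first
end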

27 Filters: see chapter 3

28 Filters Implemented
@s2n
@Relief
@Ttest
@Pearson (uses the Matlab corrcoef function; gives the same results as Ttest when the classes are balanced)
@Ftest (gives the same results as Ttest; important for the p-values: the Fisher criterion needs to be multiplied by num_patt_per_class, or use anovan)
@aucfs (ranksum test)

29 Evaluation of pval and FDR
Ttest object:
– computes pval analytically
– FDR ~ pval * n_sc / n
probe object:
– takes any feature ranking object as an argument (e.g. s2n, relief, Ttest)
– pval ~ n_sp / n_p
– FDR ~ pval * n_sc / n
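A minimal sketch of the probe estimate of pval described above: after ranking real features together with random probes by the same criterion, the p-value at rank r is approximated by the fraction of probes ranked at or above r (pval ~ n_sp/n_p). The criterion values below are random stand-ins; in the lab they come from a ranking object such as s2n or relief.

% Probe-based p-value estimate along the ranked feature list.
crit_real  = rand(1, 500);                      % stand-in criteria, real features
n_p        = 500;                               % number of random probes
crit_probe = rand(1, n_p);                      % stand-in criteria, probes
[~, order] = sort([crit_real, crit_probe], 'descend');
is_probe   = order > numel(crit_real);          % probe positions in the ranking
n_sp       = cumsum(is_probe);                  % probes selected up to each rank
pval       = n_sp / n_p;                        % estimated p-value at each rank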

30 Analytic vs. probe
[Figure: FDR vs. rank for Arcene, Dexter, Dorothea, Gisette, and Madelon; red = analytic, blue = probe.]

31 Relief vs. Ttest (Madelon)
[Figure: pval vs. rank and FDR vs. rank, for Ttest and Relief.]

32 Lab 4 Playing with feature construction on Gisette

33 Gisette
Handwritten digits. Goal: get familiar with the data and result formats, and make a first submission.
Easiest learning machines for Gisette: naive and svc. Best preprocessing: normalize. Easiest feature selection method: s2n.
Many training examples (6000): unsuitable for kridge unless subsampling is used.
Many features (5000): select features before running neural or rf.

34 Baseline Model
baselineGisette (BER=1.8%, feat=20%):
my_classif=svc({'coef0=1', 'degree=3', 'gamma=0', 'shrinkage=1'});
my_model=chain({normalize, s2n('f_max=1000'), my_classif});

35 Baseline methods
baselineGisette (CV=1.91%, test=1.80%, feat=20%):
my_classif=svc({'coef0=1', 'degree=3', 'gamma=0', 'shrinkage=1'});
my_model=chain({normalize, s2n('f_max=1000'), my_classif});
baselineGisette2 (CV=1.34%, test=1.17%, feat=20%):
my_model=chain({s2n('f_max=1000'), normalize, my_classif});
pixelGisette (CV=1.31%, test=0.91%):
my_classif=svc({'coef0=1', 'degree=4', 'gamma=0', 'shrinkage=0.1'});
my_model=chain({normalize, my_classif});

36 Convolutions GISETTE (pixelGisette_exp_conv)
prepro=my_model{1};
show(prepro.child);
DD=test(prepro,D.train);
browse_digit(DD.X, D.train.Y);
chain({convolve(exp_ker({'dim1=9', 'dim2=9'})), normalize, my_classif})

37 Principal Components
@pca_bank: filter bank object retaining the first f_max principal components of the data matrix.
@kmeans_bank: filter bank containing templates corresponding to f_max cluster centers.
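As an illustration of what a PCA-based filter bank does conceptually, the sketch below projects the centered data on its first f_max principal components in plain Matlab; this is not the @pca_bank object interface, just the underlying computation.

% Retain the first f_max principal components of the data matrix.
X = rand(100, 50);                    % stand-in data (patterns x features)
f_max = 10;                           % number of components to keep
Xc = bsxfun(@minus, X, mean(X,1));    % center each feature
[~, ~, V] = svd(Xc, 'econ');          % columns of V = principal directions
Xpca = Xc * V(:, 1:f_max);            % constructed features (projections)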

38 Hadamard bank
@hadamard_bank: filter bank object performing a Hadamard transform.

39 Fourier Transform @fourier: Two dimensional Fourier transform.
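For intuition about the 2-D Fourier transform as a feature constructor, here is a small plain-Matlab sketch applied to one pixel-map pattern; it assumes the pattern can be reshaped to a 28x28 image (as in the pixel representation of GISETTE) and it does not use the @fourier object interface.

% 2-D Fourier magnitude spectrum of one pixel-map pattern.
x    = rand(1, 784);                  % stand-in for one 28x28 pattern
img  = reshape(x, 28, 28);            % back to a 2-D pixel map
F    = fft2(img);                     % two-dimensional Fourier transform
feat = abs(fftshift(F));              % magnitude, low frequencies centered
feat = feat(:)';                      % flatten into a feature vector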

40 Epilogue Becoming a pro and playing with other datasets

41 Baseline Methods for the Feature Extraction Class (Isabelle Guyon)

BACKGROUND: We present supplementary course material complementing the book “Feature Extraction, Fundamentals and Applications”, I. Guyon et al., Eds., to appear in Springer. Classical algorithms of feature extraction were reviewed in class. More attention was given to feature selection than to feature construction because of the recent success of methods involving a large number of “low-level” features. The book includes the results of a NIPS 2003 feature selection challenge. The students learned techniques employed by the best challengers and tried to match the best performances. A Matlab toolbox was provided with sample code. The students could make post-challenge entries at http://www.nipsfsc.ecs.soton.ac.uk/.
Challenge: good performance + few features. Tasks: two-class classification. Data split: training/validation/test. Valid entry: results on all 5 datasets.

DATASETS:
Dataset    Size (MB)   Type     Features   Training   Validation   Test
Arcene     8.7         Dense    10000      100        100          700
Gisette    22.5        Dense    5000       6000       1000         6500
Dexter     0.9         Sparse   20000      300        300          2000
Dorothea   4.7         Sparse   100000     800        350          800
Madelon    2.9         Dense    500        2000       600          1800

METHODS: Scoring: ranking according to the test set balanced error rate (BER), i.e. the average of the positive class and negative class error rates; ties broken by the feature set size. Learning objects: CLOP learning objects implemented in Matlab, with two simple abstractions: data and algorithm. Download: http://www.modelselect.inf.ethz.ch/models.php. Task of the students: a baseline method is provided, with performance BER0 and n0 features; get BER<BER0, or BER=BER0 with n<n0. Extra credit for beating the best challenge entry. OK to use the validation set labels for training.

BASELINE MODELS:
GISETTE (Best BER=1.26 ± 0.14% – n0=1000 (20%) – BER0=1.80%):
my_classif=svc({'coef0=1', 'degree=3', 'gamma=0', 'shrinkage=1'});
my_model=chain({normalize, s2n('f_max=1000'), my_classif});
ARCENE (Best BER=11.9 ± 1.2% – n0=1100 (11%) – BER0=14.7%):
my_svc=svc({'coef0=1', 'degree=3', 'gamma=0', 'shrinkage=0.1'});
my_model=chain({standardize, s2n('f_max=1100'), normalize, my_svc})
MADELON (Best BER=6.22 ± 0.57% – n0=20 (4%) – BER0=7.33%):
my_classif=svc({'coef0=1', 'degree=0', 'gamma=1', 'shrinkage=1'});
my_model=chain({probe(relief,{'p_num=2000', 'pval_max=0'}), standardize, my_classif})
DOROTHEA (Best BER=8.54 ± 0.99% – n0=1000 (1%) – BER0=12.37%):
my_model=chain({TP('f_max=1000'), naive, bias});
DEXTER (Best BER=3.30 ± 0.40% – n0=300 (1.5%) – BER0=5%):
my_classif=svc({'coef0=1', 'degree=1', 'gamma=0', 'shrinkage=0.5'});
my_model=chain({s2n('f_max=300'), normalize, my_classif})

RESULTS: [Figures: example patterns and score profiles for ARCENE (cancer diagnosis), DEXTER (text categorization), GISETTE (digit recognition), DOROTHEA (drug discovery), and MADELON (artificial data).]

CONCLUSIONS: The performances of the challengers could be matched with the CLOP library. Simple filter methods (S2N and Relief) were sufficient to get a space dimensionality reduction comparable to what the winners obtained. SVMs are easy to use and generally work better than other methods. We experimented with Gisette, adding prior knowledge about the task, and could outperform the winners. Further work includes using prior knowledge for other datasets.

42 Best student results http://clopinet.com/isabelle/Projects/ETH/Feature_Selection_w_CLOP.html

43 Agnostic Learning vs. Prior Knowledge challenge
Isabelle Guyon, Amir Saffari, Gideon Dror, Gavin Cawley, Olivier Guyon, and many other volunteers; see http://www.agnostic.inf.ethz.ch/credits.php
Open until August 1st, 2007.

44 Datasets
Dataset   Domain          Type           Features   Training ex.   Validation ex.   Test ex.
ADA       Marketing       Dense          48         4147           415              41471
GINA      Digits          Dense          970        3153           315              31532
HIVA      Drug discovery  Dense          1617       3845           384              38449
NOVA      Text classif.   Sparse binary  16969      1754           175              17537
SYLVA     Ecology         Dense          216        13086          1308             130858
http://www.agnostic.inf.ethz.ch

45 ADA (the marketing database)
Task: discover high-revenue people from census data. Two-class problem.
Source: Census Bureau, “Adult” database from the UCI machine-learning repository.
Features: 14 original attributes including age, workclass, education, marital status, occupation, and native country. Continuous, binary and categorical features.

46 GINA (the digit database)
Task: handwritten digit recognition; separate the odd from the even digits. Two-class problem with heterogeneous classes.
Source: MNIST database formatted by LeCun and Cortes.
Features: 28x28 pixel map.

47 HIVA (the HIV database)
Task: find compounds active against the AIDS HIV infection. We brought it back to a two-class problem (active vs. inactive), but provide the original labels (active, moderately active, and inactive).
Data source: National Cancer Institute.
Data representation: the compounds are represented by their 3-D molecular structure.

48 NOVA (the text classification database)
Task: classify newsgroup emails into politics or religion vs. other topics.
Source: the 20-Newsgroups dataset from the UCI machine-learning repository.
Data representation: the raw text, with an estimated 17000-word vocabulary.
Example:
Subject: Re: Goalie masks
Lines: 21
Tom Barrasso wore a great mask, one time, last season. He unveiled it at a game in Boston. It was all black, with Pgh city scenes on it. The "Golden Triangle" graced the top, along with a steel mill on one side and the Civic Arena on the other. On the back of the helmet was the old Pens' logo, the current (at the time) Pens logo, and a space for the "new" logo. A great mask done in by a goalie's superstition.
Lori

49 SYLVA (the ecology database)
Task: classify forest cover types into Ponderosa pine vs. everything else.
Source: US Forest Service (USFS).
Data representation: forest cover type for 30 x 30 meter cells, encoded with 108 features (elevation, hill shade, wilderness type, soil type, etc.)

50 BER distribution (March 1st)
[Figure: BER distributions for the agnostic learning and prior knowledge tracks.]
The black vertical line indicates the best ranked entry (only the last 5 entries of each participant were ranked). Beware of overfitting!

51 CLOP models

52 Preprocessing and FS

53 Model grouping
for k=1:10
    base_model{k}=chain({standardize, naive});
end
my_model=ensemble(base_model);

54 CLOP models (best entrant)
Dataset   CLOP models selected
ADA       2*{sns,std,norm,gentleboost(neural),bias}; 2*{std,norm,gentleboost(kridge),bias}; 1*{rf,bias}
GINA      6*{std,gs,svc(degree=1)}; 3*{std,svc(degree=2)}
HIVA      3*{norm,svc(degree=1),bias}
NOVA      5*{norm,gentleboost(kridge),bias}
SYLVA     4*{std,norm,gentleboost(neural),bias}; 4*{std,neural}; 1*{rf,bias}
Juha Reunanen, cross-indexing-7
sns = shift’n’scale, std = standardize, norm = normalize (some details of hyperparameters not shown)

55 CLOP models (2nd best entrant)
Dataset   CLOP models selected
ADA       {sns, std, norm, neural(units=5), bias}
GINA      {norm, svc(degree=5, shrinkage=0.01), bias}
HIVA      {std, norm, gentleboost(kridge), bias}
NOVA      {norm, gentleboost(neural), bias}
SYLVA     {std, norm, neural(units=1), bias}
Hugo Jair Escalante Balderas, BRun2311062
sns = shift’n’scale, std = standardize, norm = normalize (some details of hyperparameters not shown)
Note: the entry Boosting_1_001_x900 gave better results, but was older.
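To connect these tables back to the CLOP syntax used earlier, here is one way the ADA entry of the second-best entrant might be written as a chain; the constructor name shift_n_scale for “sns” and the exact hyperparameter strings are assumptions, so check them against the CLOP documentation.

% Possible CLOP chain for the ADA model {sns, std, norm, neural(units=5), bias}.
my_model = chain({shift_n_scale, standardize, normalize, ...
                  neural('units=5'), bias});
[resu, my_model] = train(my_model, D.train);   % D.train: ADA training data
tresu = test(my_model, D.valid);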

