
1 A Robust Bagging Method using Median as a Combination Rule
Zaman Md. Faisal and Hideo Hirose
Department of Information Design and Informatics, Kyushu Institute of Technology, Fukuoka, Japan

2 Contents of the Study
1. Bagging Algorithm
2. Comparative View of Bagging and Bragging Procedure
3. Nice Bagging Algorithm
4. Robust Bagging (Robag) Procedure
5. Classifiers Used in the Study
6. Datasets Used in the Study
7. Relative Improvement Measure
8. Results Based on Different Classifiers
9. Conclusion
10. References

3 Objectives and Features of the Study
Objectives of the study:
1) Propose a robust bagging algorithm that (a) performs comparatively well with linear classifiers (FLD, NMC) and (b) overcomes the overfitting problem.
Features of the study:
1) A new bagging algorithm, named Robust Bagging (Robag).
2) A Relative Improvement Measure (RIM) to quantify the improvement of bagging algorithms over their base classifiers.
3) A comparison of four bagging algorithms, namely bagging [1], bragging [2], nice bagging [3], and Robag, using the RIM.

4 Standard Bagging Algorithm
Proposed by Leo Breiman in 1994.
The bagging algorithm:
1. Create B bootstrap replicates of the dataset.
2. Fit a model to each of the replicates.
3. Average (or vote) the predictions of the B models.
In each bootstrap sample, about 63% of the observations are drawn at least once; the remaining ~37% are left out (these are called the out-of-bag samples). To exploit the variation between replicates, the base classifier should be unstable. Examples of unstable classifiers are decision trees, neural networks, and MARS.
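A minimal sketch of this loop, using scikit-learn's decision tree as the unstable base learner (the function name, the tree choice, and binary {0, 1} labels are assumptions for illustration, not part of the slides):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def bagging_predict(X_train, y_train, X_test, B=50):
    """Standard bagging sketch: B bootstrap replicates, majority vote.
    Binary labels in {0, 1} are assumed for the voting step."""
    votes = np.zeros((B, len(X_test)))
    for b in range(B):
        # 1. Create a bootstrap replicate of the training set.
        X_b, y_b = resample(X_train, y_train, random_state=b)
        # 2. Fit an unstable base model (here a decision tree) to it.
        tree = DecisionTreeClassifier(random_state=b).fit(X_b, y_b)
        # 3. Record its predictions on the test points.
        votes[b] = tree.predict(X_test)
    # Majority vote across the B models (mean of 0/1 votes, thresholded).
    return (votes.mean(axis=0) > 0.5).astype(int)
```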

5 Bragging Algorithm
Bragging (Bootstrap Robust Aggregating) was proposed by P. Bühlmann in 2003 [2]. In bragging:
1) A robust location estimator, the median, is used instead of the mean to combine the multiple classifiers.
2) It was proposed to improve the performance of MARS (Multivariate Adaptive Regression Splines).
3) In the case of CART, it performs quite similarly to bagging.
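As a small illustration of the bragging combiner (the score encoding is an assumption; the slides do not fix how classifier outputs are represented):

```python
import numpy as np

def bragging_combine(scores):
    """Bragging combiner: the median of the B classifier outputs
    replaces the mean, so a few extreme outputs are simply ignored.
    scores: array of shape (B, n_samples) of real-valued outputs."""
    return np.median(scores, axis=0)

# One badly behaved replicate does not move the median:
scores = np.array([[0.9, 0.1],
                   [0.8, 0.2],
                   [0.1, 0.9]])   # outlier replicate
print(bragging_combine(scores))   # -> [0.8 0.2]
```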

6 Bagging and Bragging Algorithm (Comparative View)
[Diagram] Using bootstrapping, multiple training sets T_1, ..., T_B are created, with out-of-bag samples O_1, ..., O_B; multiple versions of the base classifier C_1, ..., C_B are trained on them; the classifier outputs are combined into C_COM by majority voting (standard bagging) or by the median (bragging).

7 Nice Bagging
The nice bagging algorithm was proposed by Skurichina and Duin in 1998 [3]. They proposed to:
a) Use a validation (tuning) set to validate the classifiers before combining.
b) Use the bootstrapped training sets for the validation.
c) Select only the "nice" classifiers, i.e., those whose misclassification error is less than the apparent error (APER).
d) Combine the coefficients of the selected linear classifiers using the average rule.
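A sketch of the selection step under these rules, with scikit-learn's LDA standing in for a linear classifier (the function name and classifier choice are illustrative assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.utils import resample

def nice_classifiers(X_train, y_train, B=50):
    """Nice-bagging selection sketch: keep only the classifiers whose
    error on their own bootstrap training set is below the apparent
    error (APER) of the base classifier on the full training data."""
    base = LinearDiscriminantAnalysis().fit(X_train, y_train)
    aper = 1.0 - base.score(X_train, y_train)   # apparent error
    nice = []
    for b in range(B):
        X_b, y_b = resample(X_train, y_train, random_state=b)
        clf = LinearDiscriminantAnalysis().fit(X_b, y_b)
        if 1.0 - clf.score(X_b, y_b) < aper:    # validate before combining
            nice.append(clf)
    # Nice bagging then averages the coefficients of these classifiers.
    return nice
```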

8 Robust Bagging (Robag)
In the Robag algorithm:
1) We used the out-of-bag (OOB) samples as the validation set, since for validation it is better to use data that are independent of the training set (the OOB samples are independent of the bootstrap training samples).
2) We used the median as the combiner, so that any extreme output yielded by a classifier is automatically filtered out.
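A minimal sketch of the procedure as described: OOB validation plus median combination. The APER acceptance threshold is borrowed from the nice-bagging rule and, like the LDA base learner and binary labels, is an assumption here, not a detail confirmed by the slides:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def robag_predict(X_train, y_train, X_test, B=50, seed=0):
    """Robag sketch: validate each bootstrapped classifier on its
    out-of-bag (OOB) samples, then median-combine the survivors.
    Assumes numpy arrays and binary labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    base = LinearDiscriminantAnalysis().fit(X_train, y_train)
    aper = 1.0 - base.score(X_train, y_train)
    scores = []
    for _ in range(B):
        idx = rng.integers(0, n, n)                # bootstrap indices
        oob = np.setdiff1d(np.arange(n), idx)      # out-of-bag indices
        clf = LinearDiscriminantAnalysis().fit(X_train[idx], y_train[idx])
        # The OOB set is independent of the bootstrap training sample,
        # so it gives an honest validation error.
        if len(oob) and 1.0 - clf.score(X_train[oob], y_train[oob]) < aper:
            scores.append(clf.predict_proba(X_test)[:, 1])
    if not scores:                                 # fall back to the base model
        scores.append(base.predict_proba(X_test)[:, 1])
    # The median filters out extreme classifier outputs automatically.
    return (np.median(scores, axis=0) > 0.5).astype(int)
```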

9 Robust Bagging (Robag), contd.
[Diagram] Using bootstrapping, multiple training sets T_1, ..., T_B are created; the out-of-bag samples O_1, ..., O_B serve as validation sets V_1, ..., V_B; multiple versions of the base classifier C_1, ..., C_B are trained and validated on the OOB samples; the outputs of the validated classifiers are combined with the median into the combined classifier C_COM.

10 Relative Improvement Measure (RIM)
To check whether bagging or any other variant of bagging improves or degrades the performance of the base classifier, we use a relative improvement measure:
Relative Improvement = (Err_base - Err_bagging) / Err_base
Here, Err_base = test error of the base classifier, and Err_bagging = test error of the bagged classifier. The RIM therefore measures the decrease (or increase) in the test error of the bagged classifier relative to the base classifier.
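In code the measure is a one-liner (a trivial sketch; the function name is mine):

```python
def relative_improvement(err_base, err_bagging):
    """RIM = (Err_base - Err_bagging) / Err_base.
    Positive: the bagged classifier lowered the test error;
    negative: it degraded the base classifier.
    Assumes err_base > 0."""
    return (err_base - err_bagging) / err_base
```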

11 Classifiers and the Datasets Used
We used one unstable classifier, a tree classifier (CART), and two stable (linear) classifiers, i.e., the Fisher Linear Discriminant (FLD) and the Nearest Mean Classifier (NMC). The stable classifiers are used to check the performance of the Robag algorithm, since bagging algorithms usually do not perform well with stable classifiers.
We use 5 well-known datasets from the UCI (University of California, Irvine) Machine Learning Repository [4]:

Dataset         N     q
Austral         690   14
Breast Cancer   699   9
Diabetes        768   8
Ionosphere      351   33
Spectral        267   23

(N = no. of observations, q = no. of features)

12 Experimental Setup
In the experiments:
1) We randomly divide each dataset into two parts: a training part (80% of the data) and a testing part (20%).
2) We repeat this random partition 50 times.
3) In each partition we: a) calculate the APER; b) use 50 bootstrap replicates to generate the bagged classifiers; c) calculate the RIM.
4) We average the results over the 50 iterations.
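A sketch of this evaluation loop under the stated protocol; `bagged_error` is a hypothetical stand-in for whichever bagging variant is being scored, and the tree base classifier matches the CART setting:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def mean_rim(X, y, bagged_error, n_splits=50):
    """Average the RIM over 50 random 80%/20% partitions.
    bagged_error(X_tr, y_tr, X_te, y_te) should fit an ensemble of
    50 bootstrap replicates and return its test error."""
    rims = []
    for seed in range(n_splits):
        # 1) Random 80/20 split of the data.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        # 2) Test error of the base classifier (a tree, as for CART).
        base = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        err_base = 1.0 - base.score(X_te, y_te)
        # 3) Test error of the bagged classifier, then the RIM.
        rims.append((err_base - bagged_error(X_tr, y_tr, X_te, y_te)) / err_base)
    # 4) Average over the 50 partitions.
    return float(np.mean(rims))
```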

13 Results of CART
[Table: mean relative improvements in error rate of Bagging, Bragging, Nice Bagging, and Robag with respect to a classification tree, on the Austral, Breast Cancer, Diabetes, Ionosphere, and Spectral datasets.]

14 Results of FLD
[Table: mean relative improvements in error rate of Bagging, Bragging, Nice Bagging, and Robag with respect to an FLD, on the Austral, Breast Cancer, Diabetes, Ionosphere, and Spectral datasets.]

15 Results of NMC
[Table: mean relative improvements in error rate of Bagging, Bragging, Nice Bagging, and Robag with respect to an NMC, on the Austral, Breast Cancer, Diabetes, Ionosphere, and Spectral datasets.]

16 Conclusion
From the results we see that the Robag algorithm:
1) Performed nearly the same as bagging and its variants on 2 datasets, and better on 1 dataset, when applied to CART.
2) Performed well on 4 datasets when applied to FLD.
3) Performed well on 4 datasets when applied to NMC.
So we can say that, when applied with linear classifiers, the Robag algorithm performed better than the other bagging variants.

17 References
[1] L. Breiman, "Bagging predictors," Machine Learning, 24, 1996, pp. 123-140.
[2] P. Bühlmann, "Bagging, subbagging and bragging for improving some prediction algorithms," in Recent Advances and Trends in Nonparametric Statistics, M. G. Akritas and D. N. Politis (Eds.), Elsevier, 2003.
[3] M. Skurichina and R. P. W. Duin, "Bagging for linear classifiers," Pattern Recognition, 31, 1998, pp. 909-930.
[4] UCI Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html.

