
1 Using Error-Correcting Codes For Text Classification Rayid Ghani rayid@cs.cmu.edu Center for Automated Learning & Discovery, Carnegie Mellon University This presentation can be accessed at http://www.cs.cmu.edu/~rayid/icmltalk

2 Outline Review of ECOC Previous Work Types of Codes Experimental Results Semi-Theoretical Model Drawbacks Conclusions & Work in Progress

3 Overview of ECOC Decompose a multiclass problem into multiple binary problems The conversion can be independent of or dependent on the data (it does depend on the number of classes) Any learner that can learn binary functions can then be used to learn the original multivalued function
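
A toy illustration of the decomposition (values made up for this transcript, not from the talk): four classes encoded with 7-bit codewords, where each column of the matrix defines one binary problem.

```python
import numpy as np

# Toy 4-class, 7-bit code matrix (illustrative values, not from the talk).
# Row i is the codeword for class i; column j defines binary problem j:
# "does this example belong to a class whose bit j is 1?"
M = np.array([
    [0, 0, 0, 0, 0, 0, 0],   # class A
    [0, 1, 1, 1, 1, 0, 1],   # class B
    [1, 0, 1, 1, 0, 1, 1],   # class C
    [1, 1, 0, 0, 1, 1, 0],   # class D
])
# The minimum pairwise row distance here is 4, so this code corrects 1 bit error.
```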

4 Training ECOC Given m distinct classes, create an m x n binary matrix M. Each class is assigned ONE row of M. Each column of the matrix divides the classes into TWO groups. Train the base classifiers to learn the n binary problems.
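
A minimal training sketch of the steps above, assuming scikit-learn's MultinomialNB as the binary base learner; `train_ecoc` and the variable names are illustrative, not the author's code.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def train_ecoc(X, y, M):
    """X: feature matrix; y: class indices in {0..m-1}; M: m x n code matrix."""
    classifiers = []
    for j in range(M.shape[1]):
        # Column j relabels every example with the j-th code bit of its class.
        y_binary = M[y, j]
        classifiers.append(MultinomialNB().fit(X, y_binary))
    return classifiers
```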

5 Testing ECOC To test a new instance: Apply each of the n classifiers to the new instance Combine the predictions to obtain a binary string (codeword) for the new point Classify to the class with the nearest codeword (usually Hamming distance is used as the distance measure)
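
And the matching decoding step (same assumed names as the training sketch above): collect one predicted bit per classifier, then pick the class whose codeword is nearest in Hamming distance.

```python
import numpy as np

def predict_ecoc(X, classifiers, M):
    # One predicted bit per base classifier gives a codeword per example.
    bits = np.column_stack([clf.predict(X) for clf in classifiers])
    # Hamming distance from each predicted codeword to every row of M.
    distances = (bits[:, None, :] != M[None, :, :]).sum(axis=2)
    return distances.argmin(axis=1)   # index of the nearest-codeword class
```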

6 ECOC Picture [Figure: illustration of the ECOC matrix partitioning classes A, B, and C]

7 Previous Work Combine with Boosting – ADABOOST.OC (Schapire, 1997), (Guruswami & Sahai, 1999) Local Learners Text Classification (Berger, 1999)

8 Experimental Setup Generate the code BCH Codes Choose a Base Learner Naive Bayes Classifier as used in text classification tasks (McCallum & Nigam 1998)

9 Dataset Industry Sector Dataset Consists of company web pages classified into 105 economic sectors Standard stoplist No Stemming Skip all MIME headers and HTML tags Experimental approach similar to McCallum et al. (1998) for comparison purposes.

10 Results Classification Accuracies on five random 50-50 train-test splits of the Industry Sector dataset with a vocabulary size of 10000. ECOC is 88% accurate!

11 Results Industry Sector Data Set

    Naïve Bayes   Shrinkage [1]   ME [2]   ME w/ Prior [3]   ECOC 63-bit
    66.1%         76%             79%      81.1%             88.5%

ECOC reduces the error of the Naïve Bayes Classifier by 66%
[1] (McCallum et al. 1998)   [2, 3] (Nigam et al. 1999)

12 The Longer the Better! Table 2: Average classification accuracy on 5 random 50-50 train-test splits of the Industry Sector dataset with a vocabulary size of 10000 words selected using Information Gain. Longer codes mean larger codeword separation. The minimum Hamming distance of a code C is the smallest distance between any pair of distinct codewords in C. If the minimum Hamming distance is h, then the code can correct up to ⌊(h-1)/2⌋ errors.
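
A small sketch of the separation argument (illustrative helper, not from the talk): compute the minimum Hamming distance of a code matrix and the number of bit errors it can correct.

```python
from itertools import combinations

def min_hamming_distance(M):
    # M: 0/1 numpy code matrix (e.g. from the sketches above).
    # Smallest Hamming distance between any pair of distinct codewords (rows).
    return min(int((M[i] != M[j]).sum())
               for i, j in combinations(range(len(M)), 2))

h = min_hamming_distance(M)     # e.g. h = 5 for the 15-bit BCH code used later
correctable = (h - 1) // 2      # floor((h - 1) / 2) bit errors can be corrected
```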

13 Size Matters?

14 Size does NOT matter!

15 Semi-Theoretical Model Model ECOC by a Binomial Distribution B(n,p) n = length of the code p = probability of each bit being classified incorrectly

    # of Bits   H_min   E_max   P_ave   Accuracy
    15          5       2       .85     .59
    15          5       2       .89     .80
    15          5       2       .91     .84
    31          11      5       .85     .67
    31          11      5       .89     .91
    31          11      5       .91     .94
    63          31      15      .89     .99
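
A sketch of the calculation behind this table, on my reading of the model (assuming P_ave is the average per-bit accuracy, so the per-bit error rate is 1 - P_ave): the predicted accuracy is the probability that a B(n, 1 - P_ave) draw of bit errors stays at or below E_max = ⌊(H_min - 1)/2⌋.

```python
from math import comb

def predicted_accuracy(n, h_min, p_bit_correct):
    # P(at most E_max of the n bits are wrong), assuming independent bit errors.
    e_max = (h_min - 1) // 2
    p_err = 1.0 - p_bit_correct
    return sum(comb(n, k) * p_err**k * (1.0 - p_err)**(n - k)
               for k in range(e_max + 1))

print(round(predicted_accuracy(15, 5, 0.91), 2))   # ~0.85, close to the .84 row
```

The independence assumption is exactly why column separation (next slides) matters: correlated bit errors break the binomial model.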

17 Types of Codes Data-Independent: Algebraic, Random, Hand-Constructed. Data-Dependent: Adaptive.

18 What is a Good Code? Row Separation Column Separation (Independence of errors for each binary classifier) Efficiency (for long codes)
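
A quick way to check both separation properties for a candidate matrix (an illustrative check, not the talk's code generator):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(105, 15))    # random 15-bit code for 105 classes

def min_pairwise_hd(vectors):
    return min(int((a != b).sum()) for a, b in combinations(list(vectors), 2))

row_sep = min_pairwise_hd(M)      # larger row separation corrects more errors
col_sep = min_pairwise_hd(M.T)    # distant columns give less correlated errors
# (Columns should also stay far from the COMPLEMENTS of other columns, since a
#  column and its complement define the same binary problem.)
```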

19 Choosing Codes

                 Random                        Algebraic
    Row Sep      On average (for long codes)   Guaranteed
    Col Sep      On average (for long codes)   Can be guaranteed
    Efficiency   No                            Yes

20 Experimental Results

    Code            Min Row HD   Max Row HD   Min Col HD   Max Col HD   Error Rate
    15-Bit BCH      5            15           49           64           20.6%
    19-Bit Hybrid   5            18           15           69           22.3%
    15-bit Random   2 (1.5)      13           42           60           24.1%

21 Interesting Observations The Naïve Bayes Classifier does not give good probability estimates; using ECOC results in better estimates.

22 Drawbacks Can be computationally expensive Random codes throw away the real-world nature of the data by picking random partitions to create artificial binary problems

23 Conclusion Improves classification accuracy considerably! Can be used when training data is sparse Algebraic codes perform better than random codes for a given code length Hand-constructed codes are not the answer


25 Future Work Combine ECOC with Co-Training or Shrinkage Methods Automatically construct optimal / adaptive codes Sufficient and Necessary conditions for optimal behavior


