1 Partially Supervised Classification of Text Documents by Bing Liu, Philip Yu, and Xiaoli Li Presented by: Rick Knowles 7 April 2005

2 Agenda
- Problem Statement
- Related Work
- Theoretical Foundations
- Proposed Technique
- Evaluation
- Conclusions

3 Problem Statement: Common Approach
Text categorization: the automated assignment of text documents to pre-defined classes.
Common approach: supervised learning
- Manually label a set of documents with the pre-defined classes
- Use a learning algorithm to build a classifier
[Figure: training documents labeled positive (+) and negative (-)]

4 Problem Statement: Common Approach (cont.)
Problem: the bottleneck is the large number of labeled training documents needed to build the classifier.
Nigam et al. have shown that using a large amount of unlabeled data can help.
[Figure: a few labeled (+/-) documents among many unlabeled (.) documents]

5 A Different Approach: Partially Supervised Classification
Two-class problem: positive and unlabeled.
Key feature: there are no labeled negative documents.
Can be posed as a constrained optimization problem: a function that correctly classifies all positive documents while minimizing the number of mixed documents classified as positive will, with high probability, have a small expected error rate.
Exemplar: finding matching (i.e., positive) documents in a large collection such as the Web; matching documents are positive, all others are negative.
[Figure: a small positive set alongside a large set of unlabeled documents]

6 Related Work
Text classification techniques:
- Naïve Bayes
- K-nearest neighbor
- Support vector machines
Each requires labeled data for all classes.
The problem is similar to traditional information retrieval, which rank-orders documents according to their similarity to the query document but does not perform document classification.

7 Theoretical Foundations
Some discussion regarding the theoretical foundations, focused primarily on:
- Minimization of the probability of error
- Expected recall and precision of functions
Pr[f(X) ≠ Y] = Pr[f(X) = 1] - Pr[Y = 1] + 2 Pr[f(X) = 0 | Y = 1] Pr[Y = 1]   (1)
Painful, painful… but it did show you can build accurate classifiers with high probability when sufficient documents in P (the positive document set) and M (the unlabeled set) are available.
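
For reference, identity (1) follows just from splitting the error event by the true binary label; a short derivation (not on the original slide):

```latex
\begin{align*}
\Pr[f(X)\neq Y] &= \Pr[f(X)=1,\,Y=0] + \Pr[f(X)=0,\,Y=1] \\
\Pr[f(X)=1,\,Y=0] &= \Pr[f(X)=1] - \Pr[f(X)=1,\,Y=1]
                   = \Pr[f(X)=1] - \bigl(\Pr[Y=1] - \Pr[f(X)=0,\,Y=1]\bigr) \\
\Rightarrow\quad \Pr[f(X)\neq Y] &= \Pr[f(X)=1] - \Pr[Y=1] + 2\,\Pr[f(X)=0 \mid Y=1]\,\Pr[Y=1]
\end{align*}
```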

8 Theoretical Foundations (cont.)
Two serious practical drawbacks to the theoretical method:
- The constrained optimization problem may not be easy to solve for the function class in which we are interested
- It is not easy to choose a desired recall level that will give a good classifier for the function class we are using

9 Proposed Technique
Theory be darned! The paper introduces a practical technique based on the naïve Bayes classifier and the Expectation-Maximization (EM) algorithm.
After introducing the general technique, the authors offer an enhancement using "spies".

10 Proposed Technique: Terms
- D is the set of training documents
- V = {w_1, w_2, …, w_|V|} is the set of all words considered for classification
- w_{d_i,k} is the word in position k of document d_i
- N(w_t, d_i) is the number of times w_t occurs in d_i
- C = {c_1, c_2} is the set of predefined classes
- P is the set of positive documents
- M is the set of unlabeled (mixed) documents
- S is the set of spy documents
- The posterior probability Pr[c_j | d_i] ∈ {0, 1} depends on the class label of the document
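
A minimal sketch of how these sets might be represented in code (illustrative names, not from the paper): each document is a bag of word counts, so N(w_t, d_i) is just a dictionary lookup.

```python
from collections import Counter

def make_doc(text):
    """Represent a document d_i by its word counts, so N(w_t, d_i) = doc[w_t]."""
    return Counter(text.lower().split())

P = [make_doc("positive example document"), make_doc("another positive document")]
M = [make_doc("some unlabeled text"), make_doc("more mixed content here")]
V = sorted(set(w for d in P + M for w in d))   # vocabulary over all documents
print(P[0]["document"])                        # N(w_t, d_i) for w_t = "document", d_i = first doc in P
```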

11 Proposed Technique: Naïve Bayes Classifier (NB-C)
Pr[c_j] = Σ_{i=1..|D|} Pr[c_j | d_i] / |D|   (2)
Pr[w_t | c_j] = (1 + Σ_{i=1..|D|} N(w_t, d_i) Pr[c_j | d_i]) / (|V| + Σ_{s=1..|V|} Σ_{i=1..|D|} N(w_s, d_i) Pr[c_j | d_i])   (3)
Assuming the words are independent given the class:
Pr[c_j | d_i] = Pr[c_j] Π_{k=1..|d_i|} Pr[w_{d_i,k} | c_j] / Σ_{r=1..|C|} Pr[c_r] Π_{k=1..|d_i|} Pr[w_{d_i,k} | c_r]   (4)
The class with the highest Pr[c_j | d_i] is assigned as the class of the document.
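
A minimal Python sketch of equations (2)-(4) (illustrative code, not the authors' implementation); the fractional labels pr_c1[i] = Pr[c_1 | d_i] are taken as given: 1 for documents in P, and whatever EM currently assigns for documents in the mixed set.

```python
import math
from collections import Counter

def train_nb(docs, pr_c1):
    """Equations (2) and (3): docs is a list of Counter word counts,
    pr_c1[i] = Pr[c1 | d_i] in [0, 1]; class c2 receives weight 1 - pr_c1[i]."""
    vocab = set(w for d in docs for w in d)
    prior = [sum(pr_c1) / len(docs), sum(1 - p for p in pr_c1) / len(docs)]   # eq. (2)
    cond = []
    for weights in (pr_c1, [1 - p for p in pr_c1]):
        counts = {w: 1.0 for w in vocab}                  # the Laplace "+1" in eq. (3)
        for d, wgt in zip(docs, weights):
            for w, c in d.items():
                counts[w] += wgt * c
        total = sum(counts.values())                      # |V| + double sum in eq. (3)
        cond.append({w: n / total for w, n in counts.items()})
    return prior, cond

def posterior_c1(doc, prior, cond):
    """Equation (4), computed in log space to avoid underflow; out-of-vocabulary words are skipped."""
    logp = [math.log(prior[j]) + sum(c * math.log(cond[j][w])
                                     for w, c in doc.items() if w in cond[j])
            for j in (0, 1)]
    m = max(logp)
    exp = [math.exp(x - m) for x in logp]
    return exp[0] / (exp[0] + exp[1])
```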

12 Proposed Technique: EM Algorithm
A popular class of iterative algorithms for maximum-likelihood estimation in problems with incomplete data. Two steps:
- Expectation: fills in the missing data
- Maximization: the parameters are estimated
Rinse and repeat.
Using an NB-C, (4) is the E step (it fills in the probabilistic class labels) and (2) and (3) are the M step (parameter re-estimation).
The probability of a class now takes a value in [0, 1] instead of {0, 1}.

13 Proposed Technique: EM Algorithm (cont.)
All positive documents have the class value c_1.
We need to determine the class value of each document in the mixed set; EM can help assign a probabilistic class label Pr[c_1 | d_j] and Pr[c_2 | d_j] to each document d_j in the mixed set.
After a number of iterations, all the probabilities will converge.

14 Proposed Technique: Step 1 - Reinitialization (I-EM)
Build an initial NB-C using the document sets M and P:
- For documents in P, Pr[c_1 | d_j] = 1 and Pr[c_2 | d_j] = 0
- For documents in M, Pr[c_1 | d_j] = 0 and Pr[c_2 | d_j] = 1
Loop while the classifier parameters change:
- For each document d_j in M:
  - Compute Pr[c_1 | d_j] using the current NB-C
  - Pr[c_2 | d_j] = 1 - Pr[c_1 | d_j]
- Update Pr[w_t | c_1] and Pr[c_1] given the probabilistically assigned classes for the documents in M and the fixed labels in P (a new NB-C is built in the process)
Works well on easy datasets (see the sketch below). The problem is that our initialization is strongly biased towards positive documents.
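
A compact sketch of this loop, reusing the hypothetical train_nb / posterior_c1 helpers from the previous sketch; a fixed iteration count stands in for "loop while the parameters change".

```python
def i_em(P, M, iters=8):
    """I-EM sketch: documents in P keep Pr[c1|d] = 1 throughout; only documents in M are re-labeled."""
    docs = P + M
    pr_c1 = [1.0] * len(P) + [0.0] * len(M)       # initialization: P -> c1, M -> c2
    for _ in range(iters):
        prior, cond = train_nb(docs, pr_c1)        # build/rebuild the NB-C (M step)
        for i in range(len(P), len(docs)):         # E step: re-label only the mixed documents
            pr_c1[i] = posterior_c1(docs[i], prior, cond)
    return pr_c1                                   # probabilistic labels, in P + M order
```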

15 Proposed Technique: Step 1 - Spies
The problem is that our initialization is strongly biased towards positive documents, so we need to identify some very likely negative documents in the mixed set.
We do this by sending "spy" documents from the positive set P into the mixed set M (10% of P was used).
A threshold t is then set from the spies' probabilistic labels, and the documents in M with a probabilistic label less than t are identified as likely negative (a 15% noise level was used for the threshold).
[Figure: spies from P are planted in M; after EM, M splits into likely-negative and unlabeled documents]

16 Proposed Technique: Step 1 - Spies (cont.)
N (most likely negative docs) = ∅; U (unlabeled docs) = ∅
S (spies) = sample(P, s%)
MS = M ∪ S
P = P - S
Assign every document d_i in P the class c_1
Assign every document d_j in MS the class c_2
Run I-EM(MS, P)
Classify each document d_j in MS
Determine the probability threshold t using S
For each document d_j in M:
  If its probability Pr[c_1 | d_j] < t: N = N ∪ {d_j}
  Else: U = U ∪ {d_j}
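
A minimal Python sketch of this step, reusing the hypothetical helpers from the earlier sketches; taking the threshold t as the 15th-percentile spy probability is my reading of how the noise level is applied, not a detail spelled out on the slides.

```python
import random

def step1_spies(P, M, spy_frac=0.10, noise=0.15):
    """Plant spies from P into M, run I-EM, then split M into N (likely negative)
    and U (still unlabeled) using a threshold derived from the spy documents."""
    S = random.sample(P, max(1, int(spy_frac * len(P))))
    P_rest = [d for d in P if d not in S]            # P - S
    MS = M + S                                       # M ∪ S
    pr_c1 = i_em(P_rest, MS)                         # labels come back in P_rest + MS order
    pr_MS = pr_c1[len(P_rest):]
    spy_probs = sorted(pr_MS[len(M):])               # spies occupy the tail of MS
    t = spy_probs[int(noise * len(spy_probs))]       # threshold t from the spies (assumed rule)
    N = [d for d, p in zip(M, pr_MS) if p < t]
    U = [d for d, p in zip(M, pr_MS) if p >= t]
    return N, U, S
```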

17 Proposed Technique: Step 2 - Building the Final Classifier
Using P, N and U as developed in the previous step:
- Put all the spy documents S back into P
- Assign Pr[c_1 | d_i] = 1 for all documents in P
- Assign Pr[c_2 | d_i] = 1 for all documents in N; this will change with each iteration of EM
- Each document d_k in U is not assigned a label initially; at the end of the first iteration it will have a probabilistic label Pr[c_1 | d_k]
- Run EM using the document sets P, N and U until it converges
When EM stops, the final classifier has been produced. This two-step technique is called S-EM (Spy EM).
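
A minimal sketch of Step 2 with the same hypothetical helpers; a fixed iteration count again stands in for running EM to convergence, and P here is the full positive set with the spies already put back.

```python
def step2_final(P, N, U, iters=8):
    """Build the final classifier: P stays fixed at c1, N starts as c2 but is
    re-estimated each iteration, U gets its first probabilistic label from the
    initial NB-C at the end of iteration one."""
    docs = P + N + U
    prior, cond = train_nb(P + N, [1.0] * len(P) + [0.0] * len(N))   # initial NB-C; U is unlabeled
    pr_c1 = [1.0] * len(P) + [0.0] * (len(N) + len(U))
    for _ in range(iters):
        for i in range(len(P), len(docs)):           # E step: re-label N and U
            pr_c1[i] = posterior_c1(docs[i], prior, cond)
        prior, cond = train_nb(docs, pr_c1)          # M step over P, N, and U
    return prior, cond

# Putting the two steps together (S-EM):
#   N, U, _ = step1_spies(P, M)
#   prior, cond = step2_final(P, N, U)
```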

18 Proposed Technique: Selecting a Classifier
Because EM can get stuck in a local maximum, the final classifier may not cleanly separate the positive and negative documents; this is likely if there are many local clusters.
If so, from the set of classifiers developed over the iterations, select the one with the smallest estimated probability of error.
Refer to (1): Pr[f(X) ≠ Y] = Pr[f(X) = 1] - Pr[Y = 1] + 2 Pr[f(X) = 0 | Y = 1] Pr[Y = 1]
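
One way to make (1) operational, sketched below with the obvious empirical estimates: Pr[f(X) = 1] from the positive and unlabeled documents, Pr[f(X) = 0 | Y = 1] from P. Pr[Y = 1] is not observable from P and M alone and must be assumed; the paper itself works with changes in this quantity across EM iterations rather than with absolute values, so treat this only as an illustration of the terms in (1).

```python
def error_estimate(classify, P, M, pr_y1):
    """Empirical version of identity (1) for a candidate classifier classify(d) -> 0 or 1;
    pr_y1 stands in for the unknown constant Pr[Y=1]."""
    pos_rate = sum(classify(d) for d in P + M) / len(P + M)   # estimate of Pr[f(X) = 1]
    miss_rate = sum(1 - classify(d) for d in P) / len(P)      # estimate of Pr[f(X) = 0 | Y = 1]
    return pos_rate - pr_y1 + 2 * miss_rate * pr_y1
```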

19 Evaluation Measurements
Breakeven point: the point where precision p equals recall r (p - r = 0)
- Only evaluates the sorting order of the class probabilities of documents
- Not appropriate here
F score: F = 2pr / (p + r)
- Measures performance on a particular class
- Reflects the average effect of both precision and recall
- Only when both p and r are large will F be large
Accuracy
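
For concreteness, a tiny helper for the F score on the positive class (the standard definition, not anything specific to this paper):

```python
def f_score(tp, fp, fn):
    """F = 2pr / (p + r) from true positives, false positives, and false negatives."""
    p = tp / (tp + fp) if tp + fp else 0.0   # precision
    r = tp / (tp + fn) if tp + fn else 0.0   # recall
    return 2 * p * r / (p + r) if p + r else 0.0
```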

20 Evaluation Results
Two large document corpora:
- 20NG (UseNet headers and subject lines removed)
- WebKB (HTML tags removed)
8 EM iterations.
Average results:
Pos size | M size | Pos in M | NB(F) | NB(A) | I-EM8(F) | I-EM8(A) | S-EM(F) | S-EM(A)
405 | 4471 | 811 | 43.93 | 84.52 | 68.58 | 87.54 | 76.61 | 92.16

21 Evaluation Results (cont.)
Also varied the percentage of positive documents both in P (a%) and in M (b%):
Setting | Pos size | M size | Pos in M | NB(F) | NB(A) | I-EM8(F) | I-EM8(A) | S-EM(F) | S-EM(A)
a=20%, b=20% | 405 | 3985 | 324 | 60.66 | 94.41 | 68.08 | 91.96 | 76.93 | 95.96
a=50%, b=20% | 1013 | 3863 | 203 | 72.09 | 95.94 | 63.63 | 86.81 | 73.61 | 95.28
a=50%, b=50% | 1013 | 4167 | 507 | 73.81 | 93.12 | 71.25 | 85.79 | 81.85 | 94.32

22 Conclusions
This paper studied the problem of classification with only partial information: one labeled class and a set of mixed documents.
Technique:
- Naïve Bayes classifier combined with the Expectation-Maximization algorithm
- Reinitialized using the positive documents and the most likely negative documents to compensate for the bias
- Uses an estimate of the classification error to select a good classifier
Extremely accurate results.

