Active Learning to Classify Email 4/22/05
What’s the problem? How will I ever sort all these new emails?
What’s the problem? To get an idea of what mail I have received, I need to sort these new messages. A great solution would be if I could sort just a few and my computer could sort the rest for me. To make it really accurate, the assistant could even pick which messages I should manually sort, so that it learns to do the best job possible. (Active Learning)
What’s the solution? To solve this problem, we need a way to choose the most informative training examples. This requires some way of ranking emails by how informative they are for classification.
Classification So, what do we know about email classification?
- SVM and Naïve Bayes significantly outperform many other methods (Brutlag 2000, Kiritchenko 2001)
- Both SVM and Naïve Bayes are suitable for the “online” learning required to solve this problem effectively (Cauwenberghs 2000)
- Classifier accuracy varies more between users than between algorithms (Kiritchenko 2001)
- SVM performs better for users with more email in each folder (Brutlag 2000)
- Users with more email, such as in our example problem, tend to have more email in each folder than other users (Klimt 2004)
Thus, we have chosen SVM as the basis for this research.
“Bag-of-Words” Model: data → “bag of words” → SVM → classification decision
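The bag-of-words step above can be sketched in a few lines. This is an illustrative toy (the vocabulary and helper name are my own, not from the slides): each message becomes a vector of word counts with word order discarded, which a linear SVM then scores.

```python
from collections import Counter

def bag_of_words(text, vocab):
    # Count each vocabulary word in the message, discarding word
    # order -- the "bag-of-words" assumption behind the SVM features.
    counts = Counter(text.lower().split())
    return [counts.get(word, 0) for word in vocab]

vocab = ["meeting", "budget", "party", "report"]
vec = bag_of_words("The budget meeting and the budget report", vocab)
# vec == [1, 2, 0, 1]; a linear SVM then classifies by sign(w . x + b)
```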
Multiple SVMs Using separate SVMs for each section: data → SVMs → LLSF → classification decision
Active Learning with SVM In general, examples closer to the decision boundary hyperplane will cause larger displacement of that boundary. (Schohn and Cohn 2000, Tong 2001)
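The heuristic from Schohn/Cohn and Tong can be sketched as margin-based sampling: among the unlabeled pool, query the example with the smallest geometric distance to the current SVM hyperplane. A minimal sketch, assuming a linear SVM given by weights `w` and bias `b` (names are illustrative):

```python
import numpy as np

def closest_to_boundary(w, b, pool):
    # Geometric distance of each unlabeled message to the hyperplane
    # w . x + b = 0; the smallest margin marks the query to label next.
    margins = np.abs(pool @ w + b) / np.linalg.norm(w)
    return int(np.argmin(margins))

w, b = np.array([1.0, -1.0]), 0.0
pool = np.array([[3.0, 0.0], [0.5, 0.4], [-2.0, 2.0]])
query = closest_to_boundary(w, b, pool)  # index 1 sits nearest the boundary
```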
What if our prediction is right? [Figures compare labeling the closer example vs. labeling the farther example]
And if our prediction is wrong? [Figures compare picking the closer example vs. picking the farther example]
Incorporating Diversity In this example, the instance near the top is intuitively more likely to be informative. This is known as “diversity” (Brinker 2003).
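A simplified, input-space version of Brinker's idea can be sketched as follows (Brinker's actual formulation works with angles in kernel space; the `lam` trade-off and function name here are illustrative): combine closeness to the boundary with dissimilarity to the examples already chosen, so near-duplicate queries are penalized.

```python
import numpy as np

def diverse_query(w, b, pool, selected, lam=0.5):
    # score = lam * margin + (1 - lam) * max cosine similarity to the
    # already-selected examples; the smallest score wins, favoring
    # candidates near the boundary AND unlike previous picks.
    margins = np.abs(pool @ w + b)
    norms = np.linalg.norm(pool, axis=1)
    sel = pool[selected]
    sims = np.max(np.abs(pool @ sel.T) /
                  np.outer(norms, np.linalg.norm(sel, axis=1)), axis=1)
    scores = lam * margins + (1 - lam) * sims
    scores[selected] = np.inf  # never re-query a selected example
    return int(np.argmin(scores))

w, b = np.array([1.0, 1.0]), 0.0
pool = np.array([[0.2, 0.0], [0.0, 0.2], [0.19, 0.01]])
# With [0.2, 0.0] already labeled, all margins tie, so diversity
# prefers the orthogonal message [0.0, 0.2] over the near-duplicate.
query = diverse_query(w, b, pool, [0])
```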
Active Learning with SVM But what about when you have multiple SVMs (like one-vs-rest)? (Yan 2003)
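One simple heuristic for the one-vs-rest case (one of several studied in this setting; the exact strategy in Yan 2003 may differ) is to query the message whose smallest per-class margin is smallest, i.e. the message no classifier is confident about:

```python
import numpy as np

def multi_svm_query(W, b, pool):
    # One-vs-rest: each row of W is one binary SVM's weight vector.
    # Query the pool example whose minimum |margin| across classifiers
    # is smallest -- the message every SVM is least sure about.
    margins = np.abs(pool @ W.T + b)     # shape (n_pool, n_classes)
    return int(np.argmin(margins.min(axis=1)))

W = np.array([[1.0, 0.0], [0.0, 1.0]])   # two folder-vs-rest SVMs
b = np.zeros(2)
pool = np.array([[0.1, 5.0], [3.0, 3.0]])
query = multi_svm_query(W, b, pool)      # message 0 hugs the first boundary
```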
The Enron Corpus: 150+ users, 200,000 emails
Initial Results Trained on 10%, Tested on 90%
Chrono-Diverse Algorithm The way a user sorts changes over time. Pick training data that are maximally different from previous data with respect to time.
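Selection by time diversity can be sketched as a greedy farthest-in-time rule (an illustrative sketch; the slide does not give the exact procedure): each round picks the message whose timestamp is farthest from every message chosen so far, spreading queries across the mailbox's history.

```python
def chrono_diverse(timestamps, selected, k):
    # Greedily pick k messages maximally different in time from all
    # previously chosen training data.
    chosen = list(selected)
    picks = []
    for _ in range(k):
        candidates = [i for i in range(len(timestamps)) if i not in chosen]
        best = max(candidates,
                   key=lambda i: min(abs(timestamps[i] - timestamps[j])
                                     for j in chosen))
        chosen.append(best)
        picks.append(best)
    return picks

# Starting from message 0, the farthest-in-time message (index 3) is
# queried first, then the next most temporally isolated one.
order = chrono_diverse([0, 1, 2, 10], selected=[0], k=2)
```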
Combination Algorithm Combine strengths of Standard and Chrono-Diverse. Take a weighted combination of their results. Adjust weighting with parameter lambda.
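The lambda weighting can be sketched as a blend of the two criteria's scores (normalization and function name are my own assumptions; the slide only states a weighted combination): lambda = 1.0 recovers the standard margin strategy, lambda = 0.0 the purely chronological one.

```python
import numpy as np

def combined_score(margin_scores, chrono_scores, lam):
    # Weighted blend of the two selection criteria; lower = chosen sooner.
    m = np.asarray(margin_scores, dtype=float)
    c = np.asarray(chrono_scores, dtype=float)
    # Rescale each criterion to [0, 1] so the lam weighting is comparable.
    m = (m - m.min()) / (np.ptp(m) or 1.0)
    c = (c - c.min()) / (np.ptp(c) or 1.0)
    return lam * m + (1 - lam) * c

scores = combined_score([0.0, 2.0], [2.0, 0.0], lam=0.75)
```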
Results Trained on 10%, Tested on 90%
Conclusions The state-of-the-art algorithm for active learning with text classification performs horribly on email data! Choosing emails for time diversity works very well. Combining the two works best.
Future Work Improve the efficiency of SVM or find a better alternative Determine when using chronological diversity performs best and worst Adapt the algorithm to online classification