A Suffix Tree Approach to Text Classification Applied to Filtering
Rajesh Pampapathi, Boris Mirkin, Mark Levene
School of Computer Science and Information Systems, Birkbeck College, University of London
Introduction – Outline
- Motivation: examples of spam
- Suffix tree construction
- Document scoring and classification
- Experiments and results
- Conclusion
1. Standard spam mail Buy cheap medications online, no prescription needed. We have Viagra, Pherentermine, Levitra, Soma, Ambien, Tramadol and many more products. No embarrasing trips to the doctor, get it delivered directly to your door. Experienced reliable service. Most trusted name brands. For your solution click here:
5. Embedded message (plus word salad)
zygotes zoogenous zoometric zygosphene zygotactic zygoid zucchettos zymolysis zoopathy zygophyllaceous zoophytologist zygomaticoauricular zoogeologist zymoid zoophytish zoospores zygomaticotemporal zoogonous zygotenes zoogony zymosis zuza zoomorphs zythum zoonitic zyzzyva zoophobes zygotactic zoogenous zombies zoogrpahy zoneless zoonic zoom zoosporic zoolatrous zoophilous zymotically zymosterol FreeHYSHKRODMonthQGYIHOCSupply.IHJBUMDSTIPLIBJTJUBIYYXFN * GetJIIXOLDViagraPWXJXFDUUTabletsNXZXVRCBX < zonally zooidal zoospermia zoning zoonosology zooplankton zoochemical zoogloeal zoological zoologist zooid zoosphere zoochemical & Safezoonal andNGASXHBPnatural & TestedQLOLNYQandEAVMGFCapproved zonelike zoophytes zoroastrians zonular zoogloeic zoris zygophore zoograft zoophiles zonulas zygotic zymograms zygotene zootomical zymes zoodendrium zygomata zoometries zoographist zygophoric zoosporangium zygotes zumatic zygomaticus zorillas zoocurrent zooxanthella zyzzyvas zoophobia zygodactylism zygotenes zoopathological noZFYFEPBmas <
4. Word salads Buy meds online and get it shipped to your door Find out more here < a publications website accepted definition. known are can Commons the be definition. Commons UK great public principal work Pre-Budget but an can Majesty's many contains statements statements titles (eg includes have website. health, these Committee Select undertaken described may publications
Creating a Suffix Tree
[Diagram: a suffix tree built from the strings MEET and FEET; the root branches on the characters F, E, T and M, and each node carries a frequency count.]
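The construction shown in the diagram can be sketched in code. This is a minimal illustration, not the authors' implementation: every suffix of every training string is inserted into a character trie whose nodes carry frequency counts, as in the MEET/FEET example.

```python
class Node:
    """A tree node carrying a frequency count and a map of child nodes."""
    def __init__(self):
        self.freq = 0
        self.children = {}

def insert_suffix(root, suffix):
    """Walk down the tree along the suffix, creating nodes as needed
    and incrementing the frequency of every node visited."""
    node = root
    for ch in suffix:
        node = node.children.setdefault(ch, Node())
        node.freq += 1

def build_suffix_tree(strings):
    """Insert every suffix of every training string into one shared tree."""
    root = Node()
    for s in strings:
        for i in range(len(s)):
            insert_suffix(root, s[i:])
    return root

tree = build_suffix_tree(["MEET", "FEET"])
print(sorted(tree.children))    # ['E', 'F', 'M', 'T']
print(tree.children['E'].freq)  # 4: the suffixes EET and ET occur in each string
print(tree.children['T'].freq)  # 2: the suffix T occurs once per string
```

Each class (e.g. spam, ham) gets its own tree built from its training documents; the frequency counts later supply the character probabilities used in scoring.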
Levels of Information
- Characters: the alphabet (and character frequencies) of a class.
- Matches: between query strings and a class. For example:
  s = nviaXgraU>Tabl$$$ets
  t = xv^ia$graTab£££lets
  Matches(s, t) = {v, ia, gra, Tab, l, ets, $}
  But what about overlapping matches?
- Trees: properties of the class as a whole, e.g. size and density (complexity).
Document Similarity Measure
The score for a document d is the sum of the scores of each of its suffixes, where d(i) is the suffix of d beginning at the i-th letter and τ is a tree normalisation coefficient.
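The scoring formula itself was an image and did not survive transcription; based on the definitions on this slide, it plausibly has the following shape (a reconstruction, with notation assumed from the surrounding text):

```latex
\mathrm{score}(d, T) \;=\; \frac{1}{\tau(T)} \sum_{i=1}^{|d|} \mathrm{score}\bigl(d^{(i)}, T\bigr)
```

where d⁽ⁱ⁾ is the suffix of d beginning at the i-th letter, T is the class tree, and τ(T) is the tree normalisation coefficient.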
Substring Similarity Measure
The score for a match m = m0m1m2…mn is score(m), where:
- T is the tree profile of the class;
- ν(m|T) is a normalisation coefficient based on the properties of T;
- p(mt) is the probability of the character mt of the match m;
- Φ[p] is a significance function.
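The equation image is again missing; the components listed above suggest a form like the following (a reconstruction, not verbatim from the paper):

```latex
\mathrm{score}(m) \;=\; \nu(m \mid T) \sum_{t=0}^{n} \Phi\bigl[p(m_t)\bigr]
```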
Decision Mechanism
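The slide's diagram is not preserved. One plausible decision mechanism, sketched here as an assumption rather than the paper's exact rule: score the document against both class trees and compare the ratio to a threshold.

```python
def classify(spam_score, ham_score, threshold=1.0):
    """Label a document by comparing its scores against the spam and
    ham suffix-tree profiles. The ratio-vs-threshold rule is assumed."""
    if ham_score == 0.0:
        return "spam" if spam_score > 0.0 else "ham"
    return "spam" if spam_score / ham_score > threshold else "ham"

print(classify(12.4, 3.1))   # spam
print(classify(2.0, 9.5))    # ham
```

Varying the threshold trades spam recall against ham precision, which is what the later "Threshold Variation" slides plot.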
Specifications of Φ[p] (character level)
- Constant: 1
- Linear: p
- Square: p²
- Root: √p
- Logit: ln(p) − ln(1 − p)
- Sigmoid: (1 + e^(−p))^(−1)
Note: the logit and sigmoid need to be adjusted to fit the range [0, 1].
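The candidate functions can be sketched in code. The helper names are hypothetical, and since the slide does not give the paper's exact [0, 1] adjustment for the logit and sigmoid, the raw forms are shown (with probabilities clipped so the logit stays finite):

```python
import math

EPS = 1e-9  # clip probabilities away from 0 and 1 so the logit is finite

def clip(p):
    return min(max(p, EPS), 1.0 - EPS)

# Candidate significance functions Phi[p] from the slide.
significance = {
    "constant": lambda p: 1.0,
    "linear":   lambda p: p,
    "square":   lambda p: p ** 2,
    "root":     lambda p: math.sqrt(p),
    # The logit is unbounded and the sigmoid maps [0,1] onto roughly
    # [0.5, 0.73]; both would need rescaling into [0,1], as the slide notes.
    "logit":    lambda p: math.log(clip(p)) - math.log(1.0 - clip(p)),
    "sigmoid":  lambda p: 1.0 / (1.0 + math.exp(-p)),
}

print(significance["square"](0.5))  # 0.25
print(significance["root"](0.25))   # 0.5
```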
Significance function
Threshold Variation ~ Significance functions ~
Threshold Variation ~ Significance functions ~
Match normalisation
- Match unnormalised: 1
- Match permutation normalised: m* is the set of all strings formed by permutations of m.
- Match length normalised: m′ is the set of all strings of length equal to the length of m.
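Reading the slide, the two normalisation coefficients plausibly divide a match score by the total score of a comparison set. This is a reconstruction; the exact form is not recoverable from the transcript:

```latex
\nu_{\mathrm{MPN}}(m \mid T) = \Bigl(\,\sum_{u \in m^{*}} \mathrm{score}(u)\Bigr)^{-1},
\qquad
\nu_{\mathrm{MLN}}(m \mid T) = \Bigl(\,\sum_{u \in m'} \mathrm{score}(u)\Bigr)^{-1}
```

with m* the set of permutations of m and m′ the set of strings of the same length as m.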
Match normalisation MUN: match unnormalised; MPN: permutation normalised; MLN: length normalised
Threshold Variation ~ match normalisation ~
[Figure panels: constant significance function, unnormalised; constant significance function, match normalised.]
Specifications of τ
- Unnormalised: 1
- Size(T): the total number of nodes.
- Density(T): the average number of children of internal nodes.
- AvFreq(T): the average frequency of nodes.
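All three candidates can be computed in one traversal of a frequency-annotated tree. A minimal sketch, assuming nodes are nested dicts with a freq count and a children map (a hypothetical representation, not the paper's data structure):

```python
def build(strings):
    """Build a frequency-annotated suffix trie as nested dicts."""
    root = {"freq": 0, "children": {}}
    for s in strings:
        for i in range(len(s)):
            node = root
            for ch in s[i:]:
                node = node["children"].setdefault(ch, {"freq": 0, "children": {}})
                node["freq"] += 1
    return root

def tau_candidates(root):
    """size: total node count; density: average children per internal
    node; avfreq: average node frequency."""
    nodes = internal = child_sum = freq_sum = 0
    stack = [root]
    while stack:
        n = stack.pop()
        nodes += 1
        freq_sum += n["freq"]
        if n["children"]:
            internal += 1
            child_sum += len(n["children"])
            stack.extend(n["children"].values())
    return {"size": nodes,
            "density": child_sum / internal,
            "avfreq": freq_sum / nodes}

stats = tau_candidates(build(["MEET", "FEET"]))
print(stats["size"])  # 14 nodes, counting the root
```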
Tree normalisation
Androutsopoulos et al. (2000) ~ Ling-Spam Corpus ~
With 100 features:

Classifier           Pre-processing           Features    Spam Recall Error   Spam Precision Error
Naïve Bayes (NB)     Lemmatizer + Stop-List   100         17.22%              0.51%
Suffix Tree (ST)     None                     N/A         2.50%               0.21%
Naïve Bayes* (NB*)   —                        Unlimited   0.84%               2.86%

With 300 features:

Classifier           Pre-processing           Features    Spam Recall Error   Spam Precision Error
Naïve Bayes (NB)     Lemmatizer + Stop-List   300         36.95%              0%
Suffix Tree (ST)     None                     N/A         3.96%               —
Naïve Bayes* (NB*)   —                        Unlimited   10.42%              —
~ SpamAssassin Corpus ~
Classifier           Pre-processing           False Positive Rate   False Negative Rate
Suffix Tree (ST)     None                     3.50%                 3.25%
Naïve Bayes* (NB*)   Lemmatizer + Stop-List   10.50%                1.50%

~ Ling-BKS Corpus ~

Classifier           Pre-processing           False Positive Rate   False Negative Rate
Suffix Tree (ST)     None                     0%                    —
Naïve Bayes* (NB*)   Lemmatizer + Stop-List   12.25%                —
Conclusions
- A good overall classifier: an improvement on naïve Bayes, but there is still room for improvement.
- Can one method ever maintain 100% accuracy?
- Extending the classifier.
- Applications to other domains, e.g. web page classification.
Future Work - ODP
Computational Performance
Data Set             Training (s)   Av. Spam (ms)   Av. Ham (ms)   Av. Peak Mem.
LS-FULL (7.40MB)     63             843             659            765MB
LS-11 (1.48MB)       36             221             206            259MB
SAeh-11 (5.16MB)     155            504             2528           544MB
BKS-LS-11 (1.12MB)   41             161             222            345MB
Experimental Data Sets
- Ling-Spam (LS): spam (481) collected by Androutsopoulos et al.; ham (2412) from an online linguists' bulletin board.
- SpamAssassin: Easy (SAe) and Hard (SAh); spam (1876) and ham (4176) examples donated.
- BBK: spam (652) collected by Birkbeck.
Androutsopoulos et al. (2000) ~ Ling-Spam Corpus ~
Classifier Configuration   Threshold   No. of Attrib.   Spam Recall   Spam Precision
Bare                       0.5         50               81.10%        96.85%
Stop-List                  0.5         50               82.35%        97.13%
Lemmatizer                 0.5         100              —             99.02%
Lemmatizer + Stop-List     0.5         —                82.78%        99.49%
Bare                       0.9         200              76.94%        99.46%
Stop-List                  0.9         —                76.11%        99.47%
Lemmatizer                 0.9         —                77.57%        99.45%
Lemmatizer + Stop-List     0.9         —                78.41%        —
Bare                       0.999       —                73.82%        99.43%
Stop-List                  0.999       —                73.40%        —
Lemmatizer                 0.999       300              63.67%        100.00%
Lemmatizer + Stop-List     0.999       —                63.05%        —
Androutsopoulos et al. (2000) ~ Ling-Spam Corpus ~
Classifier           Configuration            Spam Recall Error   Spam Precision Error
Naïve Bayes (NB)     Lemmatizer + Stop-List   17.22%              0.51%
Suffix Tree (ST)     N/A                      2.5%                0.21%
Naïve Bayes* (NB*)   —                        0.84%               2.86%

Classifier           Configuration            Spam Recall Error   Spam Precision Error
Naïve Bayes (NB)     Lemmatizer + Stop-List   36.95%              0%
Suffix Tree (ST)     N/A                      3.96%               —
Naïve Bayes* (NB*)   —                        10.42%              —
~ SpamAssassin Corpus ~
Classifier           Configuration            Spam Recall   Spam Precision
Naïve Bayes (NB)     Lemmatizer + Stop-List   82.78%        99.49%
Suffix Tree (ST)     N/A                      97.50%        99.79%
Naïve Bayes* (NB*)   —                        99.16%        97.14%
Vector Space Model
“What then?” sang Plato’s ghost, “What then?” — W. B. Yeats
[Figure: a word-count table over a corpus (what, host, plate, Plato, ghost, then, sang, book, …) illustrating word probabilities, e.g. P(w = ‘what’) = 50/1000 = 0.05.]
Creating Profiles
[Figure: building a profile for Mark Levene from his documents.]

Profiles
[Figure: example word profiles. Mark Levene: engines, databases, information, search, data. Mike Hu: police, intelligence, criminal, computational, data.]
Classification
[Figure: a document is compared against the profiles of Boris Mirkin, Mark Levene and Mike Hu, producing scores SBM, SML and SMH.]
Naïve Bayes (similarity measure)
For a document d = d1d2d3…dm and a set of classes c = {c1, c2, …, cJ}: [equations (1)–(3) not preserved in the transcript]
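The three equations were images and did not survive transcription; they are presumably the standard multinomial naïve Bayes formulas, reconstructed here:

```latex
c^{*} \;=\; \operatorname*{arg\,max}_{c_j \in c} \; P(c_j) \prod_{i=1}^{m} P(d_i \mid c_j) \tag{1}

P(c_j) \;=\; \frac{N_{c_j}}{N} \tag{2}

P(d_i \mid c_j) \;=\; \frac{n(d_i, c_j)}{\sum_{w} n(w, c_j)} \tag{3}
```

where N_{c_j} is the number of training documents in class c_j, N the total number of training documents, and n(w, c_j) the count of word w in class c_j (in practice the likelihoods are usually smoothed).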
Criticisms
- Pre-processing: stop-word removal; word stemming/lemmatisation; punctuation and formatting.
- The smallest unit of consideration is a word.
- Classes (and documents) are bags of words, i.e. each word is treated as independent of all others.
Word Dependencies
[Figure: overlapping profile words. Boris Mirkin: means, intelligence, clustering, computational, data. Mike Hu: means, intelligence, criminal, computational, data.]
Word Inflections
[Figure: stemming collapses the inflections Intelligent, Intelligence, Intelligentsia and Intelligible to the single stem 'Intellig-'.]
Success measures
- Recall is the proportion of examples of a class that are correctly classified. If SR is spam recall, then (1 − SR) gives the proportion of spam that are false negatives.
- Precision is the proportion of examples assigned to a class that are true members of that class. If SP is spam precision, then (1 − SP) gives the proportion of false positives among messages classified as spam.
Success measures
- True Positive Rate (TPR) is the proportion of the 'positive' class correctly classified. Spam is typically taken as the positive class, so TPR is the number of spam classified as spam over the total number of spam.
- False Positive Rate (FPR) is the proportion of the 'negative' class erroneously assigned to the 'positive' class. Ham is typically taken as the negative class, so FPR is the number of ham classified as spam over the total number of ham.
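All four measures follow directly from the confusion-matrix counts; a small sketch with spam as the positive class (function and variable names are illustrative):

```python
def success_measures(tp, fp, fn, tn):
    """tp: spam classified as spam; fp: ham classified as spam;
    fn: spam classified as ham; tn: ham classified as ham."""
    return {
        "spam_recall":    tp / (tp + fn),  # = TPR; 1 - SR is the false-negative proportion
        "spam_precision": tp / (tp + fp),  # 1 - SP is the false-positive share of predicted spam
        "tpr":            tp / (tp + fn),
        "fpr":            fp / (fp + tn),  # ham wrongly labelled as spam
    }

m = success_measures(tp=90, fp=5, fn=10, tn=95)
print(m["spam_recall"])  # 0.9
print(m["fpr"])          # 0.05
```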
Classifier Structure
[Diagram: training data (spam and ham) feeds a profiling method, which produces a profile representation for each class; a similarity/comparison measure scores incoming documents against the profiles, and a decision mechanism (classification criterion) produces the decision: spam or ham.]
Classification using a suffix tree
Profiling consists of constructing the tree (no pre-processing, no post-processing); the tree itself is the profile of the class. Two questions remain: what similarity measure, and what decision mechanism?
Threshold Variation ~ match normalisation ~
[Figure panels: constant significance function, unnormalised; constant significance function, match normalised.]
SPE = spam precision error; HPE = ham precision error
Threshold Variation ~ Significance functions ~
[Figure panels: root function, no normalisation; logit function, no normalisation.]
SPE = spam precision error; HPE = ham precision error
Threshold Variation
Constant significance function (unnormalised)
SPE = spam precision error; HPE = ham precision error