
Slide 1: Advanced databases – Inferring implicit/new knowledge from data(bases): Text mining (used, e.g., for Web content mining)
Bettina Berendt, Katholieke Universiteit Leuven, Department of Computer Science
http://www.cs.kuleuven.ac.be/~berendt/teaching/
Last update: 17 November 2010

Slide 2: Agenda
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA

Slide 3: Input data... Q: when does this person play tennis?

Outlook   Temp  Humidity  Windy  Play
Rainy     Mild  High      True   No
Overcast  Hot   Normal    False  Yes
Overcast  Mild  High      True   Yes
Sunny     Mild  Normal    True   Yes
Rainy     Mild  Normal    False  Yes
Sunny     Cool  Normal    False  Yes
Sunny     Mild  High      False  No
Overcast  Cool  Normal    True   Yes
Rainy     Cool  Normal    True   No
Rainy     Cool  Normal    False  Yes
Rainy     Mild  High      False  Yes
Overcast  Hot   High      False  Yes
Sunny     Hot   High      True   No
Sunny     Hot   High      False  No

Slide 4: The goal: a decision tree for classification / prediction. In which weather will someone play (tennis etc.)?

Slide 5: Constructing decision trees
Strategy: top down, in a recursive divide-and-conquer fashion:
- First: select an attribute for the root node and create a branch for each possible attribute value.
- Then: split the instances into subsets, one for each branch extending from the node.
- Finally: repeat recursively for each branch, using only the instances that reach that branch.
- Stop if all instances have the same class.
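A minimal Python sketch of this top-down procedure (not part of the original slides). The instance representation (a list of dicts), the target attribute name, and the select_attribute criterion function are assumptions; a gain-based criterion is developed on the following slides.

```python
from collections import Counter

def build_tree(instances, attributes, target, select_attribute):
    """Top-down, divide-and-conquer decision-tree construction (ID3 skeleton).

    instances        -- list of dicts mapping attribute names to values
    attributes       -- attribute names still available for splitting
    target           -- name of the class attribute (e.g. 'Play')
    select_attribute -- criterion function, e.g. information gain (later slides)
    """
    classes = [inst[target] for inst in instances]
    # Stop if all instances have the same class (pure node) ...
    if len(set(classes)) == 1:
        return classes[0]
    # ... or if no attributes are left: predict the majority class.
    if not attributes:
        return Counter(classes).most_common(1)[0][0]

    # First: select an attribute for this node.
    best = select_attribute(instances, attributes, target)
    tree = {best: {}}
    # Then: split the instances into subsets, one per attribute value,
    # and recurse on each branch with the remaining attributes.
    for value in set(inst[best] for inst in instances):
        subset = [inst for inst in instances if inst[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = build_tree(subset, remaining, target, select_attribute)
    return tree
```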

Slide 6: Which attribute to select?

Slide 7: Which attribute to select?

Slide 8: Criterion for attribute selection
Which is the best attribute?
- We want to get the smallest tree.
- Heuristic: choose the attribute that produces the "purest" nodes.
Popular impurity criterion: information gain. Information gain increases with the average purity of the subsets.
Strategy: choose the attribute that gives the greatest information gain.

Slide 9: Computing information
- Measure information in bits.
- Given a probability distribution, the info required to predict an event is the distribution's entropy.
- Entropy gives the information required in bits (this can involve fractions of bits!).
Formula for computing the entropy:
entropy(p1, p2, ..., pn) = -p1 log p1 - p2 log p2 - ... - pn log pn
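A small helper (not part of the original slides) that computes this entropy, in bits, from a list of class counts:

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given as absolute counts, in bits."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# Class distribution of the whole weather data: 9 'yes' vs. 5 'no' instances.
print(round(entropy([9, 5]), 3))   # 0.94 bits, the info([9,5]) used on slide 11
```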

Slide 10: Example: attribute Outlook

Slide 11: Computing information gain
Information gain = information before splitting – information after splitting.
Information gain for the attributes of the weather data:
- gain(Outlook) = 0.247 bits
- gain(Temperature) = 0.029 bits
- gain(Humidity) = 0.152 bits
- gain(Windy) = 0.048 bits
For example: gain(Outlook) = info([9,5]) – info([2,3],[4,0],[3,2]) = 0.940 – 0.693 = 0.247 bits.
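The same computation as a short Python sketch (not part of the slides); the three class distributions are those of the Outlook = Sunny, Overcast and Rainy subsets:

```python
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def info_after_split(subsets):
    # Weighted average entropy of the subsets created by the split.
    total = sum(sum(s) for s in subsets)
    return sum((sum(s) / total) * entropy(s) for s in subsets)

# Before the split: [9 yes, 5 no].  Outlook splits this into [2,3], [4,0], [3,2].
gain_outlook = entropy([9, 5]) - info_after_split([[2, 3], [4, 0], [3, 2]])
print(round(gain_outlook, 3))   # 0.247 bits
```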

Slide 12: Continuing to split
- gain(Temperature) = 0.571 bits
- gain(Humidity) = 0.971 bits
- gain(Windy) = 0.020 bits

Slide 13: Final decision tree
Note: not all leaves need to be pure; sometimes identical instances have different classes. Splitting stops when the data can't be split any further.

Slide 14: Wishlist for a purity measure
Properties we require from a purity measure:
- When a node is pure, the measure should be zero.
- When impurity is maximal (i.e. all classes are equally likely), the measure should be maximal.
- The measure should obey the multistage property (i.e. decisions can be made in several stages).
Entropy is the only function that satisfies all three properties!

Slide 15: Properties of the entropy
- The multistage property.
- Simplification of computation.
- Note: instead of maximizing information gain we could just minimize information.
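The formulas on this slide were images in the original deck; the following is a standard statement (following the Witten & Frank material this part of the lecture is based on), written out here for reference:

```latex
% Multistage property: a three-way decision can be made in two stages.
\mathrm{entropy}(p, q, r) = \mathrm{entropy}(p,\, q+r)
    + (q+r)\cdot \mathrm{entropy}\!\left(\tfrac{q}{q+r},\, \tfrac{r}{q+r}\right)

% Simplification of the computation, e.g. for the class counts [2, 3, 4]:
\mathrm{info}([2,3,4])
    = -\tfrac{2}{9}\log\tfrac{2}{9} - \tfrac{3}{9}\log\tfrac{3}{9} - \tfrac{4}{9}\log\tfrac{4}{9}
    = \frac{-2\log 2 - 3\log 3 - 4\log 4 + 9\log 9}{9}
```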

Slide 16: Discussion / outlook: decision trees
- Top-down induction of decision trees: ID3, an algorithm developed by Ross Quinlan.
- Various improvements, e.g.:
  - C4.5: deals with numeric attributes, missing values, noisy data.
  - Gain ratio instead of information gain [see Witten & Frank slides, ch. 4, pp. 40-45].
- Similar approach: CART ...

Slide 17: Agenda
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA

Slide 18: (Recap: how do the basic ideas of relational-database-table mining transfer to text mining?)

Slide 19: What makes people happy?

Slide 20: Happiness in the blogosphere

Slide 21:
"Well kids, I had an awesome birthday thanks to you. =D Just wanted to so thank you for coming and thanks for the gifts and junk. =) I have many pictures and I will post them later. hearts" – current mood: [happy]
"Home alone for too many hours, all week long... screaming child, headache, tears that just won't let themselves loose.... and now I've lost my wedding band. I hate this." – current mood: [sad]
What are the characteristic words of these two moods?
[Mihalcea, R. & Liu, H. (2006). A corpus-based approach to finding happiness. In Proceedings of the AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs.]
Slide based on Rada Mihalcea's slides.

Slide 22: Data, data preparation and learning
- LiveJournal.com – optional mood annotation
- 10,000 blog entries: happy / sad
  - 5,000 happy entries / 5,000 sad entries
  - average size: 175 words / entry
  - post-processing: remove SGML tags, tokenization, part-of-speech tagging
- Quality of automatic "mood separation":
  - naïve Bayes text classifier
  - five-fold cross-validation
  - accuracy: 79.13% (>> 50% baseline)
Based on Rada Mihalcea's talk at CAAW 2006
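The corpus itself is not included here; the following sketch reproduces the described setup (bag-of-words features, a naïve Bayes text classifier, five-fold cross-validation) with scikit-learn and a toy stand-in corpus — an illustration, not the original experiment:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the 5,000 happy / 5,000 sad LiveJournal entries.
posts = ["awesome birthday with friends", "lovely concert and shopping",
         "lost my wedding band, tears", "headache and crying all day"] * 50
labels = ["happy", "happy", "sad", "sad"] * 50

model = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(model, posts, labels, cv=5)   # five-fold cross-validation
print(scores.mean())   # accuracy on the toy data, not the paper's 79.13%
```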

Slide 23: Results: corpus-derived happiness factors

Highest happiness factors      Lowest happiness factors
yay        86.67               goodbye   18.81
shopping   79.56               hurt      17.39
awesome    79.71               tears     14.35
birthday   78.37               cried     11.39
lovely     77.39               upset     11.12
concert    74.85               sad       11.11
cool       73.72               cry       10.56
cute       73.20               died      10.07
lunch      73.02               lonely     9.50
books      73.02               crying     5.50

Based on Rada Mihalcea's talk at CAAW 2006

Slide 24: Bayes' formula and its use for classification
1. Joint probabilities and conditional probabilities: basics
- P(A & B) = P(A|B) * P(B) = P(B|A) * P(A)
- ⇒ P(A|B) = ( P(B|A) * P(A) ) / P(B)   (Bayes' formula)
- P(A): prior probability of A (a hypothesis, e.g. that an object belongs to a certain class)
- P(A|B): posterior probability of A (given the evidence B)
2. Estimation:
- Estimate P(A) by the frequency of A in the training set (i.e., the number of A instances divided by the total number of instances).
- Estimate P(B|A) by the frequency of B within the class-A instances (i.e., the number of A instances that have B divided by the total number of class-A instances).
3. Decision rule for classifying an instance:
- If there are two possible hypotheses/classes, A and ~A ("not A"), choose the one that is more probable given the evidence:
- If P(A|B) > P(~A|B), choose A.
- The denominators are equal ⇒ if ( P(B|A) * P(A) ) > ( P(B|~A) * P(~A) ), choose A.
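Steps 2 and 3 as a few lines of Python (an illustration, not from the slides; the counts in the example call are made up):

```python
def choose(n_a, n_not_a, n_b_in_a, n_b_in_not_a):
    """Two-class decision rule for an instance that shows the evidence B."""
    n = n_a + n_not_a
    p_a, p_not_a = n_a / n, n_not_a / n                          # priors (step 2)
    p_b_a, p_b_not_a = n_b_in_a / n_a, n_b_in_not_a / n_not_a    # likelihoods (step 2)
    # Step 3: the shared denominator P(B) cancels, so compare the numerators.
    return "A" if p_b_a * p_a > p_b_not_a * p_not_a else "~A"

# 40 class-A instances, 30 of them with B; 60 class-~A instances, 12 with B:
print(choose(40, 60, 30, 12))   # 'A', since 0.75 * 0.4 > 0.20 * 0.6
```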

Slide 25: Simplifications and Naive Bayes
4. Simplify by setting the priors equal (i.e., by using as many instances of class A as of class ~A):
- ⇒ If P(B|A) > P(B|~A), choose A.
5. More than one kind of evidence
- General formula:
  P(A | B1 & B2) = P(A & B1 & B2) / P(B1 & B2)
                 = P(B1 & B2 | A) * P(A) / P(B1 & B2)
                 = P(B1 | B2 & A) * P(B2 | A) * P(A) / P(B1 & B2)
- Enter the "naive" assumption: B1 and B2 are independent given A:
  ⇒ P(A | B1 & B2) = P(B1|A) * P(B2|A) * P(A) / P(B1 & B2)
- By reasoning as in 3. and 4. above, the last two terms can be omitted:
  ⇒ If ( P(B1|A) * P(B2|A) ) > ( P(B1|~A) * P(B2|~A) ), choose A.
- The generalization to n kinds of evidence is straightforward.
- In machine learning, features are the evidence.

Slide 26: Example: texts as bags of words
Common representations of texts:
- Set: can contain each element (word) at most once.
- Bag (aka multiset): can contain each word multiple times (the most common representation used in text mining).
Hypotheses and evidence:
- A = the blog is a happy blog, the email is a spam email, etc.
- ~A = the blog is a sad blog, the email is a proper email, etc.
- Bi refers to the i-th word occurring in the whole corpus of texts.
Estimation for the bag-of-words representation:
- Example estimation of P(B1|A): the number of occurrences of the first word in all happy blogs, divided by the total number of words in happy blogs (etc.).
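Putting slides 24–26 together, a minimal bag-of-words naïve Bayes classifier in Python. The tiny corpus and the add-one smoothing (which avoids zero probabilities for unseen words and is not discussed on the slides) are assumptions for illustration:

```python
from collections import Counter
from math import log

def train(docs):
    """docs: list of (list_of_words, label) pairs -> per-class statistics."""
    word_counts, word_totals, doc_counts = {}, Counter(), Counter()
    for words, label in docs:
        word_counts.setdefault(label, Counter()).update(words)
        word_totals[label] += len(words)
        doc_counts[label] += 1
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, word_totals, doc_counts, vocab

def classify(words, word_counts, word_totals, doc_counts, vocab):
    scores = {}
    for label in word_counts:
        # log P(A) + sum_i log P(B_i | A), with add-one smoothing.
        score = log(doc_counts[label] / sum(doc_counts.values()))
        for w in words:
            score += log((word_counts[label][w] + 1) /
                         (word_totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

docs = [("awesome birthday lovely".split(), "happy"),
        ("tears lonely crying".split(), "sad")]
model = train(docs)
print(classify("lovely concert".split(), *model))   # 'happy' on this toy corpus
```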

Slide 27: The "happiness factor"
"Starting with the features identified as important by the Naïve Bayes classifier (a threshold of 0.3 was used in the feature selection process), we selected all those features that had a total corpus frequency higher than 150, and consequently calculate the happiness factor of a word as the ratio between the number of occurrences in the happy blogposts and the total frequency in the corpus."
⇒ What is the relation to the Naïve Bayes estimators?
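A direct reading of the quoted definition as a small Python helper (an illustration; the word counts are made up, and the classifier's 0.3 feature-selection threshold is not reproduced). The factors on slide 23 appear to be this ratio expressed as a percentage:

```python
def happiness_factor(word, happy_counts, corpus_counts, min_freq=150):
    """Occurrences in the happy blog posts divided by the total corpus
    frequency, computed only for words more frequent than min_freq."""
    total = corpus_counts[word]
    if total <= min_freq:
        return None   # too rare to be scored, per the quoted criterion
    return happy_counts[word] / total

# Hypothetical counts: 'yay' seen 260 times in happy posts, 300 times overall.
print(happiness_factor("yay", {"yay": 260}, {"yay": 300}))   # ~0.867
```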

Slide 28: Agenda
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA

Slide 29: http://www.cs.kuleuven.be/~berendt/Talks/berendt_2009_01_26-webversion.ppt

Slide 30: Agenda
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA

Slide 31: http://people.cs.kuleuven.be/~bettina.berendt/teaching/2010-11-1stsemester/adb/Lecture/Session9/L3.pdf

Slide 32: Agenda
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA

Slide 33: The steps of text mining
1. Application understanding
2. Corpus generation
3. Data understanding
4. Text preprocessing
5. Search for patterns / modelling
   - Topical analysis
   - Sentiment analysis / opinion mining
6. Evaluation
7. Deployment

Slide 34: From HTML to String to ARFF
Problem: given a text file, how do we get to an ARFF file?
1. Remove / use formatting:
   - HTML: use html2text (google for it to find an implementation in your favourite language) or a similar filter.
   - XML: use, e.g., SAX, the API for XML in Java (www.saxproject.org).
2. Convert the text into a basic ARFF (one attribute: String): http://weka.sourceforge.net/wiki/index.php/ARFF_files_from_Text_Collections
3. Convert the String into a bag of words (this filter is also available in WEKA's own preprocessing filters; look for filters – unsupervised – attribute – StringToWordVector).
   Documentation: http://weka.sourceforge.net/doc.dev/weka/filters/unsupervised/attribute/StringToWordVector.html
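A sketch of step 2 in Python (not the tool from the linked WEKA wiki page): it assumes a hypothetical directory of plain-text files per class and writes a two-attribute ARFF file (one String attribute plus the class) that StringToWordVector can then turn into a bag of words:

```python
import glob, os

def texts_to_arff(class_dirs, out_path, relation="blog_moods"):
    """Write a basic ARFF file with one String attribute and a class attribute.

    class_dirs -- dict mapping a class label to a directory of .txt files
                  (hypothetical layout, e.g. {'happy': 'blogs/happy', ...})
    """
    with open(out_path, "w", encoding="utf-8") as out:
        out.write(f"@relation {relation}\n\n")
        out.write("@attribute text string\n")
        out.write("@attribute class {" + ",".join(class_dirs) + "}\n\n@data\n")
        for label, directory in class_dirs.items():
            for path in glob.glob(os.path.join(directory, "*.txt")):
                with open(path, encoding="utf-8") as f:
                    text = " ".join(f.read().split())      # one instance per line
                text = text.replace("\\", "\\\\").replace("'", "\\'")
                out.write(f"'{text}',{label}\n")

texts_to_arff({"happy": "blogs/happy", "sad": "blogs/sad"}, "blogs.arff")
```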

Slide 35: Next lecture
- Classification: ex. decision-tree learning with ID3
- Text classification and Naïve Bayes
- More on current approaches to (Web) text mining
- Opinion mining
- Text mining and WEKA
- How can we make all this scale up?

Slide 36: Literature (to be done)

