A Survey Of Topic And Sentiment Analysis In Unstructured Text


1 A Survey Of Topic And Sentiment Analysis In Unstructured Text
Ashu Gupta, Ayush Jain, Shashank Yaduvanshi

2 Document Sentiment/Rating Prediction

3 Document Sentiment Prediction
Predict the sentiment of a document from the words it contains and any hidden patterns.
Sentiments can be positive/negative/neutral for datasets such as tweets.
Sentiments can be a score out of 10 for datasets such as hotel ratings.

4 Document Sentiment Prediction Challenges
Text is unstructured, and it is hard for machines to understand the tone of human language.
Constructs such as sarcasm, contradiction, and double negation are hard to handle. Examples:
"I did not expect the movie to be good but was surprised."
"The director could have done much better with such a great star cast."
"The best drink in this bar is Coke."

5 Sentiment Prediction of Movie Reviews
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. "Thumbs up? Sentiment classification using machine learning techniques", 2002.
Applied standard machine learning techniques traditionally used for topic classification, such as SVM, Naive Bayes, and MaxEnt, to predict sentiment.
Movie reviews are a useful dataset: there is no need to annotate sentiment, since ratings are already given.
Features used were presence/absence or frequencies of unigrams and bigrams, POS tags, presence/absence of certain adjectives, and unigrams along with their positions.
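A minimal sketch of the feature extraction described above; the function names and tiny vocabulary are illustrative, not from the paper. Pang et al. reported that binary presence features work at least as well as frequencies:

```python
from collections import Counter

def unigram_presence_features(text, vocabulary):
    """Binary presence/absence feature for each vocabulary word."""
    tokens = set(text.lower().split())
    return [1 if word in tokens else 0 for word in vocabulary]

def unigram_frequency_features(text, vocabulary):
    """Raw frequency features, for comparison."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["good", "bad", "great", "boring"]
review = "good movie good cast but boring plot"
print(unigram_presence_features(review, vocab))   # [1, 0, 0, 1]
print(unigram_frequency_features(review, vocab))  # [2, 0, 0, 1]
```

Either feature vector can then be fed to an off-the-shelf SVM, Naive Bayes, or MaxEnt classifier.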

6 Results Using bigrams, word positioning and POS tags doesn’t improve the accuracy SVM gives the best accuracy but lower than topic modelling accuracies

7 Sentiment Prediction of Tweets
Alec Go, Richa Bhayani, and Lei Huang. "Twitter sentiment classification using distant supervision", 2009. Extended previous work to tweets.
Collected positive and negative tweets based on positive and negative emoticons.
Ran SVM, MaxEnt, and Naive Bayes on tweets to predict sentiment.
SVM performed best, with a combination of unigrams and bigrams as features.
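The distant-supervision idea can be sketched as below; the emoticon lists are illustrative, not the exact ones from Go et al., and in their setup the emoticon is removed so the classifier cannot simply memorize the label source:

```python
def distant_label(tweet):
    """Assign a noisy sentiment label from emoticons (distant supervision).
    Returns ('pos'|'neg'|None, tweet with emoticons stripped)."""
    positive = [":)", ":-)", ":D", "=)"]
    negative = [":(", ":-("]
    has_pos = any(e in tweet for e in positive)
    has_neg = any(e in tweet for e in negative)
    if has_pos == has_neg:          # both or neither -> unusable as training data
        return None, tweet
    label = "pos" if has_pos else "neg"
    for e in positive + negative:   # strip the label source from the text
        tweet = tweet.replace(e, "")
    return label, tweet.strip()

print(distant_label("loved the new phone :)"))  # ('pos', 'loved the new phone')
print(distant_label("battery died again :("))   # ('neg', 'battery died again')
```

Tweets labeled this way form a large but noisy training set for the standard classifiers.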

8 Sentiment Prediction of Tweets
Alexander Pak and Patrick Paroubek. "Twitter as a corpus for sentiment analysis and opinion mining", 2010.
Added neutral tweets from the New York Times.
Ran two different Naive Bayes classifiers, one using n-gram features and the other using frequencies of POS tags as features.
Removed n-grams that have high entropy or low salience.
Similar accuracy to Go et al. Bigrams gave the best results as they provide a combination of coverage and context.
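The entropy filter can be sketched as follows (a simplified illustration, not the paper's exact formula): compute the entropy of an n-gram's distribution over sentiment classes and discard n-grams whose entropy is high, since they occur evenly across classes and carry little sentiment signal.

```python
import math

def ngram_class_entropy(counts):
    """Entropy of an n-gram's distribution over sentiment classes.
    counts: dict mapping class -> occurrence count of the n-gram."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# "good" appears mostly in positive tweets -> low entropy (keep)
print(round(ngram_class_entropy({"pos": 90, "neg": 10}), 3))  # 0.469
# "the" appears evenly across classes -> maximal entropy (discard)
print(round(ngram_class_entropy({"pos": 50, "neg": 50}), 3))  # 1.0
```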

9 Results

10 JST Model (C. Lin and Y. He)
This paper proposes a novel generative framework based on Latent Dirichlet Allocation (LDA), called the joint sentiment/topic (JST) model.
JST models topics and sentiments simultaneously and is fully unsupervised.
It improves upon the TSM model, which treats topic-wise and sentiment-wise word distributions as independent.
In JST, documents are associated with sentiments, the topic of each word depends on the sentiment of the document, and words are associated with both sentiments and topics.

11 JST Model (C. Lin and Y. He)
Generative Process:
1. Sample a sentiment label for the document from the document-specific sentiment distribution.
2. Sample a topic for each word from the topic distribution, where the topic distribution is chosen conditioned on the sentiment of the document.
3. Sample the word from the multinomial word distribution conditioned on topic and sentiment.
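The generative steps on this slide can be sketched with NumPy; the dimensions are toy values, and the parameters are drawn at random here rather than learned from data:

```python
import numpy as np

rng = np.random.default_rng(0)

S, T, V = 2, 3, 5          # sentiment labels, topics, vocabulary size (toy)
n_words = 4

# These would normally be inferred; here they are random draws.
pi = rng.dirichlet(np.ones(S))                 # document sentiment distribution
theta = rng.dirichlet(np.ones(T), size=S)      # per-sentiment topic distributions
phi = rng.dirichlet(np.ones(V), size=(S, T))   # per-(sentiment, topic) word dists

s = rng.choice(S, p=pi)                        # 1. document sentiment label
doc = []
for _ in range(n_words):
    z = rng.choice(T, p=theta[s])              # 2. topic given sentiment
    w = rng.choice(V, p=phi[s, z])             # 3. word given (sentiment, topic)
    doc.append(int(w))
print(s, doc)
```

Inference in the actual model runs this process in reverse, typically via Gibbs sampling.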

12 Results on Movie review Dataset

13 Problems? It represents each document as a bag of words and thus ignores word ordering.
It considers document-level sentiment only; what about the sentiment of individual sentences?
The topic depends on the sentiment of the document, so useful topics may not be found. Example: highly negative words will form one topic and highly positive words another.

14 ASUM Model (Yohan Jo and Alice Oh)
ASUM also extends LDA and proposes a new topic-sentiment model.
ASUM is a generative model that assumes all words in a single sentence are generated from one aspect and one sentiment.
It improves upon JST: sentiments are modeled at sentence-level granularity.

15 ASUM Model (Yohan Jo and Alice Oh)
Generative Process:
1. For each sentence in the document, a sentiment is sampled from the document's sentiment distribution.
2. A topic for that sentence is sampled from the document's topic distribution conditioned on the sentence sentiment.
3. Given the topic and sentiment of the sentence, each word of that sentence is sampled from the corresponding senti-aspect word distribution.

16 Results on Amazon Reviews

17 Problems? Assumes an entire sentence has the same topic and sentiment; what about punctuation and conjunctions? Example: "Lots of features but the price is very high."
The topic depends on the sentiment of the sentence, so useful topics may not be found.
Doesn't find a sentiment for each topic (e.g. price, durability, etc.).

18 Aspect Rating Prediction

19 Aspect Rating Prediction
"The food was delicious in spite of the shoddy service."
Assigning and predicting a single rating fails to capture the fact that the food was good but the service wasn't.
Solution: "aspect"-wise ratings. Users rate different aspects differently.

20 Aspect Rating Prediction: Topic Modeling

21 TSM Model Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai
Proposed the Topic Sentiment Mixture (TSM) model to retrieve latent topics and their sentiments from weblogs.
One of the first works that tries to estimate both aspects/topics and the sentiments related to each aspect simultaneously.
Their basic methodology is to extend the LDA generative model to incorporate an additional layer of sentiments.
Very general and applicable to various applications: search, summarization, rating prediction, opinion tracking.

22 TSM Model Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai
Generative Process:
1. If the sampled word is a common English word, it is sampled from a background multinomial distribution over words.
2. If not, the reviewer samples a topic for that word.
3. The reviewer then decides whether the word describes the topic neutrally, positively, or negatively.
4. Finally, the word is sampled from the neutral (topic), positive, or negative word distribution accordingly.
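The switch structure of this process can be sketched as below; all dimensions and probabilities are toy values chosen for illustration, not learned parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

V, T = 6, 2                          # toy vocabulary size and topic count
lambda_b = 0.5                       # prob. a word is a common background word
phi_b = rng.dirichlet(np.ones(V))                # background word distribution
phi_topic = rng.dirichlet(np.ones(V), size=T)    # neutral topic word distributions
phi_pos = rng.dirichlet(np.ones(V))              # positive-sentiment word dist
phi_neg = rng.dirichlet(np.ones(V))              # negative-sentiment word dist
topic_mix = rng.dirichlet(np.ones(T))            # document's topic mixture
senti_mix = np.array([0.6, 0.25, 0.15])          # P(neutral, positive, negative)

def sample_word():
    if rng.random() < lambda_b:                  # 1. common English word?
        return int(rng.choice(V, p=phi_b))
    z = rng.choice(T, p=topic_mix)               # 2. otherwise pick a topic
    s = rng.choice(3, p=senti_mix)               # 3. neutral / positive / negative
    if s == 0:
        return int(rng.choice(V, p=phi_topic[z]))  # 4a. neutral: topic word dist
    return int(rng.choice(V, p=phi_pos if s == 1 else phi_neg))  # 4b. sentiment dist

doc = [sample_word() for _ in range(5)]
print(doc)
```

Note the limitation discussed two slides later: phi_pos and phi_neg do not depend on the topic z.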

23 Results

24 Problems? Sentiment and topic word distributions are independent of each other, which may not always hold. For example, the word "cheap" has different sentiments in the topics "price" and "quality".
Finally, they didn't consider different users' preferences over aspects.

25 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction [Wang, Lu, Zhai 2010] Choose aspects and words for each aspect

26 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction [Wang, Lu, Zhai 2010] Choose aspects and words for each aspect Calculate aspect rating based on aspect words

27 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction [Wang, Lu, Zhai 2010] Choose aspects and words for each aspect Calculate aspect rating based on aspect words Overall rating is weighted sum of aspect ratings
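The last step above, combining latent aspect ratings into the observed overall rating, is a simple weighted sum. A minimal sketch, with hypothetical aspect names and values:

```python
def overall_rating(aspect_ratings, aspect_weights):
    """Overall rating as the weighted sum of latent aspect ratings;
    the weights reflect this reviewer's emphasis and sum to 1."""
    assert abs(sum(aspect_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(aspect_ratings, aspect_weights))

# Hypothetical hotel review: aspects = (cleanliness, location, service)
ratings = [4.0, 5.0, 2.0]
weights = [0.5, 0.3, 0.2]    # this reviewer cares most about cleanliness
print(overall_rating(ratings, weights))  # 3.9
```

In LARA both the aspect ratings and the weights are latent and must be inferred from the review text and the single observed overall rating.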

28 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction: Inference [Wang, Lu, Zhai 2010] E-Step: Infer aspect ratings and aspect weights M-Step: Update the model parameters

29 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction Detects sentiments of the words without supervision [Wang, Lu, Zhai 2010]

30 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction Requires aspect keyword seeds – Cannot automatically detect aspects [Wang, Lu, Zhai 2010]

31 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction without Aspect Keyword Supervision Aspect Modelling Module included [Wang, Lu, Zhai 2011]

32 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction without Aspect Keyword Supervision [Wang, Lu, Zhai 2011]

33 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction without Aspect Keyword Supervision [Wang, Lu, Zhai 2011]

34 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction without Aspect Keyword Supervision [Wang, Lu, Zhai 2011]

35 Aspect Rating Prediction: Topic Modeling
Latent Aspect Rating Prediction without Aspect Keyword Supervision: Possible Improvements
Bag-of-words assumption: words appearing in proximity to each other are likely to express the same sentiment.
"nice" will be associated with the aspect it most frequently co-occurs with, not with the locally relevant aspect. [Wang, Lu, Zhai 2011]

36 Aspect Rating Prediction: Text Processing

37 Predicting Product Features and Opinions
Minqing Hu and Bing Liu. "Mining and summarizing customer reviews", 2004.
Frequent itemsets are used as product features.
An initial seed of positive and negative adjectives is expanded using WordNet.
Co-occurrence of adjectives and features in a sentence is used to determine connectivity.
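A toy sketch of the pipeline; the noun list and mini-lexicon are hypothetical stand-ins (the paper mines frequent itemsets and expands seed adjectives via WordNet), and ties between positive and negative counts break toward positive here:

```python
from collections import Counter

POSITIVE = {"great", "amazing", "sharp"}     # illustrative seed lexicon
NEGATIVE = {"poor", "blurry", "weak"}

reviews = [
    "great battery life and sharp screen",
    "battery drains fast and the screen is blurry",
    "amazing battery but poor zoom",
]

# Step 1: frequent nouns as candidate product features (stand-in for
# the frequent-itemset mining in the paper).
NOUNS = {"battery", "screen", "zoom", "life"}
feature_counts = Counter(w for r in reviews for w in r.split() if w in NOUNS)
features = {f for f, c in feature_counts.items() if c >= 2}

# Step 2: sentence-level co-occurrence of features with opinion adjectives.
opinions = {}
for r in reviews:
    words = set(r.split())
    for f in features & words:
        pos = len(words & POSITIVE)
        neg = len(words & NEGATIVE)
        opinions.setdefault(f, []).append("pos" if pos >= neg else "neg")

print(sorted(features))   # ['battery', 'screen']
print(opinions)
```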

38 Results 84% accuracy on 100 reviews across 5 products
The dataset used is too small; more experiments are needed.
Pronoun resolution is missing.
Binary sentiments are used instead of sentiment scores.
Verbs and nouns can also express sentiment.

39 Aspect Rating Prediction in Movies
Li Zhuang, Feng Jing, and Xiao-Yan Zhu. "Movie review mining and summarization", 2006.
Semi-supervised learning on a dataset of movie reviews.
Human-annotated reviews of 11 famous movies are used to obtain the most frequent aspect words and opinion words.
Analysis shows Zipf's law: very few words cover a major percentage of the total set.

40 Method For a new review, regular expressions are used to detect phrases referring to persons, which are replaced by actual names from the IMDB database.
The opinion keyword list is expanded using WordNet.
Connectivity between opinions and aspects is determined using dependency grammars instead of context windows.

41 Results Precision is better than Hu and Liu on the same dataset because of the use of dependency grammars.
Recall is lower, as infrequent features are ignored.
Accuracy on movie reviews is lower than on product reviews: movie reviews are more informative and complex, discussing not only features such as screenplay and direction but also people such as the director and the actors.

42 Finding public opinion about entities from News
Namrata Godbole, Manja Srinivasaiah, and Steven Skiena. "Large-scale sentiment analysis for news and blogs", 2007.
Previous works often use synonym and antonym relations from WordNet to expand on seed words.
Sentiment polarity weakens with distance from the seed word: a synonym of a positive seed word is likely more positive than a synonym of an antonym of a negative seed word.
The polarity weight of a word decreases exponentially with the length of the path to a seed word.
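The exponential decay can be sketched as below; the `decay` parameter is an illustrative assumption, and the antonym sign flip along the path is not modeled:

```python
import math

def propagated_polarity(seed_polarity, path_length, decay=1.0):
    """Polarity assigned to a word reached from a seed via WordNet links:
    the sign comes from the seed and the magnitude decays exponentially
    with the length of the path."""
    return seed_polarity * math.exp(-decay * path_length)

# A direct synonym of a positive seed keeps most of its polarity ...
print(round(propagated_polarity(+1.0, 1), 3))  # 0.368
# ... while a word three hops away contributes far less.
print(round(propagated_polarity(+1.0, 3), 3))  # 0.05
```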

43 Method Paths that flip between positive and negative words more often than a certain threshold are considered spurious and removed from the calculations.
The final list of words, with sentiments aggregated over all paths, is saved as a sentiment lexicon for seven different news fields such as health, politics, and sports.
Entity resolution is done using the Lydia text analysis system to identify multiple references to the same entity.
Connectivity is determined by co-occurrence of sentiment words and entity references in the same sentence.

44 Results Results were in line with the prevalent sentiment about famous celebrities at that time.

45 Results Newspapers and blogs can be contradictory about controversial public figures due to differences in the biases of their respective contributors.

46 Summary Aspect rating prediction needs two steps: extraction of aspects and prediction of ratings.
These two steps can be independent of each other, as in NLP-based techniques: this helps in imparting background knowledge, but connectivity is vague.
Or they can be combined, as in topic modeling techniques: aspects and their opinions are captured more intuitively in a coherent model.

47 Summary Similar challenges as sentiment prediction of documents
It is non-trivial to identify context within each sentence.
Evaluation is harder, as aspects are not explicitly defined.
Aspect ratings are also mostly unknown; datasets such as TripAdvisor reviews can help.

48 Thanks. Questions?

