
Multimedia Answer Generation for Community Question Answering.


1 Multimedia Answer Generation for Community Question Answering

2 Problem Statement
– Textual answers
– Multimedia answers

3 Literature Survey
Sr. No | Author | Year | Contribution
1 | TREC: Text Retrieval Conference [http://trec.nist.gov] | 1990 | Text-based QA
2 | S. A. Quarteroni, S. Manandhar | 2008 | Text-based QA based on question type; open-domain QA
3 | D. Molla & J. L. Vicedo | 2007 | Restricted-domain QA
4 | H. Cui, M.-Y. Kan | 2007 | Definitional QA
5 | R. C. Wang, W. W. Cohen, E. Nyberg | 2008 | List QA
6 | H. Yang, T.-S. Chua, S. Wang | 2003 | Video QA
7 | J. Cao, Y.-C. Wu, Y.-S. Lee | 2004-2009 | Video QA using OCR & ASR
8 | Various authors | 2003-2013 | Content-based retrieval

4 System Decomposition
– Answer medium selection
– Query generation for multimedia search
– Multimedia data selection and presentation

5 Prerequisites
– Datasets
– Image & video mining APIs: Flickr, Picasa Web, YouTube, etc.

6 Process Flow
– Dataset collection
– Classification (conversational & informational)
– Answer medium selection and query selection components
– Query generation for multimedia retrieval
– Multimedia data
– Duplicate removal
– Result re-ranking

7 Answer Medium Selection
Classification into four answer media classes:
– text only
– text + image
– text + video
– text + image + video
Approach:
– Question-based classification
– Answer-based classification
– Media resource analysis
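A minimal sketch of such a medium classifier, assuming scikit-learn, TF-IDF question features, and the Naïve Bayes model named on slide 15; the training questions and labels below are placeholders, not the paper's data:

```python
# Sketch: classify a question into one of the four answer media classes (slide 7).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; a real system would use a labeled cQA corpus.
questions = [
    "What does the Eiffel Tower look like at night?",
    "How do I whistle with my fingers?",
    "What is the capital of Australia?",
    "Show me how to tie a bow tie step by step.",
]
labels = ["text+image", "text+video", "text", "text+image+video"]

# TF-IDF features over word unigrams/bigrams feeding a multinomial Naive Bayes.
medium_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
medium_clf.fit(questions, labels)

print(medium_clf.predict(["What does a solar eclipse look like?"]))
```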

8 Query Extraction
For each QA pair, we generate three queries:
1. Convert the question into a query.
2. Identify several key concepts from the verbose answer that have the greatest impact on retrieval effectiveness.
3. Combine the two queries generated from the question and the answer, respectively.
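A minimal sketch of the three-query extraction, assuming a simple stop-word filter and word-frequency ranking as a stand-in for the paper's key-concept identification:

```python
# Sketch of the three queries generated per QA pair (slide 8).
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "are", "was", "of", "to", "in", "and", "or",
        "it", "that", "this", "you", "i", "for", "on", "with", "can", "how",
        "what", "do", "does", "be", "your", "my"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def question_query(question):
    # Query 1: the question with stop words removed.
    return " ".join(w for w in tokenize(question) if w not in STOP)

def answer_query(answer, k=5):
    # Query 2: the k most frequent content words of the (possibly verbose) answer.
    counts = Counter(w for w in tokenize(answer) if w not in STOP)
    return " ".join(w for w, _ in counts.most_common(k))

def combined_query(question, answer):
    # Query 3: union of the question-based and answer-based queries.
    terms = dict.fromkeys(question_query(question).split() + answer_query(answer).split())
    return " ".join(terms)

q = "How do I tie a bow tie?"
a = "Start with the bow tie around your neck, cross the longer end over the shorter end..."
print(question_query(q), "|", answer_query(a), "|", combined_query(q, a))
```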

9 Query Generation for Multimedia Search
The first step is query extraction (previous slide); the second step is query selection.

10 Query Selection
This is a three-class classification task, since we need to choose one of the three queries. We adopt the following features:
– POS histogram. Queries containing many complex verbs make it difficult to retrieve meaningful multimedia results. We use a POS tagger to assign a part of speech to each word of both the question and the answer. Here we employ the Stanford log-linear part-of-speech tagger, which identifies 36 POS tags. We then generate a 36-dimensional histogram in which each bin counts the number of words belonging to the corresponding part-of-speech category.
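A sketch of the 36-bin POS histogram feature; the slide uses the Stanford tagger, while NLTK's Penn Treebank tagger is used here as an assumed stand-in (it also emits 36 word-level tags):

```python
# Sketch of the 36-dimensional POS histogram (slide 10).
# Requires: nltk.download('punkt'), nltk.download('averaged_perceptron_tagger')
import nltk

PENN_TAGS = ["CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS",
             "MD", "NN", "NNS", "NNP", "NNPS", "PDT", "POS", "PRP", "PRP$",
             "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG",
             "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB"]  # 36 word-level tags

def pos_histogram(text):
    # Tag every word, then count how many words fall into each POS bin.
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    return [tags.count(t) for t in PENN_TAGS]

hist = pos_histogram("How do I tie a bow tie for a formal dinner?")
print(len(hist), hist)
```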

11 (2) Search performance prediction
– A clarity score is computed for each query, based on the KL divergence between the query and collection language models.
– This yields 6-dimensional search performance prediction features in all (three queries, each searched on both the image and the video search engine).
Therefore, for each QA pair we generate 42-dimensional features. Based on the extracted features, we train an SVM classifier on a labeled training set to select one of the three queries.
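A sketch of the query-selection classifier: 36 POS-histogram dimensions plus 6 clarity-score dimensions give 42 features per QA pair, fed to an SVM. The feature values below are random placeholders; a real system would compute them as described on slides 10-11:

```python
# Sketch of the 42-dimensional query-selection SVM (slide 11).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 42))          # placeholder 42-dim features for 200 labeled QA pairs
y = rng.integers(0, 3, size=200)   # label 0/1/2 = which of the three queries to use

query_selector = SVC(kernel="rbf").fit(X, y)
print(query_selector.predict(X[:5]))
```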

12 Clarity function:
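The formula on this slide did not survive the transcript. A standard form of the clarity score, following Cronen-Townsend et al. (reference 2) and the KL-divergence description on slide 11, is:

```latex
\mathrm{Clarity}(q) \;=\; \sum_{w \in V} P(w \mid \theta_q)\,\log_2 \frac{P(w \mid \theta_q)}{P_{\mathrm{coll}}(w)}
```

where the query language model \(\theta_q\) is estimated from the top-ranked documents retrieved for q, and \(P_{\mathrm{coll}}\) is the collection language model.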

13 Multimedia Data Selection & Presentation
We perform search using the generated queries to collect image and video data with the Google image and video search engines, respectively. Most current commercial search engines are built on text-based indexing and usually return many irrelevant results. Therefore:
– Re-ranking that exploits visual information is essential to reorder the initial text-based search results.
– Here we adopt a graph-based re-ranking method.
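A minimal sketch of graph-based re-ranking under simple assumptions (cosine similarity over generic visual feature vectors, a random walk with restart on the text-based scores); the paper's exact graph construction and weighting may differ:

```python
# Sketch: propagate initial text-based scores over a visual-similarity graph (slide 13).
import numpy as np

def rerank(features, initial_scores, alpha=0.85, iters=50):
    """features: (n, d) visual feature matrix; initial_scores: (n,) text-based scores."""
    # Cosine-similarity affinity matrix, row-normalised into a transition matrix.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    W = np.clip(f @ f.T, 0, None)
    np.fill_diagonal(W, 0)
    P = W / (W.sum(axis=1, keepdims=True) + 1e-9)

    s = initial_scores / (initial_scores.sum() + 1e-9)
    r = s.copy()
    for _ in range(iters):
        r = alpha * P.T @ r + (1 - alpha) * s   # random walk + restart on text scores
    return np.argsort(-r)                        # item indices in re-ranked order

feats = np.random.default_rng(1).random((10, 64))   # placeholder visual features
scores = np.linspace(1.0, 0.1, 10)                   # placeholder text-based scores
print(rerank(feats, scores))
```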

14 Graph Based Re-Ranking & Duplicate Removal
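This slide carried only a figure. As an illustrative stand-in for the duplicate-removal step, a near-duplicate filter based on a simple average perceptual hash; the hashing method and Hamming threshold are assumptions, not taken from the paper:

```python
# Sketch of near-duplicate image removal (slide 14) using an average hash.
from PIL import Image

def average_hash(path, size=8):
    # Downscale to size x size grayscale, threshold each pixel against the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum((1 << i) for i, p in enumerate(pixels) if p > mean)

def remove_duplicates(paths, max_hamming=5):
    # Keep an image only if its hash differs enough from every kept image.
    kept, hashes = [], []
    for p in paths:
        h = average_hash(p)
        if all(bin(h ^ other).count("1") > max_hamming for other in hashes):
            kept.append(p)
            hashes.append(h)
    return kept
```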

15 List of Algorithms
– Core sentence extraction from the question
– Stemming & stop-word removal on answers
– Question type based on answer medium (Naïve Bayes)
– Head word extraction
– Media resource analysis: clarity score based on KL divergence
– Query generation
– Query selection: POS feature extraction, search performance prediction
– Multimedia data selection & presentation: graph-based ranking, face detection (see the sketch below), feature extraction from images, key frame identification & extraction
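For the face-detection item above, a short OpenCV sketch using the library's bundled Haar cascade; the slides do not name a specific detector, so this choice is an assumption:

```python
# Sketch: detect faces in a retrieved image with OpenCV's Haar cascade (slide 15).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) bounding boxes for detected faces.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```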

16 References
1. M. Surdeanu, M. Ciaramita, and H. Zaragoza, "Learning to rank answers on large online QA collections," in Proc. Association for Computational Linguistics, 2008.
2. S. Cronen-Townsend, Y. Zhou, and W. B. Croft, "Predicting query performance," in Proc. ACM SIGIR Conf., 2002.
3. L. Nie, M. Wang, Y. Gao, Z.-J. Zha, and T.-S. Chua, "Beyond Text QA: Multimedia Answer Generation by Harvesting Web Information," IEEE Transactions on Multimedia, 2013.

17 Thank You….

