
1 Which of the two pages appears simpler to you? 1. http://scienceforkids.kidipede.com/chemistry/atoms/proton.htm 2. http://en.wikipedia.org/wiki/Proton

2 Search for a keyword. The results are sometimes irrelevant, and their ordering is mixed with respect to readability.

3 Our Objective: Given a query, retrieve web pages (considering relevance), then re-rank the web pages based on readability. The whole process is accomplished automatically.

4 An Unsupervised Technical Readability Ranking Model by Building a Conceptual Terrain in LSI. Shoaib Jameel, Xiaojun Qian. The Chinese University of Hong Kong. This is me!

5 What has been done so far? Heuristic readability formulae, unsupervised approaches, and supervised approaches. My focus in this talk is to cover some popular works in this area; an exhaustive list of references can be found in the paper.

6 Heuristic Readability Methods. These have been around since the 1940s. Semantic component – number of syllables per word, syllable length per word, etc. Syntactic component – sentence length, etc.

7 Example – Flesch Reading Ease. It combines a semantic component and a syntactic component using manually tuned numerical parameters.
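For reference, the Flesch Reading Ease formula the slide alludes to (the standard published coefficients):

$$\mathrm{FRE} = 206.835 - 1.015 \times \frac{\text{total words}}{\text{total sentences}} - 84.6 \times \frac{\text{total syllables}}{\text{total words}}$$

The words-per-sentence ratio is the syntactic component, syllables-per-word is the semantic component, and 206.835, 1.015, and 84.6 are the manually tuned parameters.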

8 Supervised Learning Methods. Language models, SVMs (Support Vector Machines), and the use of query logs and user profiles.

9 Smoothed Unigram Model [1]. They recast the well-studied problem of readability in terms of text categorization and used straightforward techniques from statistical language modeling. [1] K. Collins-Thompson and J. Callan. 2005. Predicting reading difficulty with statistical language models. Journal of the American Society for Information Science and Technology, 56(13), 1448-1462.
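A minimal sketch of the language-modeling idea: train one smoothed unigram model per difficulty level and assign a new text to the level whose model gives it the highest likelihood. The Laplace smoothing below is a simplification of the authors' actual smoothing scheme.

```python
import math
from collections import Counter

def train_unigram(docs, vocab, alpha=0.5):
    """Laplace-smoothed unigram model from a list of tokenized documents."""
    counts = Counter(w for doc in docs for w in doc)
    total = sum(counts.values())
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab)) for w in vocab}

def log_likelihood(doc, model):
    """Log-probability of a tokenized document under a unigram model."""
    return sum(math.log(model[w]) for w in doc if w in model)

def predict_level(doc, models):
    """models: dict mapping difficulty level -> trained unigram model."""
    return max(models, key=lambda level: log_likelihood(doc, models[level]))
```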

10 Smoothed Unigram Model. Limitation of their method: it requires training data, which may be difficult to obtain.

11 Domain-specific Readability. Jin Zhao and Min-Yen Kan. 2010. Domain-specific iterative readability computation. In Proceedings of the 10th Annual Joint Conference on Digital Libraries (JCDL '10). Based on the web-link-structure algorithms HITS (Hypertext Induced Topic Search) and SALSA (Stochastic Approach for Link-Structure Analysis). Xin Yan, Dawei Song, and Xue Li. 2006. Concept-based document readability in domain specific information retrieval. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management (CIKM '06). Based on an ontology; tested only in the medical domain. I will focus on this work.

12 Overview. The authors state that document scope and document cohesion are important parameters in finding simple texts. They use a controlled-vocabulary thesaurus called Medical Subject Headings (MeSH). The authors also point out that readability formulae are not directly applicable to web pages.

13 MeSH Ontology (diagram: concept difficulty increases going down the hierarchy and decreases going up).

14 Overall Concept-Based Readability Score (the score equation itself appeared as an image on the slide). Symbol definitions: DaCw = Dale-Chall readability measure; PWD = percentage of difficult words; AvgSL = average sentence length in document d_i. Their work focuses on word-level readability, hence only the PWD component is considered. len(c_i, c_j) = shortest-path length between concepts c_i and c_j in the MeSH hierarchy; N = total number of domain concepts in document d_i; Depth(c_i) = depth of concept c_i in the concept hierarchy; D = maximum depth of the concept hierarchy.
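For reference, the standard Dale-Chall formula that DaCw denotes (widely published coefficients; an adjustment constant of 3.6365 is added when PWD exceeds 5%):

$$\mathrm{DaCw} = 0.1579 \times \mathrm{PWD} + 0.0496 \times \mathrm{AvgSL}$$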

15 Our “Terrain-based” Method. So, what’s the connection?

16 Latent Semantic Indexing. Core component – Singular Value Decomposition: SVD(C) = U S V^T, where C is the term-document matrix.
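As a concrete illustration, a minimal LSI sketch in Python using numpy's SVD (the toy matrix C and rank k are invented for illustration):

```python
import numpy as np

# Toy term-document matrix C: rows = terms, columns = documents.
C = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.],
              [0., 1., 2.]])

U, s, Vt = np.linalg.svd(C, full_matrices=False)

k = 2  # keep the k largest singular values (the latent dimensions)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

term_vectors = U_k @ S_k       # terms in the k-dimensional latent space
doc_vectors = (S_k @ Vt_k).T   # documents in the same latent space
```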

17 SVD(C) = U S V^T (diagram: C decomposed into the matrices U, S, and V^T).

18 (figure only)

19 Three components Term Centrality – Is the term central to the document’s theme? Term Cohesion – Is the term closely related with the other terms in the document? Term Difficulty – Will the reader find it difficult to comprehend the meaning of the term?

20 Term Centrality. Closeness of the term vector to the document vector in the LSI latent space (diagram: term T1 is more central to document D than term T2). Term Centrality = 1 / (Euclidean distance(T1, D) + small constant).
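A minimal sketch of the centrality score as defined on the slide, assuming term and document vectors taken from the truncated SVD above (eps stands in for the slide's "small constant"):

```python
import numpy as np

def term_centrality(term_vec, doc_vec, eps=1e-6):
    """Inverse Euclidean distance between a term and its document in latent space."""
    return 1.0 / (np.linalg.norm(term_vec - doc_vec) + eps)
```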

21 Term Cohesion. Term cohesion is obtained by computing the Euclidean distance between two consecutive terms, T1 and T2, in the LSI latent space (diagram: terms T1-T4 and document D). Distance normalization is done to standardize the values.
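A sketch of cohesion under the assumption that the normalization is min-max scaling (the slide does not specify which normalization is used):

```python
import numpy as np

def term_cohesion(term_vecs):
    """Distances between consecutive term vectors in latent space, min-max normalized."""
    d = np.array([np.linalg.norm(term_vecs[i + 1] - term_vecs[i])
                  for i in range(len(term_vecs) - 1)])
    return (d - d.min()) / (d.max() - d.min() + 1e-9)
```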

22 Term Difficulty. Term Difficulty = Term Centrality × Inverse Document Frequency (idf). Idea of idf: if a term is used less often in the document collection, it should be regarded as important. For example, “proton” does not occur often, but it is an important term.
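A sketch combining the two factors; the idf here is the common logarithmic formulation, which is an assumption since the slide does not give an exact formula:

```python
import math

def idf(term, docs):
    """Standard inverse document frequency: rarer terms score higher."""
    df = sum(1 for doc in docs if term in doc)  # document frequency
    return math.log(len(docs) / (1 + df))

def term_difficulty(centrality, term, docs):
    """Difficulty = centrality weighted by idf, as on the slide."""
    return centrality * idf(term, docs)
```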

23 So, what do we now obtain? Term difficulty and term cohesion. A reader now has to hop from one term to the next in the LSI latent space. Something like this ->

24 How is ranking done? Keep aggregating the individual transition scores; finally, obtain a real number which is used for ranking.
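A sketch of the aggregation step. Note that the way a transition score combines difficulty and cohesion below (a simple product) is a hypothetical stand-in, since the slide only says the individual scores are aggregated:

```python
def document_score(difficulties, cohesions):
    """Aggregate per-transition scores into one real number for ranking.

    difficulties: difficulty of each term visited (length n)
    cohesions: normalized distance of each hop (length n - 1)
    """
    return sum(d * c for d, c in zip(difficulties[1:], cohesions))

# Re-rank retrieved pages by this score, e.g. ascending = easiest first.
```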

25 Experiments and Results. Collect web pages from domain-specific sites, such as Science and Psychology websites, and test in two domains. Retrieve relevant web pages given a query, annotate the top ten web pages, and re-rank the search results based on readability. NDCG is used as the evaluation metric.
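For reference, a minimal NDCG implementation over graded labels (the standard formulation; the example grades are invented):

```python
import math

def dcg(labels):
    """Discounted cumulative gain over a ranked list of relevance grades."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(labels))

def ndcg(labels, k=10):
    """DCG at k normalized by the ideal (sorted) ranking's DCG."""
    ideal = dcg(sorted(labels, reverse=True)[:k])
    return dcg(labels[:k]) / ideal if ideal > 0 else 0.0

# Example: annotated readability grades of the top ten re-ranked pages.
print(ndcg([3, 2, 3, 0, 1, 2, 2, 1, 0, 1]))
```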

26 Results - Psychology

27 Results - Science

28 END

