
1 Web Information Retrieval (Web Science Course)


3 What to Expect
Information Retrieval Basics
– IR Systems
– History of IR
Retrieval Models
– Vector Space Model
Information Retrieval on the Web
– Differences to traditional IR
Selected Papers

4 Information Retrieval Basics

5 Information Retrieval (IR)
The indexing and retrieval of textual documents.
Concerned first with retrieving documents relevant to a query.
Concerned second with retrieving efficiently from large sets of documents.

6 Typical IR Task
Given:
– A corpus of textual natural-language documents.
– A user query in the form of a textual string.
Find:
– A ranked set of documents that are relevant to the query.

7 IR System
[Diagram: a query string and a document corpus are fed into the IR system, which returns ranked documents (1. Doc1, 2. Doc2, 3. Doc3, ...).]

8 Relevance
Relevance is a subjective judgment and may include:
– Being on the proper subject.
– Being timely (recent information).
– Being authoritative (from a trusted source).
– Satisfying the goals of the user and his/her intended use of the information (the information need).

9 Keyword Search
The simplest notion of relevance is that the query string appears verbatim in the document.
A slightly less strict notion is that the words in the query appear frequently in the document, in any order (bag of words).
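A minimal sketch of the two notions in Python (the document and query strings here are made up for illustration):

```python
from collections import Counter

doc = "the bat flew out of the cave at dusk"
query = "bat cave"

# Strict notion: the query string appears verbatim in the document.
verbatim = query in doc                              # False: words not adjacent

# Bag-of-words notion: how often do the query's words occur,
# regardless of order?
counts = Counter(doc.split())
overlap = sum(counts[w] for w in query.split())      # 2: "bat" once, "cave" once
print(verbatim, overlap)
```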

10 Problems with Keywords
May not retrieve relevant documents that include synonymous terms:
– "restaurant" vs. "café"
– "PRC" vs. "China"
May retrieve irrelevant documents that include ambiguous terms:
– "bat" (baseball vs. mammal)
– "Apple" (company vs. fruit)
– "bit" (unit of data vs. act of eating)

11 Intelligent IR
Taking into account the meaning of the words used.
Taking into account the order of words in the query.
Adapting to the user based on direct or indirect feedback.
Taking into account the authority of the source.

12 IR System Architecture
[Diagram: the user need enters as a text query through the User Interface; Text Operations produce the logical view; the remaining components are Query Operations, Indexing, a Text Database Manager maintaining an inverted file index over the text database, Searching, and Ranking, which yield retrieved and ranked documents refined by user feedback.]

13 IR System Components
Text Operations forms index words (tokens):
– Stopword removal
– Stemming
Indexing constructs an inverted index of word-to-document pointers.
Searching retrieves documents that contain a given query token from the inverted index.
Ranking scores all retrieved documents according to a relevance metric.

14 IR System Components (continued)
User Interface manages interaction with the user:
– Query input and document output.
– Relevance feedback.
– Visualization of results.
Query Operations transform the query to improve retrieval:
– Query expansion using a thesaurus.
– Query transformation using relevance feedback.

15 History of IR
1960–70s:
– Initial exploration of text retrieval systems for "small" corpora of scientific abstracts, and law and business documents.
1980s:
– Large document database systems, many run by companies.
1990s:
– Searching FTPable documents on the Internet.
– Searching the World Wide Web.

16 Recent IR History
2000s:
– Link analysis for Web Search
– Automated Information Extraction
– Question Answering
– Multimedia IR
– Cross-Language IR
– Document Summarization

17 Vector Space Retrieval Model

18 Retrieval Models
A retrieval model specifies the details of:
– Document representation
– Query representation
– Retrieval function
A retrieval model determines a notion of relevance, which can be binary or continuous (i.e. ranked retrieval).

19 Preprocessing Steps
Strip unwanted characters/markup (e.g. HTML tags, punctuation, numbers).
Break into tokens (keywords) on whitespace.
Stem tokens to "root" words, e.g. computational → comput.
Remove common stopwords (e.g. a, the, it).
Build inverted index (keyword → list of docs containing it). See the sketch below.
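A minimal sketch of this pipeline in Python; the stopword list and the suffix-stripping stemmer below are toy stand-ins for illustration (a real system would use a full stopword list and a Porter-style stemmer):

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "the", "it", "of", "to", "and"}      # toy stopword list

def stem(token):
    # Toy suffix stripper standing in for a real stemmer,
    # e.g. "computational" -> "comput".
    for suffix in ("ational", "ation", "ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def preprocess(text):
    text = re.sub(r"<[^>]+>", " ", text)               # strip HTML markup
    text = re.sub(r"[^a-z\s]", " ", text.lower())      # strip punctuation/numbers
    tokens = text.split()                              # break on whitespace
    return [stem(t) for t in tokens if t not in STOPWORDS]

def build_inverted_index(docs):
    index = defaultdict(set)                           # keyword -> docs containing it
    for doc_id, text in docs.items():
        for token in preprocess(text):
            index[token].add(doc_id)
    return index
```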

20 The Vector-Space Model
Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
These "orthogonal" terms form a vector space: dimension = t = |vocabulary|.
Each term i in a document or query j is given a real-valued weight w_ij.
Both documents and queries are expressed as t-dimensional vectors: d_j = (w_1j, w_2j, …, w_tj).

21 Graphic Representation
Example:
D_1 = 2T_1 + 3T_2 + 5T_3
D_2 = 3T_1 + 7T_2 + T_3
Q = 0T_1 + 0T_2 + 2T_3
[3-D plot of D_1, D_2, and Q along the axes T_1, T_2, T_3]
Is D_1 or D_2 more similar to Q?
How do we measure the degree of similarity? Distance? Angle? Projection?

22 Term Weights: Term Frequency
More frequent terms in a document are more important, i.e. more indicative of its topic.
f_ij = frequency of term i in document j
We may want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document:
tf_ij = f_ij / max_i{f_ij}

23 Term Weights: Inverse Document Frequency
Terms that appear in many different documents are less indicative of the overall topic.
df_i = document frequency of term i = number of documents containing term i
idf_i = inverse document frequency of term i = log_2(N / df_i), where N is the total number of documents.
idf is an indication of a term's discrimination power. The log is used to dampen the effect relative to tf.

24 TF-IDF Weighting
A typical combined term-importance indicator is tf-idf weighting:
w_ij = tf_ij · idf_i = tf_ij · log_2(N / df_i)
A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
Many other ways of determining term weights have been proposed; experimentally, tf-idf has been found to work well.

25 Computing TF-IDF: An Example
Given a document containing terms with the following frequencies: A(3), B(2), C(1).
Assume the collection contains 10,000 documents, and the document frequencies of these terms are: A(50), B(1300), C(250).
Then:
A: tf = 3/3; idf = log_2(10000/50) = 7.6; tf-idf = 7.6
B: tf = 2/3; idf = log_2(10000/1300) = 2.9; tf-idf = 2.0
C: tf = 1/3; idf = log_2(10000/250) = 5.3; tf-idf = 1.8
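A short sketch that reproduces these numbers from the formulas on the preceding slides (collection size and frequencies taken from the example):

```python
from math import log2

N = 10_000                                  # total documents in the collection
f  = {"A": 3, "B": 2, "C": 1}               # raw term frequencies in the document
df = {"A": 50, "B": 1300, "C": 250}         # document frequencies in the collection

max_f = max(f.values())                     # frequency of the most common term
for term in f:
    tf  = f[term] / max_f                   # normalized term frequency
    idf = log2(N / df[term])                # inverse document frequency
    print(f"{term}: tf = {tf:.2f}, idf = {idf:.1f}, tf-idf = {tf * idf:.1f}")
# A: tf = 1.00, idf = 7.6, tf-idf = 7.6
# B: tf = 0.67, idf = 2.9, tf-idf = 2.0
# C: tf = 0.33, idf = 5.3, tf-idf = 1.8
```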

26 Query Vector
The query vector is typically treated as a document and also tf-idf weighted.
An alternative is for the user to supply weights for the given query terms.

27 Similarity Measure
A similarity measure is a function that computes the degree of similarity between two vectors.
Using a similarity measure between the query and each document:
– It is possible to rank the retrieved documents in order of presumed relevance.
– It is possible to enforce a threshold so that the size of the retrieved set can be controlled.

28 Cosine Similarity Measure
Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths.
CosSim(d_j, q) = (d_j · q) / (|d_j| · |q|)
D_1 = 2T_1 + 3T_2 + 5T_3: CosSim(D_1, Q) = 10 / √((4+9+25)(0+0+4)) = 0.81
D_2 = 3T_1 + 7T_2 + 1T_3: CosSim(D_2, Q) = 2 / √((9+49+1)(0+0+4)) = 0.13
Q = 0T_1 + 0T_2 + 2T_3
[2-D sketch of D_1, D_2, and Q with the angle θ between each document vector and the query]
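A minimal sketch that reproduces the two scores above:

```python
from math import sqrt

def cos_sim(d, q):
    # Inner product of the two weight vectors, normalized by their lengths.
    dot  = sum(x * y for x, y in zip(d, q))
    norm = sqrt(sum(x * x for x in d)) * sqrt(sum(y * y for y in q))
    return dot / norm if norm else 0.0

D1 = [2, 3, 5]                    # weights on T_1, T_2, T_3
D2 = [3, 7, 1]
Q  = [0, 0, 2]
print(round(cos_sim(D1, Q), 2))   # 0.81
print(round(cos_sim(D2, Q), 2))   # 0.13
```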

29 Naïve Implementation
Convert all documents in collection D to tf-idf weighted vectors d_j over the keyword vocabulary V.
Convert the query to a tf-idf weighted vector q.
For each d_j in D, compute the score s_j = CosSim(d_j, q).
Sort documents by decreasing score.
Present the top-ranked documents to the user.
Time complexity: O(|V|·|D|). Bad for large V and D!
|V| = 10,000; |D| = 100,000; |V|·|D| = 1,000,000,000
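A sketch of this naive loop (cos_sim as in the previous sketch; doc_vectors is assumed to map document ids to precomputed tf-idf vectors):

```python
def naive_retrieve(doc_vectors, query_vector, top_k=10):
    # Score every document against the query: O(|V|·|D|) work in total,
    # since each similarity computation touches all |V| vector components.
    scores = [(doc_id, cos_sim(vec, query_vector))
              for doc_id, vec in doc_vectors.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)   # decreasing score
    return scores[:top_k]                                 # top-ranked documents
```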

30 Inverted Index
[Diagram of an inverted index, not preserved in this transcript.]
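As an illustration only (the names and layout here are assumptions, not the slide's), a weighted inverted index and the candidate-document lookup it enables could look like this:

```python
from collections import defaultdict

# keyword -> postings list of (doc_id, tf-idf weight) pairs
inverted_index = defaultdict(list)

def add_document(doc_id, weighted_terms):
    """weighted_terms: mapping term -> tf-idf weight for this document."""
    for term, weight in weighted_terms.items():
        inverted_index[term].append((doc_id, weight))

def candidate_docs(query_terms):
    # Only documents containing at least one query term are touched,
    # avoiding the naive O(|V|·|D|) scan over the whole collection.
    return {doc_id
            for term in query_terms
            for doc_id, _ in inverted_index.get(term, [])}
```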

31 Comments on Vector Space Models
A simple, mathematically based approach.
Considers both local (tf) and global (idf) word occurrence frequencies.
Provides partial matching and ranked results.
Tends to work quite well in practice despite obvious weaknesses.
Allows efficient implementation for large document collections.
Does not require all query terms to appear in a document.

32 Web Search

33 Web Search
The application of IR to HTML documents on the World Wide Web.
Differences from traditional IR:
– Must assemble the document corpus by spidering the web.
– Can exploit the structural layout information in HTML (XML).
– Documents change uncontrollably.
– Can exploit the link structure of the web.

34 Web Search Using IR
[Diagram: a web spider crawls the Web to assemble the document corpus; the IR system takes a query string and returns ranked pages (1. Page1, 2. Page2, 3. Page3, ...).]

35 The World Wide Web
Developed by Tim Berners-Lee in 1990 at CERN to organize research documents available on the Internet.
Combined the idea of documents available by FTP with the idea of hypertext to link documents.
Berners-Lee developed the initial HTTP network protocol, URLs, HTML, and the first "web server."

36 Web Search: Recent History
In 1998, Larry Page and Sergey Brin, Ph.D. students at Stanford, started Google.
Their main advance was the use of link analysis to rank results partially based on authority.

37 Web Challenges for IR
Distributed data: documents spread over millions of different web servers.
Volatile data: many documents change or disappear rapidly (e.g. dead links).
Large volume: billions of separate documents.
Unstructured and redundant data: no uniform structure, HTML errors, up to 30% (near-)duplicate documents.
Quality of data: no editorial control; false information, poor-quality writing, typos, etc.
Heterogeneous data: multiple media types (images, video, VRML), languages, character sets, etc.

38 Growth of Web Pages Indexed
[Chart (SearchEngineWatch, note from Jan 2004): billions of pages indexed over time by Google, Inktomi, AllTheWeb, Teoma, and AltaVista.]
Assuming 20 KB per page, 1 billion pages is about 20 terabytes of data.

39 Graph Structure in the Web
http://www9.org/w9cdrom/160/160.html

40 Selected Papers

41 1. A Taxonomy of Web Search
Andrei Broder, 2002
Query log analysis and a user survey.
Classifies web queries according to their intent into three classes:
– Navigational
– Informational
– Transactional
Discusses how global search engines evolved to deal with web-specific needs.

42 2. Personalizing Search via Automated Analysis of Interests and Activities
Jaime Teevan, Susan Dumais, Eric Horvitz, 2005
Formulates and studies search personalization algorithms in a relevance feedback framework.
Rich models of user interests are built from:
– Previously issued queries
– Previously visited Web pages
– Documents and emails the user has read and created

43 3. Personalized Query Expansion for the Web
Paul Chirita, Claudiu Firan, Wolfgang Nejdl, 2007
Improves Web queries by expanding them.
Five broad techniques for generating the additional query keywords, including:
– Term- and compound-level analysis
– Global co-occurrence statistics
– Use of external thesauri

44 4. Boilerplate Detection using Shallow Text Features
Christian Kohlschütter, Peter Fankhauser, Wolfgang Nejdl, 2010
Boilerplate text is typically not related to the main content.
Analyzes a small set of shallow text features for classifying the individual text elements in a Web page.
Tests the impact of boilerplate removal on retrieval performance.

45 For You to Choose:
1. A Taxonomy of Web Search
2. Personalizing Search via Automated Analysis of Interests and Activities
3. Personalized Query Expansion for the Web
4. Boilerplate Detection using Shallow Text Features

