Document Parsing Paolo Ferragina Dipartimento di Informatica Università di Pisa

Inverted index construction (Sec. 1.2). The basic pipeline: Documents to be indexed ("Friends, Romans, countrymen.") → Tokenizer → Token stream (Friends, Romans, Countrymen) → Linguistic modules → Modified tokens (friend, roman, countryman) → Indexer → Inverted index (postings lists for friend, roman, countryman).
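To make the pipeline concrete, here is a minimal sketch (not from the slides): a toy tokenizer, a lower-casing stand-in for the linguistic modules, and an indexer that builds term → postings.

```python
# Minimal sketch of the pipeline above: tokenizer -> linguistic modules
# (here just lower-casing) -> indexer. Illustrative only.
import re
from collections import defaultdict

def tokenize(text):
    # Split on runs of non-alphanumeric characters; real tokenizers are richer.
    return [t for t in re.split(r"[^A-Za-z0-9]+", text) if t]

def normalize(token):
    # Stand-in for the "linguistic modules" box (no stemming here).
    return token.lower()

def build_inverted_index(docs):
    """docs: dict docID -> text. Returns term -> sorted list of docIDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index[normalize(token)].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "Friends, Romans, countrymen.", 2: "So let it be with Caesar."}
print(build_inverted_index(docs)["countrymen"])   # -> [1]
```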

Parsing a document. What format is it in (PDF, Word, Excel, HTML)? What language is it in? What character set is in use? Each of these is a classification problem, but these tasks are often done heuristically…

Tokenization. Input: "Friends, Romans and Countrymen". Output: the tokens Friends, Romans, Countrymen. A token is an instance of a sequence of characters; each such token is now a candidate for an index entry, after further processing. But what are valid tokens to emit?

Tokenization: terms and numbers. Issues in tokenization: Barack Obama: one token or two? San Francisco? Hewlett-Packard: one token or two? B-52, C++, C#? Numbers? Lebensversicherungsgesellschaftsangestellter = "life insurance company employee" in German!
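As a rough illustration (not the course's tokenizer), two plausible policies already disagree on the cases above; the regexes below are ad-hoc choices, not a recommendation.

```python
# Illustrative only: two tokenization policies give different tokens for the
# tricky cases on this slide. Neither is "the right" answer.
import re

def split_on_nonletters(text):
    return [t for t in re.split(r"[^A-Za-z0-9]+", text) if t]

def keep_internal_hyphens_and_symbols(text):
    # Keep hyphens/+/# inside a token, so C++, C#, B-52 survive as units.
    return re.findall(r"[A-Za-z0-9]+(?:[-+#][A-Za-z0-9+#]*)*", text)

s = "Hewlett-Packard built the B-52 flight software in C++ and C#"
print(split_on_nonletters(s))                 # ... 'B', '52', ... 'C', ... 'C'
print(keep_internal_hyphens_and_symbols(s))   # ... 'B-52', ... 'C++', 'C#'
```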

Stop words. We exclude from the dictionary the most common words (called stop words). Intuition: they have little semantic content (the, a, and, to, be), and there are a lot of them: ~30% of postings are for the top 30 words. But the trend is away from doing this: good compression techniques (covered in a later lecture) mean the space for including stop words in a system is very small; good query-optimization techniques (also a later lecture) mean you pay little at query time for including them; and you need them for phrase queries and titles, e.g., "As we may think".
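A toy sketch of stop-word removal (the stop list below is a made-up fragment); note how the phrase "As we may think" loses almost everything, which is exactly why phrase queries suffer.

```python
# Sketch: dropping stop words at index time (hypothetical tiny stop list).
STOP_WORDS = {"the", "a", "and", "to", "be", "of", "in", "as", "we", "may"}

def remove_stop_words(tokens):
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(remove_stop_words("As we may think".split()))   # -> ['think']
```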

Normalization to terms. We need to "normalize" terms in indexed text and query words into the same form: we want to match U.S.A. and USA. We most commonly implicitly define equivalence classes of terms, e.g., by deleting periods to form a term (U.S.A., USA → USA) or deleting hyphens (anti-discriminatory, antidiscriminatory → antidiscriminatory). But C.A.T. → cat?

Case folding. Reduce all letters to lower case. Exception: upper case in mid-sentence? E.g., General Motors, SAIL vs. sail, Bush vs. bush. Often it is best to lower-case everything, since users will use lowercase regardless of 'correct' capitalization…
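A small sketch combining the normalization rules of the previous slide with case folding (purely illustrative; real systems apply many more rules):

```python
# Sketch of equivalence classing: delete periods, delete hyphens, lower-case.
def normalize(term):
    term = term.replace(".", "")      # U.S.A. -> USA
    term = term.replace("-", "")      # anti-discriminatory -> antidiscriminatory
    return term.lower()               # case folding

for t in ["U.S.A.", "USA", "anti-discriminatory", "C.A.T."]:
    print(t, "->", normalize(t))
# Note the pitfall: C.A.T. collapses to "cat", just as the slide warns.
```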

Thesauri. Do we handle synonyms and homonyms? E.g., by hand-constructed equivalence classes: car = automobile, color = colour. We can rewrite to form equivalence-class terms: when the document contains automobile, index it under car-automobile (and vice versa). Or we can expand a query: when the query contains automobile, look under car as well.
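A sketch of the query-expansion variant, with a hand-built (hypothetical) synonym table:

```python
# Sketch: query-side expansion via hand-constructed equivalence classes.
SYNONYMS = {"car": {"car", "automobile"}, "automobile": {"car", "automobile"},
            "color": {"color", "colour"}, "colour": {"color", "colour"}}

def expand_query(terms):
    expanded = set()
    for t in terms:
        expanded |= SYNONYMS.get(t, {t})   # unknown terms pass through unchanged
    return expanded

print(expand_query(["automobile", "red"]))   # -> {'car', 'automobile', 'red'} (order may vary)
```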

Stemming. Reduce terms to their "roots" before indexing. "Stemming" suggests crude affix chopping and is language-dependent. E.g., automate(s), automatic, automation are all reduced to automat. For example, the sentence "for example compressed and compression are both accepted as equivalent to compress" is stemmed to "for exampl compress and compress ar both accept as equival to compress". The standard algorithm for English is Porter's algorithm.
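For instance, with NLTK's implementation of Porter's stemmer (a sketch, assuming NLTK is installed; any stemmer would do for the illustration):

```python
# Sketch using NLTK's Porter stemmer (assumes: pip install nltk).
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["compressed", "compression", "compress",
         "automate", "automates", "automatic", "automation"]
for w in words:
    print(w, "->", stemmer.stem(w))
# e.g. "compressed" and "compression" both come out as "compress",
# so documents and queries using either form will match.
```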

Lemmatization. Reduce inflectional/variant forms to the base form. E.g., am, are, is → be; car, cars, car's, cars' → car. Lemmatization implies doing "proper" reduction to the dictionary headword form.
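A quick contrast with stemming, sketched with NLTK's WordNet lemmatizer (assumes NLTK plus its WordNet data, e.g. nltk.download('wordnet'), possibly 'omw-1.4' on recent versions):

```python
# Sketch: dictionary-based lemmatization, as opposed to affix chopping.
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("cars"))          # nouns are the default -> "car"
print(lemmatizer.lemmatize("is", pos="v"))   # verbs need pos="v"    -> "be"
```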

Language-specificity. Many of the above features embody transformations that are language-specific and often application-specific. These are "plug-in" addenda to indexing; both open-source and commercial plug-ins are available for handling them.

Statistical properties of text Paolo Ferragina Dipartimento di Informatica Università di Pisa

Statistical properties of texts. Tokens are not distributed uniformly: they follow the so-called "Zipf law". Few tokens are very frequent, a middle-sized set has medium frequency, and many are rare. The 100 most frequent tokens account for about 50% of the text, and many of them are stopwords.

An example of “Zipf curve”

A log-log plot for a Zipf’s curve

The Zipf law, in detail. The k-th most frequent token has frequency f(k) approximately proportional to 1/k; equivalently, the product of the frequency f(k) of a token and its rank k is a constant: k · f(k) = c, i.e., f(k) = c / k. The general law is f(k) = c / k^s, which is scale-invariant: f(b·k) = b^(-s) · f(k).
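One way to see the law empirically (a sketch; corpus.txt is a placeholder for any plain-text collection) is to check that k · f(k) stays roughly constant across ranks:

```python
# Sketch: check k * f(k) ≈ constant on a plain-text file (path is hypothetical).
import re
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower()
freqs = Counter(re.findall(r"[a-z]+", text))
ranked = sorted(freqs.values(), reverse=True)   # f(1) >= f(2) >= ...
for k in (1, 10, 100, 1000):
    if k <= len(ranked):
        print(f"k={k:5d}  f(k)={ranked[k-1]:8d}  k*f(k)={k * ranked[k-1]}")
# For natural-language text the k*f(k) column stays within a small range,
# which is the classic Zipf (s ≈ 1) behaviour.
```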

Distribution vs. cumulative distribution. The sum of the frequencies after the k-th element is ≤ f(k) · k / (s − 1), while the sum up to the k-th element is ≥ f(k) · k (each of the first k terms is at least f(k)). So the cumulative distribution is itself a power law, with a smaller exponent, and is again a straight line on a log-log plot.
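The tail bound can be seen by comparing the sum with an integral, assuming the general law f(k) = c / k^s with s > 1 (a sketch, not spelled out on the slide):

```latex
\sum_{i > k} f(i) \;=\; \sum_{i > k} \frac{c}{i^{s}}
\;\le\; \int_{k}^{\infty} \frac{c}{x^{s}}\,dx
\;=\; \frac{c\,k^{1-s}}{s-1}
\;=\; \frac{f(k)\,k}{s-1}.
```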

Other statistical properties of texts. The number of distinct tokens grows according to the so-called "Heaps law": V(n) ≈ K · n^β, with β < 1 (typically β ≈ 0.5), where n is the total number of tokens. The average token length grows as Θ(log n). Interesting words are the ones with medium frequency (Luhn).
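A sketch of checking Heaps' law on a corpus (again corpus.txt is a placeholder); the last column is only a crude estimate of β, since it ignores the constant K:

```python
# Sketch: measure vocabulary growth V(n) and compare with Heaps' law V(n) ≈ K * n**beta.
import re, math

tokens = re.findall(r"[a-z]+", open("corpus.txt", encoding="utf-8").read().lower())
seen, growth = set(), []
for i, tok in enumerate(tokens, 1):
    seen.add(tok)
    if i % 10000 == 0:
        growth.append((i, len(seen)))
for n, v in growth:
    # log V / log n approaches beta for large n (crude: ignores K).
    print(f"n={n:8d}  V(n)={v:7d}  log V / log n = {math.log(v)/math.log(n):.2f}")
# For English text the last column typically settles around 0.5, i.e. beta ≈ 0.5.
```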