INFORMATION RETRIEVAL TECHNIQUES BY DR. ADNAN ABID


1 INFORMATION RETRIEVAL TECHNIQUES BY DR. ADNAN ABID
Lecture # 22 Spelling Correction

2 ACKNOWLEDGEMENTS The presentation of this lecture has been taken from the following sources: “Introduction to Information Retrieval” by Prabhakar Raghavan, Christopher D. Manning, and Hinrich Schütze; “Managing Gigabytes” by Ian H. Witten, Alistair Moffat, and Timothy C. Bell; “Modern Information Retrieval” by Ricardo Baeza-Yates and Berthier Ribeiro-Neto; “Web Information Retrieval” by Stefano Ceri, Alessandro Bozzon, and Marco Brambilla

3 Outline: Matching trigrams; Computing the Jaccard coefficient; Context-sensitive spell correction; General issues in spell correction

4 Matching trigrams Consider the query lord – we wish to identify words matching 2 of its 3 bigrams (lo, or, rd). The bigram index yields three postings lists: lo → alone, lore, sloth; or → border, lore, morbid; rd → ardent, border, card. A standard postings “merge” will enumerate the terms on these lists; adapt this merge to use the Jaccard (or another) overlap measure.

5 1. ANSWER LIST Append to the answer list the terms that have 2 or more bigrams overlapping with the query. This is possible in a single pass over the query's bigram postings.
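A minimal sketch of this single pass in Python, assuming a toy bigram index shaped like the postings lists above; the index contents, function names, and the overlap threshold parameter are illustrative, not taken from the lecture:

```python
from collections import defaultdict

# Toy bigram index: bigram -> list of dictionary terms containing it
# (mirrors the postings lists on the previous slide).
bigram_index = {
    "lo": ["alone", "lore", "sloth"],
    "or": ["border", "lore", "morbid"],
    "rd": ["ardent", "border", "card"],
}

def bigrams(term):
    """All consecutive character pairs of a term."""
    return [term[i:i + 2] for i in range(len(term) - 1)]

def candidates_by_overlap(query, index, min_overlap=2):
    """Single pass over the query's bigram postings: count how many of the
    query's bigrams each term shares, keep terms meeting the threshold."""
    counts = defaultdict(int)
    for bg in set(bigrams(query)):        # repeated query bigrams counted once
        for term in index.get(bg, []):
            counts[term] += 1
    return [term for term, c in counts.items() if c >= min_overlap]

print(candidates_by_overlap("lord", bigram_index))  # lore and border share >= 2 bigrams
```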

6 Matching trigrams The same lord example as slide 4, revisited in the lecture before the walkthrough is improved using the Jaccard coefficient.

7 2. ANSWER LIST Improve the walkthrough to use the Jaccard coefficient to identify the candidate terms. Heuristic: while computing the Jaccard coefficient we may disregard repeated n-grams in the query term as well as in the current term, since we are only selecting candidate terms here, which we shall process later using edit distance.

8 Computing Jaccard coefficient
X = set of bigrams in the query term, Y = set of bigrams in the current term. |X ∩ Y| = number of postings pointers at the current term (bigrams it shares with the query); |X ∪ Y| = |X| + |Y| − |X ∩ Y|. Threshold: if JC(query term, current term) = |X ∩ Y| / |X ∪ Y| > 0.8, append the current term to the answer list.
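A short sketch of this computation over bigram sets, following the heuristic above of disregarding repeated n-grams; the function name is illustrative, the 0.8 threshold is the one stated on the slide:

```python
def jaccard_bigrams(query_term, current_term):
    """Jaccard coefficient |X ∩ Y| / |X ∪ Y| over the two terms' bigram sets
    (repeated bigrams are disregarded, as in the heuristic above)."""
    x = {query_term[i:i + 2] for i in range(len(query_term) - 1)}
    y = {current_term[i:i + 2] for i in range(len(current_term) - 1)}
    overlap = len(x & y)                           # |X ∩ Y|
    return overlap / (len(x) + len(y) - overlap)   # |X ∪ Y| = |X| + |Y| - |X ∩ Y|

THRESHOLD = 0.8   # only terms whose coefficient exceeds this join the answer list
print(jaccard_bigrams("lord", "lore"))    # 2 shared bigrams, 4 distinct in total -> 0.5
print(jaccard_bigrams("lord", "lords"))   # 3 shared bigrams, 4 distinct in total -> 0.75
```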

9 Joining N-grams with Edit Distance
The n-gram pass gives an answer list containing candidate terms. These terms can then be checked with edit distance. This avoids computing edit distance between the query and every dictionary term, restricting it to the candidate terms in the answer list.
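As a sketch of this second stage, a standard Levenshtein edit distance is computed only against the candidates returned by the n-gram pass; the candidate list below is just the toy example from the earlier slides:

```python
def edit_distance(s, t):
    """Levenshtein distance via dynamic programming, keeping one row at a time."""
    prev = list(range(len(t) + 1))
    for i in range(1, len(s) + 1):
        cur = [i] + [0] * len(t)
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution / match
        prev = cur
    return prev[-1]

# Edit distance is computed only for the n-gram candidates, not the whole dictionary.
query = "lord"
answer_list = ["lore", "border"]   # candidates from the bigram / Jaccard pass above
best = min(answer_list, key=lambda term: edit_distance(query, term))
print(best)                        # 'lore' (distance 1, vs. 3 for 'border')
```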

10 Context-sensitive spell correction
Text: I flew from Lahore to Dubai. Consider the phrase query “flew form Lahore”. We would like to respond Did you mean “flew from Lahore”?, because no documents matched the query phrase.

11 Context-sensitive correction
Need surrounding context to catch this; full NLP is too heavyweight. First idea: retrieve dictionary terms close (in weighted edit distance) to each query term, then try all possible resulting phrases with one word “fixed” at a time: flew from heathrow, fled form heathrow, flea form heathrow, etc. Suggest the alternative that has lots of hits? Hits in the corpus vs. hits in the query logs: it is more appropriate to look for hits in the query logs.
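As a sketch, these one-word-at-a-time variants could be enumerated as below and then ranked by their hit counts; the alternatives callable (close dictionary terms for a word) is an assumption for illustration, not something defined in the lecture:

```python
def one_word_variants(words, alternatives):
    """Yield every phrase obtained by replacing exactly one word with one of
    its close (weighted-edit-distance) dictionary alternatives."""
    for i, word in enumerate(words):
        for alt in alternatives(word):
            if alt != word:
                yield " ".join(words[:i] + [alt] + words[i + 1:])

# For ["flew", "form", "heathrow"] this yields phrases such as "flew from heathrow",
# "fled form heathrow", "flea form heathrow", ... which are then ranked by hits
# in the corpus or, preferably, the query logs.
```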

12 Context-sensitive correction
Suppose that for “flew form heathrow” we have 7 alternatives for flew, 19 for form and 3 for heathrow, i.e. 7 × 19 × 3 possible phrases. Instead, look for the most frequent (in the corpus or in the query log) replacement of the first word, combine it with the second word to form a bigram, choose the most frequent of those, and then combine that with the third word. This reduces the work to far fewer than 7 × 19 × 3 combinations. Alternatively, correct each word separately, which yields only 7 + 19 + 3 candidates. Another alternative is to correct only the misspelled words.
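A possible sketch of this left-to-right heuristic, assuming functions giving the alternatives for each word and hit counts over the corpus or query log; all of these callables are placeholders for illustration:

```python
def correct_phrase(words, alternatives, unigram_hits, bigram_hits):
    """Greedy left-to-right correction: fix the first word by its most frequent
    alternative, then at each later position pick the alternative that forms
    the most frequent bigram with the previously chosen word."""
    # Most frequent (in corpus / query log) replacement of the first word.
    corrected = [max(alternatives(words[0]), key=unigram_hits)]
    for word in words[1:]:
        # Alternative forming the most frequent bigram with the last chosen word.
        best = max(alternatives(word), key=lambda alt: bigram_hits(corrected[-1], alt))
        corrected.append(best)
    return " ".join(corrected)

# For "flew form heathrow" this inspects roughly 7 + 19 + 3 alternatives
# instead of scoring all 7 * 19 * 3 combined phrases.
```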

13 General issues in spell correction
We enumerate multiple alternatives for “Did you mean?” and need to figure out which to present to the user: the alternative hitting the most documents, or the one favored by query log analysis. More generally, rank alternatives probabilistically: argmax_corr P(corr | query). By Bayes' rule, this is equivalent to argmax_corr P(query | corr) · P(corr), where P(query | corr) is the noisy channel model and P(corr) is the language model.
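A sketch of the probabilistic ranking, assuming log-probability functions for the noisy-channel (error) model and the language model are available; both callables are placeholders:

```python
def best_correction(query, candidate_corrections, channel_logprob, language_logprob):
    """argmax over corrections of P(query | corr) * P(corr), in log space:
    channel_logprob models how likely the typed query is given the intended
    correction; language_logprob models how likely the correction itself is
    (e.g. from corpus or query-log frequencies)."""
    return max(
        candidate_corrections,
        key=lambda corr: channel_logprob(query, corr) + language_logprob(corr),
    )
```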

14 Resources
IIR Chapter 3; MG Section 4.2. Efficient spell retrieval: K. Kukich. Techniques for automatically correcting words in text. ACM Computing Surveys 24(4), Dec 1992. J. Zobel and P. Dart. Finding approximate matches in large lexicons. Software – Practice and Experience 25(3), March 1995. M. Tillenius. Efficient Generation and Ranking of Spelling Error Corrections. Master's thesis, Royal Institute of Technology, Sweden. Nice, easy reading on spell correction: Peter Norvig, How to write a spelling corrector.

