ITCS 6265 Information Retrieval and Web Mining Lecture 10: Web search basics

Brief (non-technical) history
- Early keyword-based engines: Altavista, Excite, Infoseek, Inktomi (ca. 1995-1997)
- Sponsored search ranking: Goto.com (morphed into Overture.com → Yahoo!)
  - Your search ranking depended on how much you paid
  - Auction for keywords: casino was expensive!

Brief (non-technical) history
- 1998+: Link-based ranking pioneered by Google
  - Blew away all early engines
  - Great user experience in search of a business model
- Meanwhile, Goto/Overture's annual revenues were nearing $1 billion
- Result: Google added paid-placement "ads" to the side, independent of search results
  - Yahoo followed suit, acquiring Overture (for paid placement) and Inktomi (for search)

[Screenshot: a results page showing algorithmic results alongside ads]

Web search basics
[Architecture diagram: a web spider crawls the Web; the indexer builds the indexes; the search interface answers user queries, drawing on both the content indexes and the ad indexes]

User Needs
Need [Brod02, RL04]:
- Informational – want to learn about something (~40% / 65%), e.g., low cholesterol diet
- Navigational – want to go to that page (~25% / 15%), e.g., United Airlines
- Transactional – want to do something, web-mediated (~35% / 20%)
  - Access a service, e.g., Seattle weather
  - Downloads, e.g., Mars surface images
  - Shop, e.g., Canon S410

How far do people look for results?
[Chart omitted. Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf]

Users' empirical evaluation of results
- Quality of pages varies widely
  - Relevance is not enough
  - Other desirable qualities (non-IR!):
    - Content: trustworthy, diverse, non-duplicated, well maintained
    - Web readability: displays correctly and fast
    - No annoyances: pop-ups, etc.
- Precision vs. recall
  - On the web, recall seldom matters
- What matters:
  - Precision at 1? Precision above the fold?
  - Comprehensiveness – must be able to deal with obscure queries
    - Recall matters when the number of matches is very small
- User perceptions may be unscientific, but are significant over a large aggregate

Users' empirical evaluation of engines
- UI – simple, no clutter, error tolerant
- Pre/post-process tools provided
  - Mitigate user errors (auto spell check, search assist, …)
  - Explicit: search within results, more like this, refine, …
  - Anticipative: related searches
- Deal with idiosyncrasies
  - Web addresses typed in the search box
  - …

The Web document collection
- No design/co-ordination
- Distributed content creation, linking, democratization of publishing
- Content includes truth, lies, obsolete information, contradictions, …
- Unstructured (text, HTML, …), semi-structured (XML, annotated photos), structured (databases), …
- Scale much larger than previous text collections … but corporate records are catching up
- Growth – slowed down from the initial "volume doubling every few months", but still expanding
- Content can be dynamically generated

Spam (Search Engine Optimization)

The trouble with sponsored search …
- It costs money. What's the alternative?
- Search Engine Optimization: "tuning" your web page to rank highly in the algorithmic search results for select keywords
  - Alternative to paying for placement
  - Thus, intrinsically a marketing function
- Performed by companies, webmasters and consultants ("search engine optimizers") for their clients
- Some perfectly legitimate, some very shady

Simplest forms
- First generation engines relied heavily on tf/idf
  - The top-ranked pages for the query maui resort were the ones containing the most maui's and resort's
- SEOs responded with dense repetitions of chosen terms
  - e.g., maui resort maui resort maui resort
  - Often, the repetitions would be in the same color as the background of the web page
    - Repeated terms got indexed by crawlers
    - But were not visible to humans on browsers
- Pure word density cannot be trusted as an IR signal

Variants of keyword stuffing
- Misleading meta-tags, excessive repetition
- Meta-tags = "… London hotels, hotel, holiday inn, hilton, discount, booking, reservation, mp3, britney spears, …"

Cloaking
- Serve fake content to the search engine spider
- The server asks: is this request from a search engine spider? If yes, serve the spam page; if no, serve the real document.
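
The flowchart on the slide reduces to a few lines of server logic. Here is a toy sketch of what a cloaking server does; the crawler user-agent substrings and page contents are illustrative assumptions, not a real spammer's code:

```python
def handle_request(user_agent: str, spam_page: str, real_page: str) -> str:
    """Toy sketch of cloaking: inspect the User-Agent header and serve
    crawler-only content. The crawler signatures here are illustrative."""
    crawler_signatures = ("googlebot", "slurp", "bingbot")
    if any(sig in user_agent.lower() for sig in crawler_signatures):
        return spam_page   # keyword-stuffed page, seen only by the indexer
    return real_page       # what human visitors actually see
```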

More spam techniques
- Doorway pages
  - Pages optimized for a single keyword that redirect to the real target page
- Link spamming
  - Mutual admiration societies, hidden links, awards – more on these later
  - Domain flooding: numerous domains that point or redirect to a target page
- Robots
  - Millions of submissions via Add-URL (to search engines)

The war against spam
- Quality signals – prefer authoritative pages based on:
  - Votes from authors of other pages (linkage signals)
  - Votes from users (usage signals)
- Policing of URL submissions
  - Anti-robot test
- Limits on meta-keywords
- Robust link analysis
  - Ignore statistically implausible linkage (or text)
- Spam recognition by machine learning
  - Training set based on known spam
- Editorial intervention
  - Blacklists
  - Top queries audited
  - Complaints addressed
  - Suspect pattern detection

More on spam
- Web search engines have policies on the SEO practices they tolerate/block, e.g., Google Webmaster Central
- Adversarial IR: the unending (technical) battle between SEOs and web search engines
- Research: AIRWeb – Adversarial Information Retrieval on the Web

Size of the web

What is the size of the web/index?
- Issues
  - The web is really infinite
    - Dynamic content, e.g., calendars
    - Soft 404: the server returns a valid page for any requested URL
  - The static web contains syntactic duplication, mostly due to mirroring (~30%)
- A search engine could easily claim to index 2 billion web documents (e.g., all of them soft-404 pages from Yahoo)
- It seems a better idea to measure the relative sizes of engines instead

A  B = (1/2) * Size A A  B = (1/6) * Size B (1/2)*Size A = (1/6)*Size B  Size A / Size B = (1/6)/(1/2) = 1/3 Sample URLs randomly from A Check if contained in B and vice versa A BA B Each test involves: (i) Sampling (ii) Checking Relative Size from Overlap Given two engines A and B

Sampling URLs
- It is often not possible to sample an engine directly, so we need indirect sampling
- Approach 1: random queries
- Approach 2: random IP addresses

Random URLs from random queries
- Generate a random query: how?
  - Lexicon: 400,000+ words from a web crawl
  - Conjunctive queries: w1 AND w2, e.g., data AND mining
- Get 100 result URLs from engine A
- Randomly choose a document D from the results
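
A minimal sketch of this sampling step. `search_engine_a` is a hypothetical callable standing in for a real engine's interface (query in, list of result URLs out), and the lexicon would come from a web crawl as the slide says:

```python
import random

def random_conjunctive_query(lexicon):
    """Draw two distinct words from the crawl lexicon, e.g. 'data AND mining'."""
    w1, w2 = random.sample(lexicon, 2)
    return f"{w1} AND {w2}"

def sample_url(search_engine_a, lexicon, k=100):
    """Get up to k result URLs for a random query and pick one uniformly."""
    results = search_engine_a(random_conjunctive_query(lexicon))[:k]
    return random.choice(results) if results else None
```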

Query-Based Checking
- Strong query to check whether an engine B has a document D:
  - Download D; get its list of words
  - Use 8 low-frequency words as an AND query to B
  - Check if D is present in the result set
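
A companion sketch for the checking step. Here `doc_freq` (a word-to-document-frequency map from a reference crawl) and `search_engine_b` are hypothetical placeholders; the real procedure would query a live engine:

```python
import re

def strong_query(doc_text, doc_freq, k=8):
    """Build the 'strong query' for D: its k lowest-document-frequency words,
    ANDed together. Words missing from doc_freq are treated as very rare."""
    words = set(re.findall(r"[a-z]+", doc_text.lower()))
    rare = sorted(words, key=lambda w: doc_freq.get(w, 1))
    return " AND ".join(rare[:k])

def engine_has(doc_url, doc_text, doc_freq, search_engine_b):
    """Check whether engine B returns D for D's strong query."""
    return doc_url in search_engine_b(strong_query(doc_text, doc_freq))
```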

Random IP addresses
- Generate random IP addresses
- Find a web server at the given address, if there is one
- Collect all pages from that server; from these, choose a page at random
- Check if the page is present in either engine
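
A hedged sketch of the first two steps only: it probes whether anything accepts connections on port 80. A real study would also exclude reserved address ranges and then fetch and crawl the server's pages:

```python
import random
import socket

def random_ip():
    """A uniformly random IPv4 address (reserved ranges not excluded here)."""
    return ".".join(str(random.randint(0, 255)) for _ in range(4))

def has_web_server(ip, port=80, timeout=1.0):
    """True if something accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False
```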

Summary
- No sampling solution is perfect: each has one bias or another
- Lots of new ideas, but the problem is getting harder
- Quantitative studies are fascinating and a good research problem

Duplicate detection

Duplicate documents
- The web is full of duplicated content
- Strict duplicate detection = exact match
  - Not as common
- But many, many cases of near duplicates
  - E.g., the last-modified date is the only difference between two copies of a page

Duplicate/Near-Duplicate Detection
- Duplication: exact match can be detected with fingerprints
- Near-duplication: approximate match
  - Overview: compute syntactic similarity with an edit-distance measure, and use a similarity threshold to detect near-duplicates
  - E.g., similarity > 80% ⇒ documents are "near duplicates"
  - Not transitive, though sometimes used transitively

Computing Similarity
- Features: segments of a document (natural or artificial breakpoints)
- Shingles (word n-grams):
  - a rose is a rose is a rose → a_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_a
- Similarity measure between two docs (= sets of shingles):
  - Set intersection; specifically, Jaccard: Size_of_Intersection / Size_of_Union
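
This slide translates directly into code; running it on the rose sentence reproduces the shingle set above (three distinct shingles, since a_rose_is_a occurs twice):

```python
def shingles(text, k=4):
    """The set of word k-grams (shingles) of a document."""
    words = text.lower().split()
    return {"_".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(s, t):
    """Size_of_Intersection / Size_of_Union."""
    return len(s & t) / len(s | t)

a = shingles("a rose is a rose is a rose")
# {'a_rose_is_a', 'rose_is_a_rose', 'is_a_rose_is'}
b = shingles("a rose is a rose is not a rose")
print(jaccard(a, b))  # 3 shared shingles out of 6 total -> 0.5
```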

Shingles + Set Intersection
- Computing the exact set intersection of shingles between all pairs of documents is expensive/intractable
- Approximate using a cleverly chosen subset of shingles from each document (a sketch)
- Estimate (size_of_intersection / size_of_union) based on a short sketch
- [Diagram: Doc A → shingle set A → sketch A; Doc B → shingle set B → sketch B; the Jaccard coefficient is estimated from the two sketches]

Sketch of a document
- Create a "sketch vector" (of size ~200) for each document
  - Documents that share ≥ t (say 80%) corresponding vector elements are near duplicates
- For doc D, sketch_D[i] is computed as follows:
  - Let f map all shingles in the universe to 0..2^m (f: a hash function; m = 64, say, so 2^m is large enough)
  - Let π_i be a random permutation on 0..2^m
  - sketch_D[i] = MIN { π_i(f(s)) } over all shingles s in D
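
A sketch of this computation. True random permutations of a 2^64-sized space are impractical, so the example below makes a standard simplifying assumption: each π_i is a random affine map modulo a large prime, and f is Python's built-in hash (which is salted per process, so sketches are only comparable within one run):

```python
import random

P = 2**61 - 1  # a Mersenne prime, standing in for the 2^m hash space

def make_pi():
    """A random affine map x -> (a*x + b) mod P, a cheap stand-in for
    a truly random permutation of the hash space."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: (a * x + b) % P

def sketch(shingle_set, pis):
    """sketch[i] = min over shingles s of pi_i(f(s))."""
    return [min(pi(hash(s) % P) for s in shingle_set) for pi in pis]

def estimated_jaccard(sk1, sk2):
    """Fraction of sketch positions that agree."""
    return sum(x == y for x, y in zip(sk1, sk2)) / len(sk1)

pis = [make_pi() for _ in range(200)]   # 200 permutations => size-200 sketch
```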

Computing Sketch[i] for Doc1
- Start with the 64-bit values f(shingle) for the document's shingles
- Permute the number line with π_i
- Pick the min value
- E.g., before the permutation min = 3; after the permutation min = 2

Test if Doc1.Sketch[i] = Doc2.Sketch[i]
- For each i, compare the sketch values of Document 1 and Document 2: are they equal?
- Test for 200 random permutations: π_1, π_2, …, π_200

However…
- Doc1.Sketch[i] = Doc2.Sketch[i] iff the minimum (under π_i) over the union of the two shingle sets lies in their intersection
- Claim: this happens with probability Size_of_Intersection / Size_of_Union
- Why? π_i is a random permutation, so every shingle in the union is equally likely to receive the minimum value; the two sketch entries agree exactly when that minimum falls on a shared shingle
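
The claim is easy to verify empirically. This small Monte Carlo check (the two example sets are arbitrary choices) draws random permutations of the union and counts how often the two minima coincide:

```python
import random

def agreement_rate(s1, s2, trials=100_000):
    """Estimate P[min of a random permutation of the union lies in both sets]."""
    universe = list(s1 | s2)
    agree = 0
    for _ in range(trials):
        order = random.sample(universe, len(universe))   # a random permutation
        rank = {x: r for r, x in enumerate(order)}
        if min(s1, key=rank.get) == min(s2, key=rank.get):
            agree += 1
    return agree / trials

s1, s2 = {"a", "b", "c", "d"}, {"c", "d", "e"}
print(agreement_rate(s1, s2))          # ≈ 0.4
print(len(s1 & s2) / len(s1 | s2))     # exactly 2/5 = 0.4
```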

Set similarity of sets C_i, C_j
- View sets as columns of a matrix A, with one row for each element in the universe; a_ij = 1 indicates the presence of item i in set j
- Example:
        C_1  C_2
  e1     0    1
  e2     1    0
  e3     1    1
  e4     0    0
  e5     1    1
  e6     0    1
- Jaccard(C_1, C_2) = 2/5 = 0.4

Key Observation
- For columns C_i, C_j, there are four types of rows:
        C_i  C_j
  A:     1    1
  B:     1    0
  C:     0    1
  D:     0    0
- Overload notation: A = # of rows of type A (similarly B, C, D)
- Claim: Jaccard(C_i, C_j) = A / (A + B + C)

"Min" Hashing
- Randomly permute the rows
- Hash h(C_i) = index of the first row (in the permuted order) with a 1 in column C_i
- Surprising property: P[ h(C_i) = h(C_j) ] = Jaccard(C_i, C_j)
- Why? Look down columns C_i, C_j until the first non-type-D row; h(C_i) = h(C_j) exactly when that row is of type A. Both probabilities equal A / (A + B + C)

Min-Hash sketches
- Pick P random row permutations
- MinHash sketch: Sketch_D = list of the P indexes of first rows with a 1 in column C_D
- Similarity of signatures:
  - Let sim[sketch(C_i), sketch(C_j)] = fraction of permutations where the MinHash values agree
  - Observe: E[ sim(sketch(C_i), sketch(C_j)) ] = Jaccard(C_i, C_j)
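
The column/permutation view in code, a minimal sketch matching the example on the next slide: each signature entry is the label of the first row, in permuted order, with a 1 (columns are assumed to contain at least one 1):

```python
import random

def minhash_signatures(matrix, n_perms, seed=0):
    """matrix[r][c] in {0, 1}. Returns sigs[p][c] = 1-based label of the first
    row, in the p-th random row order, having a 1 in column c."""
    rng = random.Random(seed)
    n_rows, n_cols = len(matrix), len(matrix[0])
    sigs = []
    for _ in range(n_perms):
        order = list(range(n_rows))
        rng.shuffle(order)                 # one random permutation of the rows
        sigs.append([next(r + 1 for r in order if matrix[r][c])
                     for c in range(n_cols)])
    return sigs

def sig_similarity(sigs, ci, cj):
    """Fraction of permutations where the MinHash values agree."""
    return sum(row[ci] == row[cj] for row in sigs) / len(sigs)
```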

Example
- Input matrix:
         C_1  C_2  C_3
  R_1     1    0    1
  R_2     0    1    1
  R_3     1    0    0
  R_4     1    0    1
  R_5     0    1    0
- Signatures (entry = label of the first row, in permuted order, with a 1):
                      S_1  S_2  S_3
  Perm 1 = (12345):    1    2    1
  Perm 2 = (54321):    4    5    4
  Perm 3 = (34512):    3    5    4
- Similarities (column pairs 1-2, 1-3, 2-3):
  Sig-Sig: 0, 2/3, 0
  Actual (Jaccard): 0, 2/4, 1/4

All signature pairs
- We now have an extremely efficient method for estimating the Jaccard coefficient for a single pair of documents
- But we still have to estimate O(N^2) coefficients, where N is the number of web pages
- Solutions: use clustering, or other more efficient techniques (one is sketched below)
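
One of those "more efficient solutions", not spelled out on the slide, is locality-sensitive hashing by banding: split each sketch into bands, bucket documents on each band, and compare only documents that collide in some bucket. A hedged sketch, assuming 200-entry sketches split into 50 bands of 4 rows:

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(sketches, bands=50, rows_per_band=4):
    """sketches: doc_id -> sketch vector (here 50 * 4 = 200 entries).
    Documents whose sketches agree on ALL rows of at least one band
    become candidate near-duplicate pairs; all other pairs are skipped."""
    buckets = defaultdict(list)
    for doc_id, sk in sketches.items():
        for b in range(bands):
            band = tuple(sk[b * rows_per_band:(b + 1) * rows_per_band])
            buckets[(b, band)].append(doc_id)
    pairs = set()
    for ids in buckets.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs   # verify these few pairs with the full sketch similarity
```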

More resources
- IIR Chapter 19