
1 Web Basics
Slides adapted from:
Information Retrieval and Web Search, Stanford University, Christopher Manning and Prabhakar Raghavan
CS345A: Data Mining, Winter 2009, Stanford University, Anand Rajaraman and Jeffrey D. Ullman

2 Web search
Due to the large size of the Web, it is not easy to find the needle in the haystack. Solutions:
- Classification
- Early search engines
- Modern search engines

3 Early solutions to web search
Classification of web pages (Yahoo): mostly done by humans; difficult to scale.
Early keyword-based engines, ca. 1995-1997: Altavista, Excite, Infoseek, Inktomi, Lycos.
- Decide how queries match pages.
- Most queries match a large number of pages: which page is more authoritative?
Paid search ranking: Goto.com (aka Overture.com, acquired by Yahoo).
- Your search ranking depended on how much you paid.
- Auction for keywords: "casino" was expensive!

4 Ranking of web pages
1998+: link-based ranking, pioneered by Google.
- Blew away all early engines save Inktomi.
- Great user experience in search of a business model.
Meanwhile, Goto/Overture's annual revenues were nearing $1 billion.

5 Web search overall picture
[Architecture diagram: user queries go to the search engine; a web spider follows links to crawl the Web; an indexer builds the indexes (including ad indexes) used to answer searches.]

6 Key components in web search
- Links and graph: the web is a hyperlinked document collection, i.e., a graph.
- Queries: web queries are different, more varied, and there are a lot of them. How many? 10^8 every day, approaching 10^9.
- Users: users are different, more varied, and there are a lot of them. How many? 10^9.
- Documents: documents are different, more varied, and there are a lot of them. How many? 10^11. Indexed: 10^10.
- Context: context is more important on the web than in many other IR applications.
- Ads and spam.

7 Web as graph
Web graph: node = web page, edge = hyperlink.

8 Why web graph
- Example of a large, dynamic and distributed graph.
- Possibly similar to other complex graphs in social, biological and other systems.
- Reflects how humans organize information (relevance, ranking) and their societies.
- Efficient navigation algorithms.
- Study behavior of users as they traverse the web graph (e-commerce).

9 In-degree and out-degree
In-degree: the number of incoming edges of a node.
Out-degree: the number of outgoing edges of a node.
E.g., node 8 has in-degree 3 and out-degree 0; node 2 has in-degree 2 and out-degree 4.

10 Degree distribution
The degree distribution is the fraction of nodes that have degree i, i.e., P(i) = n_i / n, where n_i is the number of nodes with degree i and n the total number of nodes.
The degree distribution of the web graph obeys a power law: P(i) is proportional to i^(-a).
A study at Notre Dame University reported a = 2.45 for the out-degree distribution and a = 2.1 for the in-degree distribution.
Random graphs, by contrast, have a Poisson degree distribution.

11 Graph example, matlab (or Octave)
% Note: the rows of the adjacency matrix were lost in this transcript;
% the matrix below is an illustrative reconstruction.
% G(i,j) = 1 means an edge from node i to node j.
G = [0 1 1 0 0 0;
     1 0 0 1 0 0;
     0 0 0 1 1 0;
     0 0 0 0 1 1;
     0 0 0 0 0 1;
     0 0 0 0 0 0];
indegree  = sum(G)    % column sums give each node's in-degree
outdegree = sum(G')   % row sums give each node's out-degree
bin = 0:4;
h = hist(indegree, bin);
subplot(1,2,1); bar(bin, h); title('indegree');
h = hist(outdegree, bin);
subplot(1,2,2); bar(bin, h); title('outdegree');  % the bar() call was missing

12 Power law plotted
500 random numbers are generated, following a power law with xmin = 1, alpha = 2.
Subplots C and D are produced using equal bin size (bin size = 5).
To remove the noise in the tail of subplot (D), we need to use logarithmic bin sizes.
Subplot (F) shows a straight line, as desired.
Try the MATLAB program to experiment with the power law.

13 Generate random numbers
Generate uniform random numbers: rand(n,1)
Generate power-law random numbers using the inverse transform method:
n = 500; alpha = 2; xmin = 1;
% generate n random numbers following a power law
rawData = xmin*(1 - rand(n,1)).^(-1/(alpha-1));
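As a sanity check on where that expression comes from (a standard inverse-transform derivation, not part of the original slides): a power law with density $p(x) = \frac{\alpha-1}{x_{\min}} (x/x_{\min})^{-\alpha}$ for $x \ge x_{\min}$ has CDF

$$F(x) = 1 - \left(\frac{x}{x_{\min}}\right)^{-(\alpha-1)},$$

so setting $F(x) = u$ with $u$ uniform on $[0,1)$ and solving for $x$ gives

$$x = x_{\min}\,(1-u)^{-1/(\alpha-1)},$$

which is exactly the expression used in the code above.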

14 Plot the power law data
subplot(3,2,1); scatter(1:n, rawData);
title('(A) Scatter plot of 500 random data');
subplot(3,2,2); scatter(1:n, rawData, rawData.^(0.5), rawData);  % marker size/color scale with value
title('(B) Crowded dots are plotted in smaller size');
b = 5; bins = 1:b:n;
h = hist(rawData, bins);
subplot(3,2,3); plot(bins, h, 'o');   % plot frequency against the bin value, not the bin index
xlabel('value'); ylabel('frequency');
title('(C) Histogram of equal bin size');

15 Loglog plot
subplot(3,2,4);
loglog(bins, h, 'o');               % MATLAB is case-sensitive: loglog, not Loglog
xlabel('value'); ylabel('frequency');
title('(D) Log-log plot of (C)');   % title missing in the transcript; label follows the text on slide 12
binslog(1) = 1;
for j = 1:7
    b2(j) = 2^j;
    binslog(j+1) = binslog(j) + b2(j);   % logarithmic bin edges: 1, 3, 7, 15, ...
end;
subplot(3,2,5);
h = hist(rawData, binslog);
plot(binslog, h, 'o');
title('(E) Histogram of log bin size');
subplot(3,2,6);
h = hist(rawData, binslog);
plot(log10(binslog), log10(h), 'o');
xlabel('value'); ylabel('frequency');
title('(F) log-log plot of (E)');

16 Power law of web graph in 1999
Note that the in-degree and out-degree distributions are slightly different.
The out-degree may be better fitted by a Mandelbrot law.
What about the current web? The ClueWeb data consist of 4 billion web pages.

17 Scale-free networks
A network is scale-free if its degree distribution follows a power law.
The mathematical model behind it: preferential attachment (sketched below).
Many networks obey a power law:
- the Internet at the router and inter-domain level
- citation networks / co-author networks
- collaboration networks of actors
- networks formed by interacting genes and proteins
- ...
- the web graph
- online social networks
- the semantic web
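A minimal sketch of preferential attachment (Barabasi-Albert-style growth), added here as an illustration; the parameters n and m are hypothetical, the graph is undirected, and each new node attaches m = 2 edges to existing nodes chosen with probability proportional to their current degree. The endpoint-list trick below implements exactly that sampling:

% Preferential attachment: new nodes prefer well-connected nodes.
n = 1000; m = 2;
endpoints = [1 2];                 % seed edge 1-2; each node appears once per unit of degree
deg = zeros(n, 1); deg(1) = 1; deg(2) = 1;
for v = 3:n
    % a uniform draw from 'endpoints' picks a node with probability proportional to its degree
    picked = endpoints(randi(numel(endpoints), 1, m));
    for u = picked
        deg(v) = deg(v) + 1; deg(u) = deg(u) + 1;
        endpoints = [endpoints v u];   % record the new edge's two endpoints
    end
end
k = 1:max(deg);
loglog(k, hist(deg, k), 'o'); xlabel('degree'); ylabel('count');

The log-log degree plot comes out roughly as a straight line, i.e., a power law. (For brevity the sketch allows duplicate edges to the same target; that does not change the qualitative behavior.)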

18 Other graph properties
Distance from A to B: the length of the shortest path connecting A to B. (E.g., the distance from node 0 to node 9 in the example graph is 1.)
Length: the average of the distances between all pairs of nodes.
Diameter: the maximum of the distances.
Strongly connected: for any pair of nodes, there is a directed path connecting them.
These quantities can be computed by breadth-first search, as sketched below.
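A minimal MATLAB sketch (an illustration added here, not code from the original slides) that computes all-pairs distances by BFS on an adjacency matrix and then derives the length, the diameter, and strong connectivity; the 3-node cycle graph is a hypothetical example:

% All-pairs shortest-path distances on a small directed graph via BFS.
G = [0 1 0; 0 0 1; 1 0 0];   % hypothetical 3-node cycle: 1 -> 2 -> 3 -> 1
nn = size(G, 1);
D = inf(nn);                  % D(i,j) = distance from node i to node j
for s = 1:nn
    D(s, s) = 0;
    frontier = s;
    while ~isempty(frontier)
        nxt = [];
        for u = frontier
            for v = find(G(u, :))
                if isinf(D(s, v))
                    D(s, v) = D(s, u) + 1;
                    nxt(end+1) = v;   % newly reached node joins the next frontier
                end
            end
        end
        frontier = nxt;
    end
end
offdiag = D(~eye(nn));
len  = mean(offdiag)                      % 'length': average pairwise distance
diam = max(offdiag)                       % diameter: maximum pairwise distance
strongly_connected = all(~isinf(offdiag))

For the cycle above this prints len = 1.5, diam = 2, and strongly_connected = 1.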

19 Small world
It is a 'small world':
- Millions of people, yet separated by "six degrees" of acquaintance relationships.
- Popularized by Milgram's famous experiment (1967).
Mathematically:
- The diameter of the graph is small compared to the overall size N.
- The length is proportional to ln(N), for a fixed average degree.
- (The diameter of a complete graph never grows; it is always 1.)
- This property also holds in random graphs.

20 Bow-tie structure of the Web
Study of 200 million nodes & 1.5 billion links:
- SCC: a strongly connected component in the center.
- Upstream (IN): lots of pages that link to other pages but don't get linked to.
- Downstream (OUT): lots of pages that get linked to but don't link out.
- Tendrils, tubes, islands.
The small-world property is not applicable to the entire web: some parts are unreachable, others have long paths.
Power-law connectivity holds, though: page in-degree (alpha = 2.1), out-degree (alpha = 2.72).

21 Empirical numbers for the bow-tie
- Maximal diameter: 28 for the SCC, 500 for the entire graph.
- Probability of a path between any 2 nodes: about one quarter (0.24).
- Average length: 16 when a directed path exists, 7 for undirected paths.
- Shortest directed path between 2 nodes in the SCC: links on average.

22 Component properties
- Each component is roughly the same size: ~50 million nodes.
- Tendrils are not connected to the SCC, but are reachable from IN and can reach OUT.
- Tubes: directed paths IN -> Tendrils -> OUT.
- Because of the disconnected components, the maximal and average diameter of the whole graph is infinite.

23 Statistics of web graph
- Distribution of incoming and outgoing connections.
- Diameter of the graph: average and maximal length of the shortest path between any two vertices.
- Web sites and the distribution of pages per site. Consider in project: the distribution of concepts/classes per file/site in the semantic web?
- Size of the web graph. Consider in project: what is the size of the semantic web?


25 Web site size
Simple estimates suggest billions of nodes.
The distribution of site sizes, measured by the number of pages, follows a power law distribution (note that the degree distribution also follows a power law).
Observed over several orders of magnitude with an exponent a in the range.

26 Web size
The web keeps growing. But growth is no longer exponential?
Who cares?
- Media, and consequently the user.
- Engine design.
- Engine crawl policy; impact on recall.
What is "size"?
- Number of web servers / web sites?
- Number of pages?
- Terabytes of data available?
- Size of a search engine's index?

27 Difficulties in defining the web size (Sec. 19.5)
- Some servers are seldom connected. Example: your laptop running a web server. Is it part of the web?
- The "dynamic" web is infinite:
  Soft 404s: any URL on some sites returns a valid page.
  Dynamically generated content, e.g., weather forecasts, calendars.
  Any sum of two numbers is its own dynamic page on Google (example: "2+4").
- Deep web content, e.g., all the articles in the nytimes archive.
- Duplicates: the static web contains syntactic duplication, mostly due to mirroring (~30%).

28 What can we attempt to measure? (Sec. 19.5)
The relative sizes of search engines: the notion of a page being indexed is still reasonably well defined. But already there are problems:
- Document extension: e.g., engines index pages not yet crawled by indexing the anchor text pointing at them.
- Document restriction: all engines restrict what is indexed (the first n words, only relevant words, etc.).

29 "Search engine index contains N pages": issues
- Can I claim a page is in the index if I only index its first n bytes? Usually long documents are not fully indexed; bottom parts are ignored.
- Can I claim a page is in the index if I only index anchor text pointing to the page? E.g., the Apple web site may not contain the keyword 'computer', but much of the anchor text pointing to Apple does; hence when people search for 'computer', the Apple page may be returned.
- There used to be (and still are?) billions of pages that are only indexed via anchor text.

30 Indexable web (Sec. 19.5)
The statically indexable web is whatever search engines index.
Different engines have different preferences: max URL depth, max count/host, anti-spam rules, priority rules, etc.
Different engines index different things under the same URL:
- frames (e.g., some frames are navigational and should be indexed in a different way)
- meta-keywords (e.g., put more weight on the title)
- document restrictions, document extensions, ...

31 Relative size from overlap of engines A and B (Sec. 19.5)
Sample URLs randomly from A; check if they are contained in B, and vice versa. Suppose we find:
A ∩ B = (1/2) * Size(A)
A ∩ B = (1/6) * Size(B)
Then (1/2) * Size(A) = (1/6) * Size(B), therefore Size(A) / Size(B) = (1/6) / (1/2) = 1/3.
Each test involves: (i) sampling, (ii) checking.
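A tiny numerical sketch of that estimator (an illustration; the sample sizes and hit counts below are hypothetical):

% Relative size from sampled overlap fractions (capture-recapture style).
nA = 200; hitsAinB = 100;        % 100 of 200 sampled A-URLs found in B -> 1/2
nB = 300; hitsBinA = 50;         % 50 of 300 sampled B-URLs found in A -> 1/6
fracAinB = hitsAinB / nA;        % estimates |A ∩ B| / Size(A)
fracBinA = hitsBinA / nB;        % estimates |A ∩ B| / Size(B)
ratioAtoB = fracBinA / fracAinB  % Size(A)/Size(B) = (1/6)/(1/2) = 1/3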

32 Sampling URLs (Sec. 19.5)
Ideal strategy: generate a random URL and check for containment in each index.
Problem: random URLs are hard to find! It is enough to generate a random URL contained in a given engine.
- Approach 1: generate a random URL contained in a given engine. Suffices for the estimation of relative size.
- Approach 2: random walks / random IP addresses. In theory, these might give us a true estimate of the size of the web (as opposed to just relative sizes of indexes).

33 Random URLs from random queries (Sec. 19.5)
Generate a random query: how?
- Lexicon: 400,000+ words from a web crawl (not an English dictionary).
- Conjunctive queries: w1 AND w2, e.g., vocalists AND rsi.
Get 100 result URLs from engine A. Choose a random URL as the candidate to check for presence in engine B:
- Download the document D and get its list of words.
- Use 8 low-frequency words from D as an AND query to B.
- Check if D is present in B's result set.
A structural sketch of the procedure follows.
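The sketch below only illustrates the control flow: every helper (search_engine_a, check_in_b, fetch_words) is a hypothetical stub standing in for a real engine API, so the script runs but returns made-up answers.

% Hypothetical stubs (illustration only; no real engine API is used):
search_engine_a = @(q) {'http://example.org/p1', 'http://example.org/p2'};  % top URLs for query q
check_in_b      = @(q, url) rand() < 0.5;   % does engine B return 'url' for query q?
fetch_words     = @(url) {'vocalists','rsi','aria','tremolo','etude','vibrato','cadenza','solfege'};
lexicon = {'vocalists','rsi','aria','tremolo'};   % in practice: 400,000+ words from a crawl

q = strjoin(lexicon(randperm(numel(lexicon), 2)), ' AND ');  % random conjunctive query
urls = search_engine_a(q);                 % up to 100 result URLs from engine A
D = urls{randi(numel(urls))};              % random candidate URL
words = fetch_words(D);                    % download D, get its words
strongQ = strjoin(words(1:min(8, numel(words))), ' AND ');   % 8 low-frequency words of D
inB = check_in_b(strongQ, D)               % is D present in B's result set?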

34 Biases induced by random queries (Sec. 19.5)
- Query bias: large documents have a higher probability of being captured by queries. Solution: reject some large documents using, e.g., rejection sampling.
- Ranking bias: the search engine ranks the matched documents and returns only the top k. Solution: use conjunctive queries and fetch all results. Another solution: modify the estimator.
- Checking bias: duplicates and impoverished pages are omitted.
- Document or query restriction bias: an engine might not deal properly with an 8-word conjunctive query.
- Malicious bias: sabotage by the engine.
- Operational problems: time-outs, failures, engine inconsistencies, index modification.

35 Random IP addresses (Sec. 19.5)
- Generate random IP addresses.
- Find a web server at the given address, if there is one.
- Collect all pages from that server.
- From these, choose a page at random.

36 Random IP addresses (Sec. 19.5)
Ignored: empty servers, authorization required, or excluded.
[Lawr99] estimated from observing 2,500 servers:
- 2.8 million IP addresses running crawlable web servers; 16 million total servers; 800 million pages.
- Also estimated use of metadata descriptors: meta tags (keywords, description) in 34% of home pages, Dublin Core metadata in 0.3%.
OCLC, using IP sampling, found 8.7 M hosts in 2001.
Netcraft [Netc02] accessed 37.2 million hosts in July 2002.

37 Advantages & disadvantages (Sec. 19.5)
Advantages:
- Clean statistics.
- Independent of crawling strategies.
Disadvantages:
- Doesn't deal with duplication.
- Many hosts might share one IP, or not accept requests.
- No guarantee all pages are linked to the root page (e.g., employee pages).
- The power law for pages/host generates a bias towards sites with few pages. But the bias can be accurately quantified IF the underlying distribution is understood.
- Potentially influenced by spamming (multiple IPs for the same server to avoid IP blocks).

38 Random walks (Sec. 19.5)
View the Web as a directed graph and build a random walk on it:
- Includes various "jump" rules back to visited sites, so it does not get stuck in spider traps and can follow all links.
- Converges to a stationary distribution; sample from that stationary distribution (a minimal simulation follows).
- Must assume the graph is finite and independent of the walk. These conditions are not satisfied (cookie crumbs, flooding).
- Time to convergence is not really known (may be too long).
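A minimal simulation of such a walk on a toy graph (an illustration; it uses a PageRank-style uniform teleport with probability 0.15, whereas the samplers discussed here jump back to previously visited sites):

% Random walk with teleporting; visit frequencies approximate the stationary distribution.
G = [0 1 1; 0 0 1; 1 0 0];        % hypothetical 3-node web graph
nn = size(G, 1); steps = 1e5; teleport = 0.15;
visits = zeros(nn, 1); v = 1;
for t = 1:steps
    out = find(G(v, :));
    if rand() < teleport || isempty(out)
        v = randi(nn);                % jump to a random node (also escapes spider traps)
    else
        v = out(randi(numel(out)));   % follow a random out-link
    end
    visits(v) = visits(v) + 1;
end
stationary = visits / steps           % empirical stationary distribution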

39 Advantages & disadvantages (Sec. 19.5)
Advantages:
- A "statistically clean" method, at least in theory!
- Could work even for an infinite web (assuming convergence) under certain metrics.
Disadvantages:
- The list of seeds is a problem.
- The practical approximation might not be valid: non-uniform distribution.
- Subject to link spamming.

40 Conclusions (Sec. 19.5)
No sampling solution is perfect. Lots of new ideas...
...but the problem is getting harder.
Quantitative studies are fascinating and a good research problem.

41 Another estimation method
Issue an OR-query of frequent words in a number of languages.
According to such a query, the size of the web was > 21,450,000,000 pages at one measurement and > 25,350,000,000 at a later one.
But the page counts of Google search results are only rough estimates.

42 The Web document collection
- No design/coordination.
- Distributed content creation, linking; democratization of publishing.
- Content includes truth, lies, obsolete information, contradictions, ...
- Unstructured (text, HTML, ...), semi-structured (XML, annotated photos), structured (databases), ...
- Scale much larger than previous text collections... but corporate records are catching up.
- Growth: slowed down from the initial "volume doubling every few months", but still expanding.
- Content can be dynamically generated (see the next slide).

43 Documents: dynamically generated content (deep web)
Dynamic pages are generated from scratch when the user requests them, usually from underlying data in a database. Example: the current status of flight LH 454.
Most (truly) dynamic content is ignored by web spiders: it's too much to index it all.
Actually, a lot of "static" content is also assembled on the fly (ASP, PHP, etc.: headers, date, ads, etc.).

44 Web search overall picture
[The architecture diagram from slide 5 again: user, queries, web spider, links, indexer, indexes, ad indexes, the Web.]

45 Users
- Use short queries (average < 3 terms).
- Rarely use operators.
- Don't want to spend a lot of time composing a query.
- Only look at the first couple of results.
- Want a simple UI, not a search-engine start page overloaded with graphics.
- Extreme variability in user needs, expectations, experience, knowledge, ...: industrial/developing world, English/Estonian, old/young, rich/poor, differences in culture and class.
- One interface for hugely divergent needs.

46 Queries
Queries have a power law distribution (power law again!): a few very frequent queries and a large number of very rare queries.
Examples of rare queries: searches for names, towns, books, etc.

47 Types of queries
- Informational user needs: I need information on something (~40% / 65%). "web service", "information retrieval".
- Navigational user needs: I want to go to this web site (~25% / 15%). "hotmail", "myspace", "United Airlines".
- Transactional user needs: I want to make a transaction (~35% / 20%). Buy something: "MacBook Air"; download something: "Acrobat Reader"; chat with someone: "live soccer chat".
- Gray areas: find a good hub; exploratory search, "see what's there".
Difficult problem: how can the search engine tell what the user's need or intent for a particular query is?

48 How far do people look for results?
40% of users look at the first page only.
(Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf)

49 Users' evaluation of results
Classic IR relevance (as measured by F, or precision and recall) can also be used for web IR:
- Precision: the fraction of retrieved instances that are relevant.
- Recall: the fraction of relevant instances that are retrieved.
[Venn-diagram figure: relevant items lie to the left of the straight line; retrieved items lie within the oval. The red regions represent errors: on the left, the relevant items not retrieved (false negatives); on the right, the retrieved items that are not relevant (false positives). Precision and recall are the quotient of the left green region by, respectively, the oval and the left region.]
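In symbols (the standard definitions, matching the figure's description, with TP, FP, FN the counts of true positives, false positives, and false negatives):

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$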

50 Users' empirical evaluation of results (cont.)
On the web, precision is more important than recall:
- Precision is relative to the top k results (precision at page 1 or page 10? precision for the first 20 results?).
- Comprehensiveness: must be able to deal with obscure queries. Recall matters when the number of matches is very small.
Quality of pages varies widely, so relevance is not enough. Other desirable qualities (non-IR!):
- Content: trustworthy, objective, diverse, non-duplicated, well maintained, coverage of topics for polysemous queries.
- Web readability: displays correctly and fast.
- No annoyances: pop-ups, etc.
User perceptions may be unscientific, but they are significant over a large aggregate.

51 Users' empirical evaluation of engines
- Relevance and validity of results (discussed above).
- UI: simple, no clutter, error tolerant.
- Pre/post-processing tools provided:
  Mitigate user errors (auto spell-check, search assist, ...).
  Explicit: search within results, more like this, refine...
  Anticipative: related searches.
- Deal with idiosyncrasies: web-specific vocabulary (impact on stemming, spell-check, etc.), web addresses typed into the search box.

52 Web search overall picture
[The same architecture diagram as on slides 5 and 44.]

