The Buddhist who understood (your) Desire

Presentation on theme: "The Buddhist who understood (your) Desire"— Presentation transcript:

1

2 The Buddhist who understood (your) Desire
It’s not the consumers’ job to know what they want.

3 Engineering Issues
Crawling
Connectivity serving
Web-scale infrastructure: commodity computing / server farms
Map-Reduce architecture: how to exploit it for efficient indexing and efficient link analysis

4 Crawlers: Main issues
General-purpose crawling
Context-specific crawling: building topic-specific search engines…

5

6

7

8 SPIDER CASE STUDY

9 Web Crawling (Search) Strategy
Starting location(s)
Traversal order: depth first? breadth first? or something else?
Cycles? Coverage? Load?

10 Robot (2)
Some specific issues: What initial URLs to use?
The choice depends on the type of search engine to be built.
For general-purpose search engines, use URLs that are likely to reach a large portion of the Web, such as the Yahoo home page.
For local search engines covering one or several organizations, use the URLs of the home pages of those organizations, and add appropriate domain constraints.

11 Robot (7)
Several research issues about robots:
Fetching more important pages first with limited resources: can use measures of page importance
Fetching web pages in a specified subject area (e.g., movies, sports) to build domain-specific search engines: focused crawling
Efficient re-fetching of web pages to keep the index up to date: keep track of each page's change rate

12 Storing Summaries
Can't store complete page text: the whole WWW doesn't fit on any server
Stop words, stemming
What (compact) summary should be stored?
Per URL: title, snippet
Per word: URL, word number
But look at Google's "Cache" copy… and its "privacy violations"…

13

14 Mercator's way of maintaining the URL frontier
Extracted URLs enter a front queue. Each URL goes into a front queue based on its priority (priority is assigned based on page importance and change rate).
URLs are shifted from front to back queues. Each back queue corresponds to a single host, and each has a time t_e at which that host can be hit again.
URLs are removed from a back queue when the crawler wants a page to crawl.
How to prioritize? Change rate; page importance.
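Below is a minimal, single-process sketch of such a frontier (the class name, the integer priorities, and the fixed politeness delay are illustrative assumptions, not Mercator's actual parameters):

# Mercator-style URL frontier sketch: front queues by priority, back queues by host.
import heapq, time
from collections import deque
from urllib.parse import urlparse

class Frontier:
    def __init__(self, num_priorities=3, politeness_delay=2.0):
        self.front = [deque() for _ in range(num_priorities)]   # one front queue per priority
        self.back = {}       # host -> deque of URLs (one back queue per host)
        self.heap = []       # (t_e, host): earliest time each host may be hit again
        self.delay = politeness_delay

    def add(self, url, priority=0):
        """Extracted URLs enter a front queue chosen by priority."""
        self.front[priority].append(url)

    def _refill(self):
        """Shift URLs from front queues (highest priority first) into per-host back queues."""
        for q in self.front:
            while q:
                url = q.popleft()
                host = urlparse(url).netloc
                if host not in self.back:
                    self.back[host] = deque()
                    heapq.heappush(self.heap, (time.time(), host))
                self.back[host].append(url)

    def next_url(self):
        """Remove a URL from the back queue of a host whose t_e has passed."""
        self._refill()
        while self.heap:
            t_e, host = heapq.heappop(self.heap)
            if self.back[host]:
                time.sleep(max(0.0, t_e - time.time()))       # politeness wait
                url = self.back[host].popleft()
                heapq.heappush(self.heap, (time.time() + self.delay, host))
                return url
            del self.back[host]          # drained host; re-created if new URLs arrive
        return None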

15

16 Robot (4)
How to extract URLs from a web page?
Need to identify all possible tags and attributes that hold URLs:
Anchor tag: <a href="URL" ...> ... </a>
Option tag: <option value="URL" ...> ... </option>
Map: <area href="URL" ...>
Frame: <frame src="URL" ...>
Link to an image: <img src="URL" ...>
Relative path vs. absolute path: <base href= ...>
"Path-ascending crawlers": ascend up the path of the URL to see if there is anything else higher up.
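A small illustration of this kind of extraction using only Python's standard library (the tag-to-attribute table mirrors the list above; the example page and URLs are made up):

# Collect URL-bearing attributes and resolve relative paths against <base href=...>.
from html.parser import HTMLParser
from urllib.parse import urljoin

URL_ATTRS = {"a": "href", "option": "value", "area": "href",
             "frame": "src", "img": "src"}

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "base" and "href" in attrs:          # <base href=...> overrides resolution
            self.base = attrs["href"]
        elif tag in URL_ATTRS and URL_ATTRS[tag] in attrs:
            # resolve relative paths against the (possibly overridden) base URL
            self.urls.append(urljoin(self.base, attrs[URL_ATTRS[tag]]))

page = '<base href="http://example.org/docs/"><a href="../a.html">a</a><img src="pic.png">'
p = LinkExtractor("http://example.org/start.html")
p.feed(page)
print(p.urls)   # ['http://example.org/a.html', 'http://example.org/docs/pic.png']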

17

18 (This was an older characterization)

19

20

21 Focused Crawling
Classifier: is crawled page P relevant to the topic?
An algorithm that maps a page to relevant/irrelevant; semi-automatic; based on page vicinity…
Distiller: is crawled page P likely to lead to relevant pages?
An algorithm that maps a page to likely/unlikely; could be just an A/H computation, taking the HUBS.
The distiller determines the priority of following links off of P.
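One way the classifier and distiller could drive the crawl priority, sketched with hypothetical relevance() and hub_score() helpers standing in for them (not the actual focused-crawling implementation):

# Focused-crawl sketch: distiller score of P sets the priority of P's out-links.
import heapq

def focused_crawl(seed_urls, fetch, extract_links, relevance, hub_score, budget=1000):
    """fetch(url)->page text, extract_links(page)->urls; both are assumed helpers."""
    frontier = [(-1.0, url) for url in seed_urls]    # max-heap via negated priority
    heapq.heapify(frontier)
    seen, relevant_pages = set(seed_urls), []
    while frontier and budget > 0:
        neg_prio, url = heapq.heappop(frontier)
        page = fetch(url)
        budget -= 1
        if relevance(page) > 0.5:                    # classifier: keep relevant pages
            relevant_pages.append(url)
        priority = hub_score(page)                   # distiller: how promising are P's out-links?
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-priority, link))
    return relevant_pages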

22

23

24 Connectivity Server
All the link-analysis techniques need information on who is pointing to whom; in particular, they need the back-link information. The connectivity server provides this. It can be seen as an inverted index:
Forward: page id → ids of the pages it links to
Inverted: page id → ids of the pages linking to it
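A toy sketch of the forward and inverted link maps (page ids and edges are made up):

# Invert the forward adjacency map to obtain back-links.
from collections import defaultdict

forward = {          # page id -> ids of pages it links to
    1: [2, 3],
    2: [3],
    3: [1, 4],
    4: [],
}

inverted = defaultdict(list)     # page id -> ids of pages linking to it
for src, dests in forward.items():
    for dst in dests:
        inverted[dst].append(src)

print(dict(inverted))            # {2: [1], 3: [1, 2], 1: [3], 4: [3]}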

25 Large Scale Indexing

26

27

28 What is the best way to exploit all these machines?
What kind of parallelism?
It can't be fine-grained.
It can't depend on shared memory (which could fail).
Worker machines should largely be allowed to do their work independently.
We may not even know how many (and which) machines will be available…

29

30 Map-Reduce Parallelism
Named after the Lisp constructs map and reduce: (reduce #'fn2 (map #'fn1 list)) runs function fn1 on every item of the list and reduces the resulting list using fn2.
(reduce #'* (map #'1+ '(4 5 6 7 8 9))) = (reduce #'* '(5 6 7 8 9 10)) (= 5*6*7*8*9*10)
(reduce #'+ (map #'primality-test '(num1 num2 …)))
So where is the parallelism? All the map operations can be done in parallel (e.g., you can test the primality of each of the numbers in parallel). The overall reduce operation has to be done after the map operation, but it can also be parallelized: assuming primality-test returns 0 or 1, the reduce can partition the list into k smaller lists, add the elements of each list in parallel, and then add the partial results.
Note that the parallelism in both of the above examples depends on the length of the input: the larger the input list, the more parallel operations you can do in theory.
Map-reduce on clusters of computers involves writing your task in map-reduce form. The cluster-computing infrastructure then "parallelizes" the map and reduce parts using the available pool of machines; while writing the program you don't need to think about how many machines, or which specific machines, are used to do the parallel tasks. An open-source environment that provides such an infrastructure is Hadoop.
Qn: Can we bring map-reduce parallelism to indexing?

31 MapReduce These slides are from Rajaraman & Ullman

32 Single-node architecture
[Diagram: a single node with CPU, memory, and disk; "machine learning, statistics" and "classical" data mining run on this architecture.]

33 Commodity Clusters
Web data sets can be very large: tens to hundreds of terabytes
Cannot mine on a single server (why?)
Standard architecture emerging: cluster of commodity Linux nodes, gigabit Ethernet interconnect
How to organize computations on this architecture? Mask issues such as hardware failure

34 Cluster Architecture
2-10 Gbps backbone between racks; 1 Gbps between any pair of nodes in a rack.
[Diagram: racks of nodes (CPU, memory, disk) connected by switches; each rack contains some number of nodes.]

35 Stable storage
First-order problem: if nodes can fail, how can we store data persistently?
Answer: a distributed file system, which provides a global file namespace. Examples: Google GFS, Hadoop HDFS, Kosmix KFS.
Typical usage pattern: huge files (100s of GB to TB); data is rarely updated in place; reads and appends are common.

36 Distributed File System
Chunk servers: the file is split into contiguous chunks, typically 16-64 MB each. Each chunk is replicated (usually 2x or 3x), with replicas kept in different racks where possible.
Master node (a.k.a. Name Node in HDFS): stores metadata; might be replicated.
Client library for file access: talks to the master to find chunk servers, then connects directly to the chunk servers to access data.

37 Warm up: Word Count We have a large file of words, one word to a line
Count the number of times each distinct word appears in the file Sample application: analyze web server logs to find popular URLs

38 Word Count (2)
Case 1: Entire file fits in memory
Case 2: File too large for memory, but all <word, count> pairs fit in memory
Case 3: File on disk, too many distinct words to fit in memory: sort datafile | uniq -c

39 Word Count (3)
To make it slightly harder, suppose we have a large corpus of documents. Count the number of times each distinct word occurs in the corpus:
words(docs/*) | sort | uniq -c
where words takes a file and outputs the words in it, one to a line.
The above captures the essence of MapReduce, and the great thing is that it is naturally parallelizable.

40 MapReduce: The Map Step
[Diagram: input key-value pairs are fed to map tasks, which emit intermediate key-value pairs.]

41 MapReduce: The Reduce Step
[Diagram: intermediate key-value pairs are grouped by key, and each group is fed to a reduce task that emits output key-value pairs.]

42 MapReduce
Input: a set of key/value pairs. The user supplies two functions:
map(k, v) → list(k1, v1)
reduce(k1, list(v1)) → v2
(k1, v1) is an intermediate key/value pair; the output is the set of (k1, v2) pairs.

43 Word Count using MapReduce
map(key, value):
  // key: document name; value: text of document
  for each word w in value:
    emit(w, 1)

reduce(key, values):
  // key: a word; values: an iterator over counts
  result = 0
  for each count v in values:
    result += v
  emit(key, result)
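For reference, a runnable single-process simulation of this word-count job; the shuffle/group-by-key step is imitated with an in-memory dictionary, so this is only an illustration of the dataflow, not a distributed implementation:

from collections import defaultdict

def map_fn(doc_name, text):
    for word in text.split():
        yield (word, 1)                        # emit (w, 1) per word occurrence

def reduce_fn(word, counts):
    yield (word, sum(counts))                  # total count per word

def mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)                 # shuffle: group intermediate pairs by key
    for key, value in inputs:
        for k1, v1 in map_fn(key, value):
            groups[k1].append(v1)
    out = []
    for k1, values in groups.items():
        out.extend(reduce_fn(k1, values))
    return out

docs = [("d1", "to be or not to be"), ("d2", "to see or not to see")]
print(sorted(mapreduce(docs, map_fn, reduce_fn)))
# [('be', 2), ('not', 2), ('or', 2), ('see', 2), ('to', 4)]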

44 Distributed Execution Overview
[Diagram: the user program forks a master and workers; the master assigns map and reduce tasks. Map workers read input splits and write intermediate results to local disk; reduce workers do remote reads, sort, and write the output files.]

45 Data flow
Input and final output are stored on a distributed file system. The scheduler tries to schedule map tasks "close" to the physical storage location of their input data.
Intermediate results are stored on the local file systems of the map and reduce workers.
The output is often the input to another MapReduce task.

46 Coordination
Master data structures: task status (idle, in-progress, completed).
Idle tasks get scheduled as workers become available.
When a map task completes, it sends the master the locations and sizes of its R intermediate files, one for each reducer; the master pushes this info to the reducers.
The master pings workers periodically to detect failures.

47 Failures
Map worker failure: map tasks completed or in-progress at that worker are reset to idle; reduce workers are notified when a task is rescheduled on another worker.
Reduce worker failure: only in-progress tasks are reset to idle.
Master failure: the MapReduce task is aborted and the client is notified.

48 How many Map and Reduce jobs?
M map tasks, R reduce tasks.
Rule of thumb: make M and R much larger than the number of nodes in the cluster. One DFS chunk per map is common; this improves dynamic load balancing and speeds recovery from worker failure.
Usually R is smaller than M, because the output is spread across R files.

49 Combiners
Often a map task will produce many pairs of the form (k,v1), (k,v2), … for the same key k (e.g., popular words in Word Count).
Network time can be saved by pre-aggregating at the mapper: combine(k1, list(v1)) → v2, usually the same as the reduce function.
This works only if the reduce function is commutative and associative.

50 Partition Function
Inputs to map tasks are created by contiguous splits of the input file.
For reduce, we need to ensure that records with the same intermediate key end up at the same worker.
The system uses a default partition function, e.g., hash(key) mod R.
Sometimes it is useful to override it: e.g., hash(hostname(URL)) mod R ensures that URLs from the same host end up in the same output file.
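A small sketch of such a custom partitioner; crc32 is used here only because it is a stable hash across runs, and R and the URLs are made up:

from urllib.parse import urlparse
from zlib import crc32

R = 4   # number of reduce tasks (illustrative)

def partition(url, R=R):
    host = urlparse(url).netloc
    return crc32(host.encode()) % R      # stable hash of the hostname

for u in ["http://a.com/x", "http://a.com/y", "http://b.org/z"]:
    print(u, "-> reducer", partition(u))
# both a.com URLs land on the same reducer; b.org may land elsewhere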

51 Exercise 1: Host size
Suppose we have a large web corpus. Look at the metadata file: lines of the form (URL, size, date, …).
For each host, find the total number of bytes, i.e., the sum of the page sizes for all URLs from that host.
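One possible map/reduce formulation of this exercise, simulated in-process with made-up metadata records:

from collections import defaultdict
from urllib.parse import urlparse

records = [("http://a.com/x", 100, "2009-01-01"),
           ("http://a.com/y", 250, "2009-01-02"),
           ("http://b.org/z",  40, "2009-01-03")]

def map_fn(record):
    url, size, _date = record
    yield (urlparse(url).netloc, size)       # key: host, value: page size in bytes

def reduce_fn(host, sizes):
    return (host, sum(sizes))                # total bytes for that host

groups = defaultdict(list)                   # simulated shuffle: group by host
for rec in records:
    for host, size in map_fn(rec):
        groups[host].append(size)

print([reduce_fn(h, s) for h, s in groups.items()])
# [('a.com', 350), ('b.org', 40)]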

52 Exercise 2: Distributed Grep
Find all occurrences of the given pattern in a very large set of files

53 Exercise 3: Graph reversal
Given a directed graph as an adjacency list:
src1: dest11, dest12, …
src2: dest21, dest22, …
Construct the graph in which all the links are reversed.
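One possible map/reduce formulation of the reversal, again with the shuffle simulated by a dictionary and a made-up adjacency list:

from collections import defaultdict

graph = {"src1": ["a", "b"], "src2": ["b", "c"]}

def map_fn(src, dests):
    for d in dests:
        yield (d, src)                 # emit reversed edge: key = dest, value = src

def reduce_fn(node, sources):
    return (node, sorted(sources))     # adjacency list of the reversed graph

groups = defaultdict(list)
for src, dests in graph.items():
    for d, s in map_fn(src, dests):
        groups[d].append(s)

print(dict(reduce_fn(n, s) for n, s in groups.items()))
# {'a': ['src1'], 'b': ['src1', 'src2'], 'c': ['src2']}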

54 Exercise 4: Frequent Pairs
Given a large set of market baskets, find all frequent pairs Remember definitions from Association Rules lectures

55 Implementations
Google: not available outside Google.
Hadoop: an open-source implementation in Java; uses HDFS for stable storage. Download:
Aster Data: a cluster-optimized SQL database that also implements MapReduce; made available free of charge for this class.

56 Cloud Computing
Ability to rent computing by the hour, plus additional services, e.g., persistent storage.
We will be using Amazon's "Elastic Compute Cloud" (EC2). Aster Data and Hadoop can both be run on EC2.
In discussions with Amazon to provide access free of charge for the class.

57 Reading
Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, The Google File System

58 [From Lin & Dyer book]

59 Partition the set of documents into "blocks"
Construct an index for each block separately, then merge the indexes.
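A toy sketch of this block-then-merge scheme (documents and tokenization are simplified; postings are plain doc-id lists):

from collections import defaultdict

def index_block(block):
    """Build an inverted index for one block: term -> sorted list of doc ids."""
    idx = defaultdict(list)
    for doc_id, text in block:
        for term in set(text.split()):
            idx[term].append(doc_id)
    return {t: sorted(ids) for t, ids in idx.items()}

def merge_indexes(indexes):
    """Merge per-block indexes by concatenating (and sorting) posting lists per term."""
    merged = defaultdict(list)
    for idx in indexes:
        for term, postings in idx.items():
            merged[term].extend(postings)
    return {t: sorted(p) for t, p in merged.items()}

block1 = [(1, "web crawling and indexing"), (2, "map reduce indexing")]
block2 = [(3, "page rank and link analysis"), (4, "web link structure")]
print(merge_indexes([index_block(block1), index_block(block2)])["web"])   # [1, 4]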

60 (map workers) (reduce workers)

61

62

63

64

65 Other references on Map-Reduce

66 Distributing indexes over hosts
At web scale, the entire inverted index can't be held on a single host. How to distribute?
Split the index by terms, or split the index by documents. The preferred method is to split it by docs (!): each index only points to docs in a specific barrel, with different strategies for assigning docs to barrels.
At retrieval time: compute the top-k docs from each barrel, then merge the top-k lists to generate the final top-k. Result merging can be tricky, so try to punt on it.
Idea: put the most "important" docs in the top few barrels. This way, we can avoid worrying about the other barrels unless the top barrels don't return enough results.
Another idea: split the top 20% and bottom 80% of the doc occurrences into different indexes (short vs. long barrels). Do the search on the short ones first and go to the long ones only as needed.
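A small sketch of the retrieval-time step: top-k per barrel, then a merge of the per-barrel lists (the scores and barrel contents are made up):

import heapq

def topk_in_barrel(barrel_scores, k):
    """barrel_scores: list of (doc_id, score) pairs for one barrel."""
    return heapq.nlargest(k, barrel_scores, key=lambda p: p[1])

def merge_topk(per_barrel_topk, k):
    """Merge the per-barrel top-k lists into a single top-k list."""
    all_hits = [hit for lst in per_barrel_topk for hit in lst]
    return heapq.nlargest(k, all_hits, key=lambda p: p[1])

barrels = [
    [(1, 0.9), (2, 0.4), (3, 0.7)],     # barrel 0
    [(4, 0.8), (5, 0.2)],               # barrel 1
]
k = 2
print(merge_topk([topk_in_barrel(b, k) for b in barrels], k))   # [(1, 0.9), (4, 0.8)]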

67 Dynamic Indexing “simplest” approach

68 Efficient Computation of Pagerank
How to power-iterate on the web-scale matrix?

69 Representing the 'Links' Table
Stored on disk in binary format: source node (32-bit int), outdegree (16-bit int), destination nodes.
[Table of example rows omitted.]
Size for Stanford WebBase: 1.01 GB, assumed to exceed main memory.
How do we split this?

70 Algorithm 1
[Diagram: the sparse Links table maps source nodes to dest nodes; Source and Dest are rank vectors.]
∀s: Source[s] = 1/N
while residual > ε {
  ∀d: Dest[d] = 0
  while not Links.eof() {
    Links.read(source, n, dest1, …, destn)
    for j = 1…n: Dest[destj] = Dest[destj] + Source[source]/n
  }
  ∀d: Dest[d] = c * Dest[d] + (1-c)/N   /* dampening */
  residual = ||Source - Dest||          /* recompute every few iterations */
  Source = Dest
}
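A runnable sketch of Algorithm 1 on a tiny in-memory graph (the graph, damping factor c, and tolerance are illustrative; a real implementation streams the Links table from disk as described above):

# Power iteration with damping, one sequential pass over the links per iteration.
N = 4
links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}   # source -> destination nodes
c, eps = 0.85, 1e-8

source = [1.0 / N] * N                  # Source[s] = 1/N
while True:
    dest = [0.0] * N                    # Dest[d] = 0
    for s, dests in links.items():      # one pass over the Links table
        n = len(dests)
        for d in dests:
            dest[d] += source[s] / n    # Dest[d] += Source[s] / outdegree(s)
    dest = [c * x + (1 - c) / N for x in dest]            # dampening
    residual = sum(abs(a - b) for a, b in zip(source, dest))
    source = dest                       # Dest becomes Source for the next iteration
    if residual < eps:
        break

print([round(x, 4) for x in source])    # converged PageRank vector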

71 Analysis of Algorithm 1
If memory is big enough to hold Source & Dest: IO cost per iteration is |Links|. Fine for a crawl of 24 M pages, but the web was already ~800 M pages [NEC study], up from 320 M pages [same authors].
If memory is big enough to hold just Dest: sort Links on the source field, read Source sequentially during the rank propagation step, and write Dest to disk to serve as Source for the next iteration. IO cost per iteration is |Source| + |Dest| + |Links|.
If memory can't hold Dest: the random access pattern makes the working set = |Dest|. Thrash!!!
Precision of the floating point numbers in the arrays: single precision is good enough.

72 Block-Based Algorithm
Partition Dest into B blocks of D pages each. If memory = P physical pages, then D < P-2, since we need input buffers for Source & Links.
Partition Links into B files: Links_i only has some of the dest nodes for each source, namely dest nodes such that DD*i <= dest < DD*(i+1), where DD = the number of 32-bit integers that fit in D pages.
[Diagram: Source, the sparse Links table, and the partitioned Dest blocks.]

73 Partitioned Link File

74 Block-based Page Rank algorithm

75 Analysis of Block Algorithm
IO cost per iteration = B*|Source| + |Dest| + |Links|*(1+e), where e is the factor by which Links increased in size (e depends on the number of blocks).
The algorithm is ~ a nested-loops join.

76 Comparing the Algorithms

77 Efficient Computation of PageRank: Preprocess
Remove 'dangling' nodes: pages with no children. Then repeat the process, since there are now more danglers.
Stanford WebBase: 25 M pages, 81 M URLs in the link graph; after two prune iterations, 19 M nodes.
Does it really make sense to remove dangling nodes? Are they "less important" nodes? We have to reintroduce the dangling nodes and compute their page ranks in terms of their "fans".
One issue with this model is dangling links. Dangling links are simply links that point to any page with no outgoing links. They affect the model because it is not clear where their weight should be distributed, and there are a large number of them. Often these dangling links are simply pages that we have not downloaded yet, since it is hard to sample the entire web (in our 24 million pages currently downloaded, we have 51 million URLs not downloaded yet, and hence dangling). Because dangling links do not affect the ranking of any other page directly, we simply remove them from the system until all the PageRanks are calculated. After all the PageRanks are calculated, they can be added back in, without affecting things significantly. Notice the normalization of the other links on the same page as a link which was removed will change slightly, but this should not have a large effect.

78 Efficient computation: Prioritized Sweeping
We can use asynchronous iterations, in which an iteration uses some of the values already updated within that same iteration.

79 Summary of Key Points
PageRank: iterative algorithm; rank sinks
Efficiency of computation: memory!
Single-precision numbers; don't represent M* explicitly; break arrays into blocks; minimize IO cost
Number of iterations of PageRank
Weighting of PageRank vs. doc similarity

80

81 System Anatomy High Level Overview
This slide gives a top-level view of how the system works. Most of Google's code is written in C or C++ for efficiency and runs on Solaris or Linux. Downloading of web pages is done by distributed web crawlers (three of them); the URL Server sends lists of URLs to the crawlers. The fetched pages are sent to the Store Server, which compresses them and stores them in the Repository. Every URL is given its own identifier, a docID, assigned when it is first found as a new link on a page. The indexing function is performed by the Indexer and the Sorter. The Indexer does several things: it reads from the Repository, uncompresses the documents, and parses each page. Every page is converted into a set of word occurrences called hits. The Indexer distributes the words into a set of barrels, creating a partially sorted forward index. Another job of the Indexer is to extract all the links from each page and store information about them in an anchor file, which contains everything needed to know where each link points from, where it points to, and the text of the link. The URLresolver reads the anchor file, converts relative links to absolute URLs and then to docIDs, and puts the anchor text into the forward index, associated with the docID the anchor points to. It also generates the links database, consisting of pairs of docIDs, which is used to compute the PageRank of all the documents. The Sorter takes the barrels, which are sorted by docID, and re-sorts them by wordID to generate the inverted index; this is done in place, so little temporary memory is needed. It also produces a list of wordIDs and their offsets into the inverted index. The Searcher runs on a web server and uses the PageRank, the Lexicon, and the inverted index to answer queries.

82 Google Search Engine Architecture
URL Server: provides URLs to be fetched
Crawler: is distributed
Store Server: compresses and stores pages for indexing
Repository: holds pages for indexing (full HTML of every page)
Indexer: parses documents, records words, positions, font size, and capitalization
Lexicon: list of unique words found
HitList: efficient record of word locations + attributes
Barrels: hold (docID, (wordID, hitList*)*)*, sorted; each barrel has a range of words
Anchors: keep information about links found in web pages
URL Resolver: converts relative URLs to absolute
Sorter: generates the Doc Index
Doc Index: inverted index of all words in all documents (except stop words)
Links: stores info about the links to each page (used for PageRank)
PageRank: computes a rank for each page retrieved
Searcher: answers queries
SOURCE: BRIN & PAGE

83 Major Data Structures
Big Files: virtual files spanning multiple file systems, addressable by 64-bit integers; they handle allocation & deallocation of file descriptors, since the OS's facilities are not enough, and support rudimentary compression.
Google's data structures are optimized so that crawling, indexing, and searching can be done at low cost. While CPU time per operation has shrunk, disk seek time has stayed around 10 ms, so Google is designed to avoid going to disk whenever possible; this had a major influence on the design of the data structures.

84 Major Data Structures (2)
Repository: tradeoff between speed & compression ratio; chose zlib (3 to 1) over bzip (4 to 1); requires no other data structure to access it.
The Repository contains the full HTML of every page, each compressed with zlib. Choosing the compression method is a trade-off between speed and compression ratio; they chose zlib even though bzip compressed much better (about 4:1 versus 3:1). Pages are stored one after another, each record holding the docID, error code, length, and URL. The Repository requires no additional structures to access it, and all the other data structures can be rebuilt from it alone.

85 Major Data Structures (3)
Document Index: keeps information about each document; a fixed-width ISAM (index sequential access mode) index; includes various statistics, a pointer into the repository and, if crawled, a pointer to info lists; a compact data structure, so a record can be fetched in 1 disk seek during search.
Each record notes whether the page has been crawled: if so, it points to a list containing the page's URL and title; otherwise it points to a list containing only the URL.

86 Major Data Structures (4)
URLs → docID file: used to convert URLs to docIDs; a list of URL checksums with their docIDs, sorted by checksum; given a URL, a binary search is performed; the conversion is done in batch mode.
The URLresolver uses this file in batch mode, with the whole list held in memory. This is essential: issuing a disk seek for every link would take more than a month to convert the database of 322 million links.

87 Major Data Structures (4)
Lexicon: can fit in memory for a reasonable price; currently 256 MB; contains 14 million words; two parts: a list of the words and a hash table.

88 Major Data Structures (4)
Hit Lists: include position, font & capitalization; account for most of the space used in the indexes; 3 alternatives: simple, Huffman, hand-optimized; the hand encoding uses 2 bytes for every hit.
The hand-optimized encoding distinguishes two kinds of hits, fancy and plain. Fancy hits are those occurring in a URL, title, anchor text, or meta tags; plain hits are everything else. Font size is stored relative to the rest of the document in 3 bits, with the value 111 flagging a fancy hit.

89 Major Data Structures (4)
Hit Lists (2): To represent the number of hits, 8 and 5 bits are used in the two types of index. To save space, the length of the hit list is combined with the wordID in the forward index and with the docID in the inverted index. Larger counts are handled with an escape code, after which the next two bytes hold the actual length.

90 Major Data Structures (5)
Forward Index: partially ordered; uses 64 barrels, each holding a range of wordIDs; requires slightly more storage; each wordID is stored as a relative difference from the minimum wordID of its barrel, which saves considerable time in the sorting.

91 Major Data Structures (6)
Inverted Index: 64 barrels (same as the forward index); for each wordID the Lexicon contains a pointer to the barrel that wordID falls into; the pointer points to a doclist with its hit lists; the order of the docIDs is important: by docID, or by doc word-ranking. Two inverted barrels: the short barrel and the full barrel.
The key question is the order in which documents appear in each word's doclist. Sorting by docID allows fast merging of the doclists of several query words. Sorting by a ranking of the word's occurrence in each document makes single-word queries trivial (take the first entries) and likely puts the answers to multi-word queries near the front, but merging becomes much harder, and any change to the ranking function forces a rebuild of the index. Google chose a compromise: to distinguish page quality for a word in a limited way, two hit lists are kept, one for fancy hits (important occurrences, from titles and anchors) and one for the rest. The first list is checked, and only if it does not yield enough results is the second checked; this way the scheme does not depend on the ranking function. Note that, to return results quickly, sub-optimal results are returned.

92 Major Data Structures (7)
Crawling the Web: a fast, distributed crawling system. The URLserver & crawlers are implemented in Python. Each crawler keeps about 300 connections open; at peak the rate is about 100 pages (roughly 600 KB) per second. Uses an internal cached DNS lookup, synchronized IO to handle events, and a number of queues. Robust & carefully tested.
Crawling is the most fragile application, since it involves interacting with hundreds of thousands of other servers that are beyond the system's control. To handle such a volume of pages, the engine needs a fast, distributed crawling system. Google runs 3 crawlers, each keeping about 300 connections open at once; at peak they manage to download 100 pages per second, about 600 KB. For efficiency, each crawler keeps its own cached DNS lookup table. These objects are quite complex: a page can be in several states (before the name lookup, after connecting, after sending the request, while receiving the response), so the crawler uses synchronized IO and a number of queues. This approach is apparently preferable to letting the C++ runtime handle the parallelism. Running such a system brings lots of phone calls from site owners, some of whom do not know about the protocol that lets them opt out of crawling. Unexpected things happen, for example crawling an online game, a problem that appeared only after a million pages.

93 Major Data Structures (8)
Indexing the Web: parsing should know how to handle errors: HTML typos, kilobytes of zeros in the middle of a tag, non-ASCII characters, HTML tags nested hundreds deep. They developed their own parser; it involved a fair amount of work and did not cause a bottleneck.

94 Major Data Structures (9)
Indexing Documents into Barrels: turning words into wordIDs; an in-memory hash table (the Lexicon); new additions are logged to a file. Parallelization: a shared lexicon of 14 million words, plus a log of all the extra words.

95 Major Data Structures (10)
Indexing the Web: Sorting. Creating the inverted index produces two types of barrels, one for titles and anchors (short barrels) and one for full text (full barrels). Each barrel is sorted separately, with the sorters running in parallel; the sorting is done in main memory.
Ranking looks at the short barrels first and then at the full barrels.

96 Searching Algorithm
1. Parse the query
2. Convert the words into wordIDs
3. Seek to the start of the doclist in the short barrel for every word
4. Scan through the doclists until there is a document that matches all of the search terms
5. Compute the rank of that document
6. If we're at the end of the short barrels, start on the doclists of the full barrel, unless we have enough
7. If we're not at the end of any doclist, go to step 4
8. Sort the documents by rank and return the top K (may jump here after 40k pages)

97 The Ranking System
The information: hit types (position, font size, capitalization), anchor text, PageRank.
Hit types: title, anchor, URL, etc.; small font, large font, etc.

98 The Ranking System (2)
Each hit type has its own weight. Count weights increase linearly with counts at first but quickly taper off; this is the IR score of the doc (IDF weighting??). The IR score is combined with PageRank to give the final rank.
For a multi-word query, a proximity score is computed for every set of hits, with a proximity-type weight; there are 10 grades of proximity.

99 Feedback
A trusted user may optionally evaluate the results, and the feedback is saved.
When modifying the ranking function, we can see the impact of the change on all previous searches that were ranked.

100 Results
Produces better results than major commercial search engines for most searches. Example: the query "bill clinton" returns results from "Whitehouse.gov" addresses of the president; all the results are high-quality pages; no broken links; no Bill without Clinton and no Clinton without Bill.

101 Storage Requirements
Using compression on the repository, about 55 GB suffices for all the data used by the search engine. Most queries can be answered using just the short inverted index. With better compression, a high-quality search engine could fit on the 7 GB drive of a new PC.

102 Storage Statistics Web Page Statistics

103 System Performance
It took 9 days to download 26 million pages (48.5 pages per second). The Indexer & Crawler ran simultaneously; the Indexer runs at 54 pages per second. The sorters run in parallel on 4 machines, and the whole sorting process took 24 hours.

