
Slide 1: Principles of Information Retrieval – Lecture 26: Web Searching
IS 240 – Spring 2006, Prof. Ray Larson
University of California, Berkeley, School of Information Management & Systems
Tuesday and Thursday, 10:30 am - 12:00 pm
http://www.sims.berkeley.edu/academics/courses/is240/s06/

Slide 2: Mini-TREC
Proposed schedule:
– February 14-16: Database and previous queries
– March 2: Report on system acquisition and setup
– March 2: New queries for testing
– April 20: Results due
– April 25: Results and system rankings
– May 9: Group reports and discussion

Slide 3: Term Paper
Should be about 10-15 pages on one of:
– Some area of IR research (or practice) that you are interested in and want to study further
– Experimental tests of systems or IR algorithms
– Building an IR system, testing it, and describing the system and its performance
Due May 9th (last day of class)

Slide 4: Today
Review:
– Web crawling and search issues
– Web search engines and algorithms
Web search processing:
– Parallel architectures (Inktomi – Eric Brewer)
– Cheshire III design
Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer.

Slide 5: Web Crawlers
How do web search engines get all of the items they index? More precisely:
– Put a set of known sites on a queue
– Repeat the following until the queue is empty:
  – Take the first page off of the queue
  – If this page has not yet been processed:
    – Record the information found on this page (positions of words, outgoing links, etc.)
    – Add each link on the current page to the queue
    – Record that this page has been processed
In what order should the links be followed?
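A minimal Python sketch of this crawl loop. The fetch_page and extract_links helpers are assumed to be supplied by the caller (they are not part of the slide); note that the queue discipline answers the slide's closing question: a FIFO queue gives breadth-first visiting, a LIFO stack gives depth-first.

```python
from collections import deque
from urllib.parse import urljoin

def crawl(seed_urls, fetch_page, extract_links):
    # fetch_page(url) -> html text; extract_links(html) -> hrefs.
    # Both are assumed helpers, simplified stand-ins for real fetching/parsing.
    queue = deque(seed_urls)            # FIFO => breadth-first visit order
    processed = set()
    index = {}                          # url -> recorded page information
    while queue:
        url = queue.popleft()           # take the first page off the queue
        if url in processed:            # skip pages already handled
            continue
        html = fetch_page(url)
        links = [urljoin(url, href) for href in extract_links(html)]
        index[url] = {"links": links}   # word positions etc. would go here
        queue.extend(links)             # add each link on this page to the queue
        processed.add(url)              # record that this page has been processed
    return index
```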

Slide 6: Page Visit Order
Animated examples of breadth-first vs. depth-first search on trees:
http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html

Slide 7: Sites Are Complex Graphs, Not Just Trees
[Diagram: six sites, each containing several pages, with links running both within and across sites.]

Slide 8: Web Crawling Issues
– Keep-out signs: a file called robots.txt tells the crawler which directories are off limits
– Freshness: figure out which pages change often, and recrawl these often
– Duplicates, virtual hosts, etc.: convert page contents with a hash function, and compare new pages to the hash table
– Lots of problems: server unavailable, incorrect HTML, missing links, infinite loops
Web crawling is difficult to do robustly!
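To illustrate two of these points, a hedged sketch: Python's standard urllib.robotparser honors the robots.txt keep-out signs, and a content-hash table catches duplicate pages served under different names. The URL is a placeholder and this is not any particular engine's implementation.

```python
import hashlib
from urllib import robotparser

# Fetch and parse the site's robots.txt (placeholder URL).
rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()

seen_hashes = set()

def is_new_content(url, html):
    """Return True only for pages we are allowed to fetch and whose
    content we have not already seen under some other URL."""
    if not rp.can_fetch("*", url):       # honor the keep-out signs
        return False
    digest = hashlib.sha1(html.encode("utf-8")).hexdigest()
    if digest in seen_hashes:            # duplicate / virtual-host copy
        return False
    seen_hashes.add(digest)
    return True
```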

Slide 9: Search Engines
– Crawling
– Indexing
– Querying

Slide 10: Web Search Engine Layers
From a description of the FAST search engine by Knut Risvik:
http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

Slide 11: Standard Web Search Engine Architecture
[Diagram: crawl the web → check for duplicates, store the documents → create an inverted index; at query time, search engine servers take the user query, look up DocIds in the inverted index, and show results to the user.]

Slide 12: More detailed architecture, from Brin & Page '98. Only covers the preprocessing in detail, not the query serving.

Slide 13: Indexes for Web Search Engines
– Inverted indexes are still used, even though the web is so huge
– Most current web search systems partition the indexes across different machines; each machine handles a different part of the data (Google uses thousands of PC-class processors and keeps most things in main memory)
– Other systems duplicate the data across many machines; queries are distributed among the machines
– Most do a combination of these
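A toy illustration of the two strategies combined: documents are hash-partitioned across "machines" (here, plain dicts), each partition is duplicated across replicas, and a query fans out to every partition while rotating reads over the replicas. This is a sketch of the idea only, not how Google or FAST actually shard.

```python
import hashlib
import itertools

class PartitionedIndex:
    def __init__(self, num_partitions, replicas_per_partition):
        # Each "machine" is an in-memory inverted index: term -> set of doc ids.
        self.partitions = [
            [dict() for _ in range(replicas_per_partition)]
            for _ in range(num_partitions)
        ]
        self._rr = itertools.cycle(range(replicas_per_partition))

    def _partition(self, doc_id):
        # Hash the DocId to pick which partition holds the document.
        h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
        return h % len(self.partitions)

    def add(self, doc_id, terms):
        p = self._partition(doc_id)
        for replica in self.partitions[p]:      # write to every duplicate copy
            for t in terms:
                replica.setdefault(t, set()).add(doc_id)

    def search(self, term):
        r = next(self._rr)                      # distribute queries over replicas
        hits = set()
        for partition in self.partitions:       # fan out to all partitions
            hits |= partition[r].get(term, set())
        return hits
```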

Slide 14: Search Engine Querying
In this example, the data for the pages is partitioned across machines. Additionally, each partition is allocated multiple machines to handle the queries.
– Each row can handle 120 queries per second
– Each column can handle 7M pages
– To handle more queries, add another row
From a description of the FAST search engine by Knut Risvik:
http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

Slide 15: Querying – Cascading Allocation of CPUs
A variation on this that produces a cost savings:
– Put high-quality/common pages on many machines
– Put lower-quality/less common pages on fewer machines
– A query goes to the high-quality machines first
– If no hits are found there, go to the other machines
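The cascade amounts to a simple fallback loop. In this sketch the tier ordering and the minimum-hit threshold are assumptions, not details from the slide:

```python
def cascaded_search(query, tiers, min_hits=1):
    """tiers: ordered list of search functions, best/most-common pages first.
    Each tier function returns a list of results for the query."""
    for search_tier in tiers:
        results = search_tier(query)
        if len(results) >= min_hits:   # enough hits on the good machines: stop
            return results
    return []                          # no tier had hits
```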

Slide 16: Google
Google maintains (probably) the world's largest Linux cluster (over 15,000 servers).
These are partitioned between index servers and page servers:
– Index servers resolve the queries (massively parallel processing)
– Page servers deliver the results of the queries
Over 8 billion web pages are indexed and served by Google.

Slide 17: Search Engine Indexes
Starting points for users include:
– Manually compiled lists (directories)
– Page "popularity": frequently visited pages (in general), and frequently visited pages as a result of a query
– Link "co-citation": which sites are linked to by other sites?

Slide 18: Starting Points – What Is Really Being Used?
Today's search engines combine these methods in various ways:
– Integration of directories: today most web search engines integrate categories into the results listings (Lycos, MSN, Google)
– Link analysis: Google uses it; others are also using it. Words on the links seem to be especially useful
– Page popularity: many use DirectHit's popularity rankings

Slide 19: Web Page Ranking
Varies by search engine:
– Pretty messy in many cases
– Details usually proprietary and fluctuating
Combining subsets of:
– Term frequencies
– Term proximities
– Term position (title, top of page, etc.)
– Term characteristics (boldface, capitalized, etc.)
– Link analysis information
– Category information
– Popularity information

Slide 20: Ranking – Hearst '96
Proximity search can help get high-precision results when a query has more than one term:
– Combine Boolean search with passage-level proximity
– Showed significant improvements when retrieving the top 5, 10, 20, or 30 documents
– Results reproduced by Mitra et al. '98
– Google uses something similar
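One concrete way to realize passage-level proximity (an illustrative reconstruction in the spirit of the slide, not necessarily Hearst's exact method): after a Boolean AND filter, score each document by the smallest window of word positions that covers all query terms, smaller windows meaning tighter proximity.

```python
import heapq

def min_cover_window(positions_by_term):
    """positions_by_term: one sorted list of word positions per query term
    (the Boolean AND has already guaranteed every list is non-empty).
    Returns the size of the smallest window containing at least one
    position of every term."""
    # Heap entries: (position, term index, index within that term's list).
    heads = [(plist[0], i, 0) for i, plist in enumerate(positions_by_term)]
    heapq.heapify(heads)
    right = max(pos for pos, _, _ in heads)   # rightmost of the current picks
    best = float("inf")
    while True:
        left, term, idx = heapq.heappop(heads)
        best = min(best, right - left + 1)    # window = [left, right]
        if idx + 1 == len(positions_by_term[term]):
            return best                       # one term's positions exhausted
        nxt = positions_by_term[term][idx + 1]
        right = max(right, nxt)
        heapq.heappush(heads, (nxt, term, idx + 1))
```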

Slide 21: Ranking – Link Analysis
Assumptions:
– If the pages pointing to this page are good, then this is also a good page
– The words on the links pointing to this page are useful indicators of what this page is about
References: Page et al. '98, Kleinberg '98

Slide 22: Ranking – Link Analysis
Why does this work?
– The official Toyota site will be linked to by lots of other official (or high-quality) sites
– The best Toyota fan-club site probably also has many links pointing to it
– Lower-quality sites do not have as many high-quality sites linking to them

Slide 23: Ranking – PageRank
Google uses PageRank. Assume page A has pages T1...Tn pointing to it (i.e., citations). The parameter d is a damping factor, which can be set between 0 and 1; d is usually set to 0.85. C(A) is defined as the number of links going out of page A. The PageRank of page A is given by:
PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one.
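The formula translates directly into an iterative fixed-point computation. A sketch in Python (the tolerance, the iteration cap, and the handling of pages without outlinks are implementation choices, not details from the slide):

```python
def pagerank(links, d=0.85, tol=1.0e-6, max_iter=100):
    """links: dict mapping each page to the list of pages it links to.
    Iterates PR(A) = (1-d) + d * sum(PR(T)/C(T) for T linking to A)
    until the scores stop changing."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    pr = {p: 1.0 for p in pages}
    out_degree = {p: len(links.get(p, [])) for p in pages}
    for _ in range(max_iter):
        new_pr = {p: 1.0 - d for p in pages}
        for page, targets in links.items():
            if out_degree[page] == 0:
                continue                       # sink pages contribute nothing here
            share = d * pr[page] / out_degree[page]
            for t in targets:
                new_pr[t] += share             # T contributes PR(T)/C(T) to each target
        delta = sum(abs(new_pr[p] - pr[p]) for p in pages)
        pr = new_pr
        if delta < tol:
            break
    return pr
```

One caveat worth noting: with the (1-d) constant as written, the scores sum to roughly the number of pages; the probability-distribution property the slide mentions holds exactly for the variant that uses (1-d)/N instead.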

Slide 24: PageRank
[Diagram: pages T1-T8, with illustrative scores (T1: 0.725, T8: 2.46625, the rest 1), plus pages X1 and X2, all pointing to page A, which scores 4.2544375.]
Note: these are not real PageRanks, since they include values >= 1.

Slide 25: PageRank
– Similar to calculations used in scientific citation analysis (e.g., Garfield et al.) and social network analysis (e.g., Wasserman et al.)
– Similar to other work on ranking (e.g., the hubs and authorities of Kleinberg et al.)
– How is Amazon similar to Google in terms of the basic insights and techniques of PageRank?
– How could PageRank be applied to other problems and domains?

Slide 26: Today
Review:
– Web crawling and search issues
– Web search engines and algorithms
Web search processing:
– Parallel architectures (Inktomi – Eric Brewer)
– Cheshire III design
Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer.

Slides 27-46: [Image-only slides with no transcript text; per the outline above, this segment presented Eric Brewer's material on Inktomi's parallel web search architecture.]

Slide 47: Digital Library Grid Initiatives – Cheshire3 and the Grid
Ray R. Larson, University of California, Berkeley, School of Information Management and Systems
Rob Sanderson, University of Liverpool, Dept. of Computer Science
Thanks to Dr. Eric Yen and Prof. Michael Buckland for parts of this presentation.
Presentation from the DLF Forum, April 2005.

Slide 48: Overview
The Grid, text mining, and digital libraries:
– Grid architecture
– Grid IR issues
Cheshire3 – bringing search to Grid-based digital libraries:
– Overview
– Grid experiments
– Cheshire3 architecture
– Distributed workflows

Slide 49: Grid Architecture (Dr. Eric Yen, Academia Sinica, Taiwan)
[Layered diagram, top to bottom:
– Applications: climate, chemical engineering, high energy physics, cosmology, astrophysics, combustion, ...
– Application Toolkits: data grid, remote computing, remote visualization, collaboratories, portals, remote sensors, ...
– Grid Services (middleware): protocols, authentication, policy, instrumentation, resource management, discovery, events, etc.
– Grid Fabric: storage, networks, computers, display devices, etc., and their associated local services]

Slide 50: Grid Architecture (ECAI/AS Grid Digital Library Workshop)
[The same layered diagram, with the Applications layer extended to include humanities computing, digital libraries, and bio-medical applications, and the Application Toolkits layer extended with text mining, metadata management, and search & retrieval.]

Slide 51: Grid-Based Digital Libraries
– Large-scale distributed storage requirements and technologies
– Organizing distributed digital collections
– Shared metadata: standards and requirements
– Managing distributed digital collections
– Security and access control
– Collection replication and backup
– Distributed information retrieval issues and algorithms

Slide 52: Grid IR Issues
– We want to preserve the same retrieval performance (precision/recall) while hopefully increasing efficiency (i.e., speed)
– Very large-scale distribution of resources is a challenge for sub-second retrieval
– Unlike most other typical Grid processes, IR is potentially less computing-intensive and more data-intensive
– In many ways, Grid IR replicates the process (and problems) of metasearch or distributed search

Slide 53: Cheshire3 Overview
An XML information retrieval engine:
– 3rd generation of the UC Berkeley Cheshire system, co-developed at the University of Liverpool
– Uses Python for flexibility and extensibility, but imports C/C++-based libraries for processing speed
– Standards-based: XML, XSLT, CQL, SRW/U, Z39.50, and OAI, to name a few
– Grid-capable: uses distributed configuration files, workflow definitions, and (currently) PVM to scale from one machine to thousands of parallel nodes
– Free and open source software (GPL licence)
– http://www.cheshire3.org/ (under development!)

Slide 54: Cheshire3 Server Overview
[Architecture diagram: users and remote systems reach the Cheshire3 server over the network through protocol handlers (Z39.50, SOAP, OAI, SRW, UDDI/WSRP, OpenURL, JDBC, native calls, and an Apache interface). Inside the server, indexing, search, scan, clustering, result-set, and transformer/normalization components operate over local databases and indexes, driven by XML configuration and metadata; authentication and config-and-control components mediate access via user and access information stores.]

Slide 55: Cheshire3 Grid Tests
– Running on a 30-processor cluster in Liverpool using PVM (Parallel Virtual Machine)
– Using 16 processors with one "master" and 22 "slave" processes, we were able to parse and index MARC data at about 13,000 records per second
– On a similar setup, 610 MB of TEI data can be parsed and indexed in seconds

Slide 56: SRB and SDSC Experiments
– We are working with SDSC to include SRB support
– We are planning to continue working with SDSC and to run further evaluations using the TeraGrid server(s), through a "small" grant for 30,000 CPU hours
– SDSC's TeraGrid cluster currently consists of 256 IBM cluster nodes, each with dual 1.5 GHz Intel Itanium 2 processors, for a peak performance of 3.1 teraflops. The nodes are equipped with four gigabytes (GB) of physical memory per node. The cluster runs SuSE Linux and uses Myricom's Myrinet cluster interconnect network.
– Planned large-scale test collections include NSDL, the NARA repository, CiteSeer, and the "million books" collections of the Internet Archive

Slide 57: Cheshire3 Object Model
[Diagram: a Server holds Databases, configured via ConfigStores. A DocumentGroup feeds Documents into the ingest process, where PreParsers and a Parser produce Records; Extracters and Normalisers derive Terms that go into Indexes backed by an IndexStore. Records are kept in a RecordStore and Documents in a DocumentStore. At search time, a Protocol Handler turns a Query against the Indexes into a ResultSet of Records, and Transformers convert Records back into Documents for delivery. A UserStore holds User objects.]

Slide 58: Cheshire3 Data Objects
– DocumentGroup: a collection of Document objects (e.g., from a file, directory, or external search)
– Document: a single item, in any format (e.g., PDF file, raw XML string, relational table)
– Record: a single item, represented as parsed XML
– Query: a search query, in the form of CQL (an abstract query language for information retrieval)
– ResultSet: an ordered list of pointers to records
– Index: an ordered list of terms extracted from Records

Slide 59: Cheshire3 Process Objects
– PreParser: given a Document, transform it into another Document (e.g., PDF to text, text to XML)
– Parser: given a Document as a raw XML string, return a parsed Record for the item
– Transformer: given a Record, transform it into a Document (e.g., via XSLT, from XML to PDF, or XML to relational table)
– Extracter: extract terms of a given type from an XML sub-tree (e.g., extract dates, keywords, exact string values)
– Normaliser: given the results of an Extracter, transform the terms while maintaining the data structure (e.g., a case normaliser)
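Read together with the data objects on the previous slide, these process objects compose into an ingest pipeline. A hedged Python sketch of that pattern follows; the function and method names here are illustrative stand-ins for the described roles, not Cheshire3's actual API.

```python
def ingest(document_group, preparsers, parser, extracter, normaliser,
           record_store, index_store):
    """Illustrative ingest pipeline mirroring the object model:
    Document -> PreParsers -> Parser -> Record -> Extracter ->
    Normaliser -> Index. Each object is assumed to expose a single
    processing method; this is a simplification for the sketch."""
    for document in document_group:
        for pp in preparsers:                       # e.g. PDF -> text -> XML
            document = pp.process_document(document)
        record = parser.process_document(document)  # raw XML -> parsed Record
        record_id = record_store.store(record)      # persist the Record
        terms = extracter.process_record(record)    # e.g. keywords, dates
        terms = normaliser.process_terms(terms)     # e.g. case-folding
        index_store.store_terms(record_id, terms)   # add terms to the Index
```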

Slide 60: Cheshire3 Abstract Objects
– Server: a logical collection of databases
– Database: a logical collection of Documents, their Record representations, and Indexes of extracted terms
– Workflow: a 'meta-process' object that takes a workflow definition in XML and converts it into executable code

Slide 61: Workflow Objects
– Workflows are first-class objects in Cheshire3 (though not represented in the model diagram)
– All Process and Abstract objects have individual XML configurations, with a common base schema plus extensions
– We can treat configurations as Records and store them in regular RecordStores, allowing access via regular IR protocols

Slide 62: Workflow References
– Workflows contain a series of instructions to perform, with references to other Cheshire3 objects
– Reference is via pseudo-unique identifiers ... pseudo because they are unique within the current context (Server vs. Database)
– Workflows are objects, so this enables server-level workflows to call database-specific workflows with the same identifier

Slide 63: Distributed Processing
– Each node in the cluster instantiates the configured architecture, potentially through a single ConfigStore
– Master nodes then run a high-level workflow to distribute the processing amongst slave nodes by reference to a subsidiary workflow
– As object interaction is well defined in the model, the result of a workflow is equally well defined. This allows for the easy chaining of workflows, either locally or spread throughout the cluster

Slide 64: Workflow Example 1
[XML workflow definition not preserved in this transcript; the surviving fragments name an object of type workflow.SimpleWorkflow and a log message "Starting Load".]

Slide 65: Workflow Example 2
[XML workflow definition not preserved in this transcript; the surviving fragments again name workflow.SimpleWorkflow, with log messages "Unparsable Record" and "Loaded Record", suggesting per-record error handling during the load.]
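For orientation only, a guess at the general shape these lost definitions likely had, reconstructed from the surviving fragments and from slide 60's description of workflows as XML definitions; every element and attribute name below is an assumption, not verified Cheshire3 schema.

```xml
<!-- Illustrative reconstruction only: the real slides' XML was lost
     in this transcript, and these element names are assumptions. -->
<subConfig type="workflow" id="loadWorkflow">
  <objectType>workflow.SimpleWorkflow</objectType>
  <workflow>
    <log>Starting Load</log>
    <!-- for each incoming document: parse, then store the record -->
    <object type="parser" function="process_document"/>
    <object type="recordStore" function="create_record"/>
    <log>Loaded Record</log>
    <!-- a failed parse would instead log "Unparsable Record" -->
  </workflow>
</subConfig>
```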

Slide 66: Workflow Standards
Cheshire3 workflows do not conform to any standard schema. This is intentional:
– Workflows are specific to, and dependent on, the Cheshire3 architecture
– They replace the distribution of lines of code for distributed processing
– They replace many lines of code in general
They need to be easy to understand and create; a GUI workflow builder is coming (web and standalone).

Slide 67: External Integration
We are looking at integration with existing cross-service workflow systems, in particular Kepler/Ptolemy. Integration is possible at two levels:
– Cheshire3 as a service (black box) ... identify a workflow to call
– A Cheshire3 object as a service (duplicating existing workflow functionality) ... but recall the access-speed issue

Slide 68: Conclusions
– Scalable Grid-based digital library services can be created to support very large collections with improved efficiency
– The Cheshire3 IR and DL architecture can provide Grid (or single-processor) services for next-generation DLs
– Available as open source via http://cheshire3.sourceforge.net or http://www.cheshire3.org/

