Presentation transcript:

SLIDE 1 IS 240 – Spring 2006: Principles of Information Retrieval, Lecture 26: Web Searching
Prof. Ray Larson, University of California, Berkeley, School of Information Management & Systems
Tuesday and Thursday 10:30 am - 12:00 pm, Spring 2006

SLIDE 2 IS 240 – Spring 2006: Mini-TREC Proposed Schedule
– February – Database and previous Queries
– March 2 – Report on system acquisition and setup
– March 2 – New Queries for testing…
– April 20 – Results due
– April 25 – Results and system rankings
– May 9 – Group reports and discussion

SLIDE 3 IS 240 – Spring 2006: Term Paper
Should be about pages on:
– Some area of IR research (or practice) that you are interested in and want to study further
– Experimental tests of systems or IR algorithms
– Build an IR system, test it, and describe the system and its performance
Due May 9th (last day of class)

SLIDE 4 IS 240 – Spring 2006: Today
Review
– Web Crawling and Search Issues
– Web Search Engines and Algorithms
Web Search Processing
– Parallel Architectures (Inktomi – Eric Brewer)
– Cheshire III Design
Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer

SLIDE 5 IS 240 – Spring 2006: Web Crawlers
How do the web search engines get all of the items they index? More precisely:
– Put a set of known sites on a queue
– Repeat the following until the queue is empty:
  – Take the first page off of the queue
  – If this page has not yet been processed:
    – Record the information found on this page (positions of words, links going out, etc.)
    – Add each link on the current page to the queue
    – Record that this page has been processed
In what order should the links be followed?
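A minimal sketch of the queue-based crawl loop above, assuming a hypothetical fetch_page(url) helper that returns a page's recorded information and its outgoing links (that helper is not part of the slides):

```python
from collections import deque

def crawl(seed_urls, fetch_page, max_pages=1000):
    """Breadth-first crawl; fetch_page(url) -> (page_info, out_links) is assumed."""
    queue = deque(seed_urls)      # put a set of known sites on a queue
    processed = set()             # pages that have already been handled
    collected = {}                # url -> recorded information (words, links, ...)

    while queue and len(processed) < max_pages:
        url = queue.popleft()     # take the first page off of the queue
        if url in processed:      # skip pages already processed
            continue
        page_info, out_links = fetch_page(url)
        collected[url] = page_info    # record the information found on this page
        queue.extend(out_links)       # add each link on the current page to the queue
        processed.add(url)            # record that this page has been processed
    return collected
```

Using a FIFO queue as above gives breadth-first visiting order; swapping the deque for a stack would give depth-first order, which is the choice the next slide raises.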

SLIDE 6 IS 240 – Spring 2006: Page Visit Order
Animated examples of breadth-first vs. depth-first search on trees:

SLIDE 7 IS 240 – Spring 2006: Sites Are Complex Graphs, Not Just Trees
[Diagram: six sites, each containing several pages, with links both within and across sites]

SLIDE 8 IS 240 – Spring 2006: Web Crawling Issues
Keep-out signs
– A file called robots.txt tells the crawler which directories are off limits
Freshness
– Figure out which pages change often
– Recrawl these often
Duplicates, virtual hosts, etc.
– Convert page contents with a hash function
– Compare new pages to the hash table
Lots of problems
– Server unavailable
– Incorrect HTML
– Missing links
– Infinite loops
Web crawling is difficult to do robustly!
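A minimal sketch of two of these safeguards using the Python standard library: honouring robots.txt via urllib.robotparser, and detecting duplicate pages by hashing their contents:

```python
import hashlib
from urllib import robotparser
from urllib.parse import urlparse

def allowed_by_robots(url, user_agent="*"):
    """Honour the 'keep out signs': check the site's robots.txt before fetching."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()                                   # fetch and parse robots.txt
    return rp.can_fetch(user_agent, url)

seen_hashes = set()

def is_duplicate(page_contents):
    """Convert page contents with a hash function, compare against pages seen so far."""
    digest = hashlib.sha1(page_contents.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False
```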

SLIDE 9 IS 240 – Spring 2006: Search Engines
Crawling, Indexing, Querying

SLIDE 10 IS 240 – Spring 2006: Web Search Engine Layers
[Diagram] From a description of the FAST search engine, by Knut Risvik

SLIDE 11 IS 240 – Spring 2006: Standard Web Search Engine Architecture
[Diagram: crawl the web → check for duplicates, store the documents → create an inverted index → inverted index → search engine servers; a user query goes to the search engine servers, which use DocIds to show results to the user]
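A minimal sketch of the "create an inverted index" step in this pipeline, mapping each term to the DocIds that contain it (toy tokenization, not any particular engine's implementation):

```python
from collections import defaultdict

def build_inverted_index(documents):
    """documents: dict of doc_id -> text. Returns term -> set of DocIds."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():      # toy tokenizer
            index[term].add(doc_id)
    return index

def boolean_and_query(index, terms):
    """DocIds containing every query term."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {1: "web search engines crawl the web",
        2: "an inverted index maps terms to documents"}
idx = build_inverted_index(docs)
print(boolean_and_query(idx, ["web", "crawl"]))   # {1}
```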

SLIDE 12 IS 240 – Spring 2006: More detailed architecture, from Brin & Page '98
[Diagram] Only covers the preprocessing in detail, not the query serving.

SLIDE 13 IS 240 – Spring 2006: Indexes for Web Search Engines
Inverted indexes are still used, even though the web is so huge
Most current web search systems partition the indexes across different machines
– Each machine handles a different part of the data (Google uses thousands of PC-class processors and keeps most things in main memory)
Other systems duplicate the data across many machines
– Queries are distributed among the machines
Most do a combination of these
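A minimal sketch of the combined partition-plus-replication idea, assuming a hash partitioner and a scatter-gather lookup over per-partition indexes; this is illustrative only, not Google's or FAST's actual scheme:

```python
from collections import defaultdict

NUM_PARTITIONS = 4   # each partition holds a slice of the document collection
REPLICAS = 2         # copies of each partition's index, to absorb query load

def partition_for(doc_id):
    """Assign each document to a partition by hashing its id."""
    return hash(doc_id) % NUM_PARTITIONS

def build_partitioned_indexes(documents):
    """documents: doc_id -> text. Builds one term -> DocIds index per partition,
    then replicates each partition's index REPLICAS times."""
    parts = [defaultdict(set) for _ in range(NUM_PARTITIONS)]
    for doc_id, text in documents.items():
        for term in text.lower().split():               # toy tokenizer
            parts[partition_for(doc_id)][term].add(doc_id)
    return [[dict(index) for _ in range(REPLICAS)] for index in parts]

def route_query(term, replicated_indexes, replica=0):
    """Scatter the query to one replica of every partition, then gather DocIds."""
    hits = set()
    for replicas in replicated_indexes:
        hits |= replicas[replica].get(term, set())
    return hits
```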

SLIDE 14 IS 240 – Spring 2006: Search Engine Querying
In this example, the data for the pages is partitioned across machines. Additionally, each partition is allocated multiple machines to handle the queries.
– Each row can handle 120 queries per second
– Each column can handle 7M pages
– To handle more queries, add another row
From a description of the FAST search engine, by Knut Risvik
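A tiny worked example of that capacity arithmetic, using the per-row and per-column figures quoted on the slide; the 5×10 layout is made up for illustration:

```python
QPS_PER_ROW = 120              # each row of machines handles 120 queries per second
PAGES_PER_COLUMN = 7_000_000   # each column of machines holds 7M pages

def cluster_capacity(rows, columns):
    """Total query throughput and page coverage for a rows x columns layout."""
    return rows * QPS_PER_ROW, columns * PAGES_PER_COLUMN

qps, pages = cluster_capacity(rows=5, columns=10)
print(qps, pages)   # 600 queries/sec over 70,000,000 pages
```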

SLIDE 15 IS 240 – Spring 2006: Querying: Cascading Allocation of CPUs
A variation on this that produces a cost savings:
– Put high-quality/common pages on many machines
– Put lower-quality/less common pages on fewer machines
– Query goes to high-quality machines first
– If no hits are found there, go to other machines
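A minimal sketch of the cascade, with toy in-memory "tiers" standing in for banks of machines (an illustration of the idea, not any engine's implementation):

```python
class Tier:
    """Toy tier: a term -> DocIds mapping standing in for a bank of machines."""
    def __init__(self, index):
        self.index = index
    def search(self, terms):
        hits = [self.index.get(t, set()) for t in terms]
        return set.intersection(*hits) if hits else set()

def cascaded_search(terms, high_quality_tier, fallback_tier):
    """Try the heavily replicated high-quality tier first; cascade only on a miss."""
    hits = high_quality_tier.search(terms)
    if hits:                                    # most queries stop here, saving CPU
        return hits
    return fallback_tier.search(terms)          # lower-quality / less common pages

popular = Tier({"toyota": {1, 2}, "camry": {2}})
obscure = Tier({"toyota": {9}, "fanzine": {9}})
print(cascaded_search(["toyota", "fanzine"], popular, obscure))   # {9}
```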

SLIDE 16 IS 240 – Spring 2006: Google
Google maintains (probably) the world's largest Linux cluster (over 15,000 servers)
These are partitioned between index servers and page servers
– Index servers resolve the queries (massively parallel processing)
– Page servers deliver the results of the queries
Over 8 billion web pages are indexed and served by Google

SLIDE 17 IS 240 – Spring 2006: Search Engine Indexes
Starting points for users include:
– Manually compiled lists (directories)
– Page "popularity": frequently visited pages (in general), and frequently visited pages as a result of a query
– Link "co-citation": which sites are linked to by other sites?

SLIDE 18 IS 240 – Spring 2006: Starting Points: What is Really Being Used?
Today's search engines combine these methods in various ways
– Integration of directories: today most web search engines integrate categories into the results listings (Lycos, MSN, Google)
– Link analysis: Google uses it; others are also using it. Words on the links seem to be especially useful
– Page popularity: many use DirectHit's popularity rankings

SLIDE 19 IS 240 – Spring 2006: Web Page Ranking
Varies by search engine
– Pretty messy in many cases
– Details usually proprietary and fluctuating
Combining subsets of:
– Term frequencies
– Term proximities
– Term position (title, top of page, etc.)
– Term characteristics (boldface, capitalized, etc.)
– Link analysis information
– Category information
– Popularity information
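A minimal sketch of combining such evidence as a weighted sum; the feature names and weights are illustrative assumptions, not any engine's actual (proprietary) formula:

```python
# Illustrative weights only; real engines tune many more signals.
WEIGHTS = {
    "term_frequency": 1.0,
    "term_proximity": 0.8,
    "in_title": 2.0,
    "link_score": 1.5,     # e.g. PageRank-style link analysis
    "popularity": 0.5,
}

def score_page(features):
    """features: feature name -> value for one (query, page) pair."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

page_a = {"term_frequency": 3, "in_title": 1, "link_score": 0.7}
page_b = {"term_frequency": 5, "popularity": 0.9}
ranked = sorted([("A", page_a), ("B", page_b)],
                key=lambda p: score_page(p[1]), reverse=True)
print([name for name, _ in ranked])   # ['A', 'B']
```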

SLIDE 20 IS 240 – Spring 2006: Ranking: Hearst '96
Proximity search can help get high-precision results if >1 term
– Combine Boolean and passage-level proximity
– Shows significant improvements when retrieving the top 5, 10, 20, or 30 documents
– Results reproduced by Mitra et al. '98
– Google uses something similar
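A minimal sketch of the passage-level proximity idea: find the smallest window of word positions that covers every query term, and score tighter windows higher (an illustration of the general technique, not Hearst's exact method):

```python
import itertools

def smallest_window(positions_per_term):
    """positions_per_term: one sorted list of word positions per query term.
    Returns the size of the smallest span containing one position of each term.
    Brute force over combinations; fine for the short lists used here."""
    best = None
    for combo in itertools.product(*positions_per_term):
        span = max(combo) - min(combo) + 1
        if best is None or span < best:
            best = span
    return best

def proximity_score(positions_per_term):
    """Tighter windows (all query terms close together) score higher."""
    if any(not positions for positions in positions_per_term):
        return 0.0                 # some query term is missing from the document
    return 1.0 / smallest_window(positions_per_term)

# "toyota" at positions [3, 40] and "camry" at [5, 90] -> best window is 3..5
print(proximity_score([[3, 40], [5, 90]]))   # 1/3
```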

SLIDE 21 IS 240 – Spring 2006: Ranking: Link Analysis
Assumptions:
– If the pages pointing to this page are good, then this is also a good page
– The words on the links pointing to this page are useful indicators of what this page is about
– References: Page et al. '98, Kleinberg '98

SLIDE 22 IS 240 – Spring 2006: Ranking: Link Analysis
Why does this work?
– The official Toyota site will be linked to by lots of other official (or high-quality) sites
– The best Toyota fan-club site probably also has many links pointing to it
– Less high-quality sites do not have as many high-quality sites linking to them

SLIDE 23 IS 240 – Spring 2006: Ranking: PageRank
Google uses PageRank
We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1; d is usually set to 0.85. C(A) is defined as the number of links going out of page A. The PageRank of page A is given as follows:
PR(A) = (1-d) + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))
Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one
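A minimal sketch of computing PageRank iteratively from the formula on the slide, with d = 0.85 and a made-up three-page link graph:

```python
def pagerank(links, d=0.85, iterations=50):
    """links: page -> list of pages it links out to. Returns page -> PageRank,
    using PR(A) = (1-d) + d * sum(PR(T)/C(T)) over the pages T that link to A."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    pr = {p: 1.0 for p in pages}                 # start every page at 1.0
    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            incoming = [(src, len(links[src]))
                        for src, targets in links.items() if page in targets]
            new_pr[page] = (1 - d) + d * sum(pr[src] / out for src, out in incoming)
        pr = new_pr
    return pr

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}   # made-up example graph
print(pagerank(graph))
```

As the next slide's note observes, this form of the formula yields values that sum to the number of pages rather than to one; dividing by the page count would recover the probability-distribution view.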

SLIDE 24 IS 240 – Spring 2006: PageRank
[Diagram: an example link graph of pages T1–T8 plus X1 and X2 pointing to page A, with sample PageRank values (e.g. T1 Pr=.725, most other pages Pr=1) attached to the nodes]
Note: these are not real PageRanks, since they include values >= 1

SLIDE 25 IS 240 – Spring 2006: PageRank
Similar to calculations used in scientific citation analysis (e.g., Garfield et al.) and social network analysis (e.g., Wasserman et al.)
Similar to other work on ranking (e.g., the hubs and authorities of Kleinberg et al.)
How is Amazon similar to Google in terms of the basic insights and techniques of PageRank?
How could PageRank be applied to other problems and domains?

SLIDE 26 IS 240 – Spring 2006: Today
Review
– Web Crawling and Search Issues
– Web Search Engines and Algorithms
Web Search Processing
– Parallel Architectures (Inktomi – Eric Brewer)
– Cheshire III Design
Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer

SLIDES 27–46 IS 240 – Spring 2006
[Image-only slides: no text was transcribed for this portion of the lecture (the Web Search Processing / parallel architecture material credited above)]

SLIDE 47 IS 240 – Spring 2006: Digital Library Grid Initiatives: Cheshire3 and the Grid
Ray R. Larson, University of California, Berkeley, School of Information Management and Systems
Rob Sanderson, University of Liverpool, Dept. of Computer Science
Thanks to Dr. Eric Yen and Prof. Michael Buckland for parts of this presentation
Presentation from DLF Forum, April 2005

SLIDE 48 IS 240 – Spring 2006: Overview
The Grid, Text Mining and Digital Libraries
– Grid Architecture
– Grid IR Issues
Cheshire3: Bringing Search to Grid-Based Digital Libraries
– Overview
– Grid Experiments
– Cheshire3 Architecture
– Distributed Workflows

SLIDE 49 IS 240 – Spring 2006: Grid Architecture (Dr. Eric Yen, Academia Sinica, Taiwan)
[Layered diagram, with Grid middleware spanning the layers:]
– Applications: Chemical Engineering, Climate, High Energy Physics, Cosmology, Astrophysics, Combustion, …
– Application Toolkits: Data Grid, Remote Computing, Remote Visualization, Collaboratories, Portals, Remote sensors, …
– Grid Services: Protocols, authentication, policy, instrumentation, resource management, discovery, events, etc.
– Grid Fabric: Storage, networks, computers, display devices, etc. and their associated local services

SLIDE 50 IS 240 – Spring 2006: Grid Architecture (ECAI/AS Grid Digital Library Workshop)
[Same layered diagram, with Grid middleware spanning the layers:]
– Applications: Chemical Engineering, Climate, High Energy Physics, Cosmology, Astrophysics, Combustion, Bio-Medical, Humanities computing, Digital Libraries, …
– Application Toolkits: Data Grid, Remote Computing, Remote Visualization, Collaboratories, Portals, Remote sensors, Text Mining, Metadata management, Search & Retrieval, …
– Grid Services: Protocols, authentication, policy, instrumentation, resource management, discovery, events, etc.
– Grid Fabric: Storage, networks, computers, display devices, etc. and their associated local services

SLIDE 51 IS 240 – Spring 2006: Grid-Based Digital Libraries
– Large-scale distributed storage requirements and technologies
– Organizing distributed digital collections
– Shared metadata – standards and requirements
– Managing distributed digital collections
– Security and access control
– Collection replication and backup
– Distributed information retrieval issues and algorithms

SLIDE 52 IS 240 – Spring 2006: Grid IR Issues
Want to preserve the same retrieval performance (precision/recall) while hopefully increasing efficiency (i.e., speed)
Very large-scale distribution of resources is a challenge for sub-second retrieval
Unlike most other typical Grid processes, IR is potentially less computing-intensive and more data-intensive
In many ways Grid IR replicates the process (and problems) of metasearch or distributed search

SLIDE 53 IS 240 – Spring 2006: Cheshire3 Overview
XML Information Retrieval Engine
– 3rd generation of the UC Berkeley Cheshire system, co-developed at the University of Liverpool
– Uses Python for flexibility and extensibility, but imports C/C++ based libraries for processing speed
– Standards based: XML, XSLT, CQL, SRW/U, Z39.50, OAI, to name a few
– Grid capable: uses distributed configuration files, workflow definitions and PVM (currently) to scale from one machine to thousands of parallel nodes
– Free and Open Source Software (GPL licence)
– (under development!)

SLIDE 54 IS 240 – Spring 2006: Cheshire3 Server Overview
[Architecture diagram: a Cheshire3 server exposing an API for indexing, search, scan, transformers/normalization, protocol handling, authentication and clustering; XML configuration and metadata, indexes, local databases, result sets, user and access information; an Apache interface and server control layer; remote systems reachable over any protocol (Z39.50, SOAP, OAI, SRW, UDDI, WSRP, OpenURL, JDBC, OGIS, native calls), with users and clients connecting over the network]

SLIDE 55 IS 240 – Spring 2006: Cheshire3 Grid Tests
Running on a 30-processor cluster in Liverpool using PVM (parallel virtual machine)
Using 16 processors with one "master" and 22 "slave" processes we were able to parse and index MARC data at about records per second
On a similar setup 610 Mb of TEI data can be parsed and indexed in seconds

SLIDE 56 IS 240 – Spring 2006: SRB and SDSC Experiments
We are working with SDSC to include SRB support
We are planning to continue working with SDSC and to run further evaluations using the TeraGrid server(s) through a "small" grant for CPU hours
– SDSC's TeraGrid cluster currently consists of 256 IBM cluster nodes, each with dual 1.5 GHz Intel® Itanium® 2 processors, for a peak performance of 3.1 teraflops. The nodes are equipped with four gigabytes (GB) of physical memory per node. The cluster is running SuSE Linux and is using Myricom's Myrinet cluster interconnect network.
Planned large-scale test collections include NSDL, the NARA repository, CiteSeer and the "million books" collections of the Internet Archive

SLIDE 57 IS 240 – Spring 2006: Cheshire3 Object Model
[Object model diagram showing the main classes and stores and the ingest flow between them: Server, Database, ConfigStore, UserStore, User, DocumentGroup, Document, PreParser, Parser, Record, RecordStore, DocumentStore, Extracter, Normaliser, Index, IndexStore, Terms, Transformer, Query, ResultSet, and the Protocol Handler]

SLIDE 58 IS 240 – Spring 2006: Cheshire3 Data Objects
DocumentGroup:
– A collection of Document objects (e.g. from a file, directory, or external search)
Document:
– A single item, in any format (e.g. PDF file, raw XML string, relational table)
Record:
– A single item, represented as parsed XML
Query:
– A search query, in the form of CQL (an abstract query language for Information Retrieval)
ResultSet:
– An ordered list of pointers to records
Index:
– An ordered list of terms extracted from Records

SLIDE 59 IS 240 – Spring 2006: Cheshire3 Process Objects
PreParser:
– Given a Document, transform it into another Document (e.g. PDF to Text, Text to XML)
Parser:
– Given a Document as a raw XML string, return a parsed Record for the item
Transformer:
– Given a Record, transform it into a Document (e.g. via XSLT, from XML to PDF, or XML to relational table)
Extracter:
– Extract terms of a given type from an XML sub-tree (e.g. extract Dates, Keywords, Exact string value)
Normaliser:
– Given the results of an Extracter, transform the terms, maintaining the data structure (e.g. CaseNormaliser)
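A rough Python sketch of how the data and process objects on the last two slides chain together during ingest; the class and method names here are illustrative stand-ins, not Cheshire3's actual API:

```python
# Hypothetical stand-ins for the Cheshire3 object roles, for illustration only.
import xml.etree.ElementTree as ET

class Parser:
    def process_document(self, raw_xml):
        """Document (raw XML string) -> Record (parsed XML)."""
        return ET.fromstring(raw_xml)

class Extracter:
    def process_record(self, record):
        """Extract terms of a given type from the parsed Record."""
        return [el.text for el in record.iter("keyword") if el.text]

class CaseNormaliser:
    def process_terms(self, terms):
        """Transform the extracted terms while keeping the data structure."""
        return [t.lower() for t in terms]

# Ingest pipeline: Document -> Parser -> Record -> Extracter -> Normaliser -> Index
doc = "<item><keyword>Grid</keyword><keyword>Retrieval</keyword></item>"
record = Parser().process_document(doc)
terms = CaseNormaliser().process_terms(Extracter().process_record(record))
print(terms)   # ['grid', 'retrieval']
```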

SLIDE 60 IS 240 – Spring 2006: Cheshire3 Abstract Objects
Server:
– A logical collection of databases
Database:
– A logical collection of Documents, their Record representations and Indexes of extracted terms
Workflow:
– A 'meta-process' object that takes a workflow definition in XML and converts it into executable code

SLIDE 61 IS 240 – Spring 2006: Workflow Objects
Workflows are first-class objects in Cheshire3 (though not represented in the model diagram)
All Process and Abstract objects have individual XML configurations, with a common base schema plus extensions
We can treat configurations as Records and store them in regular RecordStores, allowing access via regular IR protocols

SLIDE 62 IS 240 – Spring 2006: Workflow References
Workflows contain a series of instructions to perform, with reference to other Cheshire3 objects
Reference is via pseudo-unique identifiers… pseudo because they are unique within the current context (Server vs Database)
Workflows are objects, so this enables server-level workflows to call database-specific workflows with the same identifier

SLIDE 63 IS 240 – Spring 2006: Distributed Processing
Each node in the cluster instantiates the configured architecture, potentially through a single ConfigStore
Master nodes then run a high-level workflow to distribute the processing amongst Slave nodes by reference to a subsidiary workflow
As object interaction is well defined in the model, the result of a workflow is equally well defined. This allows for the easy chaining of workflows, either locally or spread throughout the cluster.
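A conceptual sketch of this master/slave pattern using Python's multiprocessing pool in place of the PVM layer Cheshire3 actually used; the per-document work function is a made-up stand-in for the subsidiary workflow:

```python
from multiprocessing import Pool

def slave_workflow(document):
    """Stand-in for the subsidiary workflow a slave node runs on one document."""
    return len(document.split())          # e.g. parse, extract and count terms

def master_workflow(document_group, workers=4):
    """Master-level workflow: fan the DocumentGroup out to slave processes,
    then gather the per-document results for the next workflow stage."""
    with Pool(processes=workers) as pool:
        return pool.map(slave_workflow, document_group)

if __name__ == "__main__":
    docs = ["grid based digital libraries", "distributed information retrieval"]
    print(master_workflow(docs))          # [4, 3]
```

Because each stage consumes and produces well-defined object types, the gathered results can be fed straight into another workflow, locally or on other nodes.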

SLIDE 64 IS 240 – Spring 2006: Workflow Example 1
[XML workflow definition (workflow.SimpleWorkflow) for a simple "Starting Load" ingest workflow; the XML listing was not preserved in this transcript]

SLIDE 65 IS 240 – Spring 2006: Workflow Example 2
[XML workflow definition (workflow.SimpleWorkflow) handling the "Unparsable Record" and "Loaded Record" cases; the XML listing was not preserved in this transcript]

SLIDE 66 IS 240 – Spring 2006: Workflow Standards
Cheshire3 workflows do not conform to any standard schema
This is intentional:
– Workflows are specific to, and dependent on, the Cheshire3 architecture
– They replace the distribution of lines of code for distributed processing
– They replace many lines of code in general
Needs to be easy to understand and create
– GUI workflow builder coming (web and standalone)

SLIDE 67 IS 240 – Spring 2006: External Integration
Looking at integration with existing cross-service workflow systems, in particular Kepler/Ptolemy
Possible integration at two levels:
– Cheshire3 as a service (black box)… identify a workflow to call
– Cheshire3 object as a service (duplicates existing workflow function)… but recall the access speed issue

SLIDE 68 IS 240 – Spring 2006: Conclusions
Scalable Grid-based digital library services can be created and provide support for very large collections with improved efficiency
The Cheshire3 IR and DL architecture can provide Grid (or single-processor) services for next-generation DLs
Available as open source via: or