1 Wide Area Data Replication for Scientific Collaborations
Ann Chervenak, Robert Schuler, Carl Kesselman (USC Information Sciences Institute)
Scott Koranda (Univa Corporation)
Brian Moe (University of Wisconsin-Milwaukee)

2 Motivation
- Scientific application domains spend considerable effort managing large amounts of experimental and simulation data
- Have developed customized, higher-level Grid data management services
- Examples:
  - Laser Interferometer Gravitational Wave Observatory (LIGO) Lightweight Data Replicator system
  - High Energy Physics projects: EGEE system, gLite, LHC Computing Grid (LCG) middleware
  - Portal-based coordination of services (e.g., Earth System Grid)

3 Motivation (cont.)
- Data management functionality varies by application
- Applications share several requirements:
  - Publish and replicate large datasets (millions of files)
  - Register data replicas in catalogs and discover them
  - Perform metadata-based discovery of datasets
  - May require ability to validate correctness of replicas
  - In general, data updates and replica consistency services not required (i.e., read-only accesses)
- These systems provide production data management services to individual scientific domains
- Each project spends considerable resources to design, implement & maintain its data management system
  - Typically cannot be re-used by other applications

4 Motivation (cont.)
- Long-term goals:
  - Generalize functionality provided by these data management systems
  - Provide a suite of application-independent services
- Paper describes one higher-level data management service: the Data Replication Service (DRS)
- DRS functionality is based on the publication capability of the LIGO Lightweight Data Replicator (LDR) system
- Ensures that a set of files exists on a storage site
  - Replicates files as needed, registers them in catalogs
- DRS builds on lower-level Grid services, including:
  - Globus Reliable File Transfer (RFT) service
  - Replica Location Service (RLS)

5 Outline
- Description of LDR data publication capability
- Generalization of this functionality
  - Define characteristics of an application-independent Data Replication Service (DRS)
- DRS design
- DRS implementation in the GT4 environment
- Evaluation of DRS performance in a wide area Grid
- Related work
- Future work

6 A Data-Intensive Application Example: The LIGO Project
- Laser Interferometer Gravitational Wave Observatory (LIGO) collaboration
- Seeks to measure gravitational waves predicted by Einstein
- Collects experimental datasets at two LIGO instrument sites in Louisiana and Washington State
- Datasets are replicated at other LIGO sites
- Scientists analyze the data and publish their results, which may be replicated
- Currently LIGO stores more than 40 million files across ten locations

7 The Lightweight Data Replicator
- LIGO scientists developed the Lightweight Data Replicator (LDR) system for data management
- Built on top of standard Grid data services:
  - Globus Replica Location Service
  - GridFTP data transport protocol
- LDR provides a rich set of data management functionality, including:
  - a pull-based model for replicating necessary files to a LIGO site
  - efficient data transfer among LIGO sites
  - a distributed metadata service architecture
  - an interface to local storage systems
  - a validation component that verifies that files on a storage system are correctly registered in a local RLS catalog

8 LIGO Data Publication and Replication
Two types of data publishing:
1. Detectors at Livingston and Hanford produce data sets
   - Approx. a terabyte per day during LIGO experimental runs
   - Each detector produces a file every 16 seconds
   - Files range in size from 1 to 100 megabytes
   - Data sets are copied to the main repository at CalTech, which stores them in a tape-based mass storage system
   - LIGO sites can acquire copies from CalTech or one another
2. Scientists also publish new or derived data sets as they perform analysis on existing data sets
   - E.g., data filtering or calibration may create new files
   - These new files may also be replicated at LIGO sites

9 Some Terminology
- A logical file name (LFN) is a unique identifier for the contents of a file
  - Typically, a scientific collaboration defines and manages the logical namespace
  - Guarantees uniqueness of logical names within that organization
- A physical file name (PFN) is the location of a copy of the file on a storage system
  - The physical namespace is managed by the file system or storage system
- The LIGO environment currently contains:
  - More than six million unique logical files
  - More than 40 million physical files stored at ten sites
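
To make the LFN/PFN distinction concrete, here is a tiny illustrative sketch in Python; the file and site names are invented for the example and are not taken from LIGO's actual namespace.

```python
# Toy picture of the LFN-to-PFN relationship: one logical name, several physical copies.
# All names below are invented for illustration.
replica_map = {
    "example-frame-0001.gwf": [
        "gsiftp://storage.site-a.example.org/data/example-frame-0001.gwf",
        "gsiftp://storage.site-b.example.org/ligo/example-frame-0001.gwf",
    ],
}

def physical_copies(lfn):
    """Return all known physical locations (PFNs) for a logical file name."""
    return replica_map.get(lfn, [])
```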

10 Components at Each LDR Site
- Local storage system
- GridFTP server for file transfer
- Metadata Catalog: associations between logical file names and metadata attributes
- Replica Location Service:
  - Local Replica Catalog (LRC) stores mappings from logical names to storage locations
  - Replica Location Index (RLI) collects state summaries from LRCs
- Scheduler and transfer daemons
- Prioritized queue of requested files

11 LDR Data Publishing
- A scheduling daemon runs at each LDR site
  - Queries the site's metadata catalog to identify logical files with specified metadata attributes
  - Checks the RLS Local Replica Catalog to determine whether copies of those files already exist locally
  - If not, puts the logical file names on a priority-based scheduling queue
- A transfer daemon also runs at each site
  - Checks the queue and initiates data transfers in priority order
  - Queries the RLS Replica Location Index to find sites where the desired files exist
  - Randomly selects a source file from among the available replicas
  - Uses the GridFTP transport protocol to transfer the file to the local site
  - Registers the newly-copied file in the RLS Local Replica Catalog
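
A minimal sketch of the two daemons' logic in Python, following the steps above; the callables passed in (metadata query, local-catalog check, replica lookup, GridFTP copy, catalog registration) are hypothetical stand-ins, not real LDR or Globus client calls.

```python
import heapq
import random

def scheduling_pass(queue, attribute_filter, query_metadata, have_locally):
    """Scheduling daemon: queue logical files that match the filter but are not yet local.

    `query_metadata` and `have_locally` are hypothetical stand-ins for the site's
    metadata catalog and RLS Local Replica Catalog clients.
    """
    for priority, lfn in query_metadata(attribute_filter):
        if not have_locally(lfn):                   # check the Local Replica Catalog
            heapq.heappush(queue, (priority, lfn))  # lower number = higher priority

def transfer_pass(queue, find_replicas, gridftp_copy, register_replica):
    """Transfer daemon: copy queued files in priority order and register the new copies.

    The three callables are hypothetical stand-ins for the RLS Replica Location
    Index lookup, a GridFTP transfer client, and Local Replica Catalog registration.
    """
    while queue:
        _, lfn = heapq.heappop(queue)
        sources = find_replicas(lfn)          # where does this file exist in the Grid?
        if not sources:
            continue
        source_pfn = random.choice(sources)   # random selection among available replicas
        local_pfn = gridftp_copy(source_pfn)  # transfer the file to the local site
        register_replica(lfn, local_pfn)      # record the new copy in the local catalog
```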

12 Generalizing the LDR Publication Scheme
- Want to provide a similar capability that is:
  - Independent of LIGO infrastructure
  - Useful for a variety of application domains
- Capabilities include:
  - Interface to specify which files are required at the local site
  - Use of Globus RLS to discover whether replicas exist locally and where they exist in the Grid
  - Use of a selection algorithm to choose among available replicas
  - Use of the Globus Reliable File Transfer service and GridFTP data transport protocol to copy data to the local site
  - Use of Globus RLS to register new replicas

13 Relationship to Other Globus Services
At the requesting site, deploy:
- WS-RF services:
  - Data Replication Service
  - Delegation Service
  - Reliable File Transfer Service
- Pre-WS-RF components:
  - Replica Location Service (Local Replica Catalog, Replica Location Index)
  - GridFTP Server

14 DRS Functionality
DRS is implemented in Globus Toolkit Version 4 and complies with the Web Services Resource Framework (WS-RF). A request proceeds through the following steps (see the sketch below):
- Initiate a DRS request
- Create a delegated credential
- Create a Replicator resource
- Monitor the Replicator resource
- Discover replicas of the desired files in RLS, select among replicas
- Transfer data to the local site with the Reliable File Transfer Service
- Register the new replicas in RLS catalogs
- Allow client inspection of DRS results
- Destroy the Replicator resource
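
The step list above maps naturally onto a client-side driver loop. The sketch below is a rough, hypothetical rendering in Python: the `drs` object and its methods are invented for illustration and are not the actual GT4 client API.

```python
import time

def replicate(drs, request_items, lifetime_minutes=60, poll_seconds=10):
    """Drive one DRS request end to end, following the step list above.

    `drs` is a hypothetical client object; its methods are invented for this
    sketch and do not correspond to the real GT4 client interfaces.
    """
    cred_epr = drs.delegate_credential(lifetime_minutes=lifetime_minutes)

    # Create the Replicator resource, handing it the delegated credential EPR.
    replicator_epr = drs.create_replicator(request_items, cred_epr,
                                           lifetime_minutes=lifetime_minutes)

    # Monitor the resource by polling its "Status" resource property.
    while drs.get_property(replicator_epr, "Status") != "Finished":
        time.sleep(poll_seconds)

    results = drs.get_results(replicator_epr)  # per-file replication outcomes
    drs.destroy(replicator_epr)                # or let the termination time expire
    return results
```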

15 WSRF in a Nutshell
- Service
- State Management:
  - Resource
  - Resource Property
- State Identification:
  - Endpoint Reference (EPR)
- State Interfaces:
  - GetRP, QueryRPs, GetMultipleRPs, SetRP
- Lifetime Interfaces:
  - SetTerminationTime
  - ImmediateDestruction
- Notification Interfaces:
  - Subscribe
  - Notify
- ServiceGroups
(Diagram: a service exposing GetRP, GetMultRPs, SetRP, QueryRPs, Subscribe, SetTermTime, and Destroy operations on a resource with resource properties, addressed by an EPR)
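
As a rough mental model of these WS-RF terms (and only that), the toy Python class below pairs a stateless service with stateful resources addressed by an endpoint reference and exposes get/set, lifetime, and destruction operations; it does not reflect the actual WSRF wire protocol or the GT4 implementation.

```python
import time
import uuid

class ResourceHome:
    """Toy analogue of a WS-RF service: stateless operations over stateful resources."""

    def __init__(self):
        self._resources = {}                       # EPR -> dict of resource properties

    def create(self, **resource_properties):
        epr = str(uuid.uuid4())                    # stand-in for an endpoint reference
        resource_properties.setdefault("TerminationTime", None)
        self._resources[epr] = resource_properties
        return epr

    def get_rp(self, epr, name):                   # GetResourceProperty
        return self._resources[epr][name]

    def set_rp(self, epr, name, value):            # SetResourceProperty
        self._resources[epr][name] = value

    def set_termination_time(self, epr, seconds_from_now):
        self._resources[epr]["TerminationTime"] = time.time() + seconds_from_now

    def destroy(self, epr):                        # ImmediateDestruction
        del self._resources[epr]
```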

16 Create Delegated Credential
(Diagram: the client interacting with a service container hosting the Delegation, Data Replication, and RFT services, along with RLS Replica Index and Replica Catalogs, GridFTP servers, and MDS)
- Client initializes a user proxy certificate
- Client creates a delegated credential resource via the Delegation service and sets its termination time
- The credential resource's endpoint reference (EPR) is returned to the client

17 Create Replicator Resource
- Client creates a Replicator resource via the Data Replication Service, passing the delegated credential EPR and setting a termination time
- The Replicator EPR is returned to the client
- The Replicator accesses the delegated credential resource

18 Monitor Replicator Resource
- Client periodically polls the Replicator's resource properties via GetRP or GetMultRP
- The Replicator resource is added to the MDS Information service Index
- Subscription to ResourceProperty changes for the "Status" RP and the "Stage" RP
- Conditions may trigger alerts or other actions (Trigger service not pictured)
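
Both monitoring styles on this slide, notification and polling, can be pictured with a small sketch; the `index_client` object and its methods are hypothetical stand-ins for the MDS Index and WS-Notification machinery, not real GT4 calls.

```python
def monitor(replicator_epr, index_client):
    """Sketch of the two monitoring styles described above.

    `index_client` and its subscribe/get_property methods are hypothetical
    stand-ins; they are not part of the actual GT4 client API.
    """
    # Notification style: register interest in changes to the "Status" and "Stage" RPs;
    # the callback could raise an alert or trigger another action when a stage completes.
    index_client.subscribe(replicator_epr, "Status",
                           callback=lambda value: print("Status ->", value))
    index_client.subscribe(replicator_epr, "Stage",
                           callback=lambda value: print("Stage ->", value))

    # Polling style: read a resource property directly whenever a snapshot is needed.
    return index_client.get_property(replicator_epr, "Status")
```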

19 Query Replica Information
- Notification that the "Stage" RP value has changed to "discover"
- The Replicator queries the RLS Replica Index to find catalogs that contain the desired replica information
- The Replicator queries the RLS Replica Catalog(s) to retrieve mappings from logical name to target name (URL)
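
The discovery stage is a two-level lookup: the Replica Location Index names the catalogs that know about a logical file, and each Local Replica Catalog returns the concrete URLs. A hedged sketch, with both lookup callables as hypothetical stand-ins for the RLS client interface:

```python
def discover_replicas(lfn, rli_lookup_catalogs, lrc_lookup_pfns):
    """Two-level RLS lookup for one logical file name.

    Both callables are hypothetical: the first asks the Replica Location Index
    which catalogs know the LFN, the second asks a Local Replica Catalog for
    its LFN -> URL mappings.
    """
    pfns = []
    for catalog in rli_lookup_catalogs(lfn):        # step 1: find the relevant catalogs
        pfns.extend(lrc_lookup_pfns(catalog, lfn))  # step 2: collect the physical URLs
    return pfns
```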

20 Transfer Data
- Notification that the "Stage" RP value has changed to "transfer"
- The Replicator creates a Transfer resource via RFT, passes the credential EPR, and sets a termination time
- The Transfer resource EPR is returned
- RFT accesses the delegated credential resource
- RFT sets up the GridFTP server transfer of the file(s)
- Data is transferred between the GridFTP server sites
- The Replicator periodically polls the "ResultStatus" RP via GetRP
- When "Done", it gets state information for each file transfer
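
A rough sketch of the transfer stage as described above; the `rft` object and its methods are invented for illustration and do not correspond to the actual GT4 RFT client API.

```python
import time

def run_transfer(rft, url_pairs, credential_epr, lifetime_minutes=60, poll_seconds=10):
    """Hand (source URL, destination URL) pairs to RFT and wait for completion.

    `rft` is a hypothetical client object; its methods are placeholders for
    creating a Transfer resource, reading resource properties, and fetching
    per-file transfer state.
    """
    transfer_epr = rft.create_transfer(url_pairs, credential_epr,
                                       lifetime_minutes=lifetime_minutes)

    # Periodically poll the "ResultStatus" resource property, as on the slide.
    while rft.get_property(transfer_epr, "ResultStatus") != "Done":
        time.sleep(poll_seconds)

    # When done, retrieve state information for each file transfer.
    return rft.get_transfer_states(transfer_epr)
```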

21 Register Replica Information
- Notification that the "Stage" RP value has changed to "register"
- The Replicator registers the new file mappings in the RLS Replica Catalog
- The RLS Replica Catalog sends an update of the new replica mappings to the Replica Index

22 Client Inspection of State
- Notification that the "Status" RP value has changed to "Finished"
- The client inspects the Replicator's state information for each replication in the request

23 Resource Termination
- The termination time (set by the client) eventually expires
- The resources (Credential, Transfer, Replicator) are destroyed

24 Performance Measurements: Wide Area Testing
- The destination for the pull-based transfers is located in Los Angeles
  - Dual-processor, 1.1 GHz Pentium III workstation with 1.5 GBytes of memory and 1 Gbit Ethernet
  - Runs a GT4 container and deploys services including RFT and DRS, as well as GridFTP and RLS
- The remote site where the desired data files are stored is located at Argonne National Laboratory in Illinois
  - Dual-processor, 3 GHz Intel Xeon workstation with 2 gigabytes of memory and 1.1 terabytes of disk
  - Runs a GT4 container as well as GridFTP and RLS services

25 DRS Operations Measured
- Create the DRS Replicator resource
- Discover source files for replication using the local RLS Replica Location Index and remote RLS Local Replica Catalogs
- Initiate a Reliable File Transfer operation by creating an RFT resource
- Perform the RFT data transfer(s)
- Register the new replicas in the RLS Local Replica Catalog

26 Experiment 1: Replicate 10 Files of Size 1 Gigabyte

  Component of Operation        Time (milliseconds)
  Create Replicator Resource          317.0
  Discover Files in RLS               449.0
  Create RFT Resource                 808.6
  Transfer Using RFT              1186796.0
  Register Replicas in RLS           3720.8

- Data transfer time dominates
- Wide area data transfer rate of 67.4 Mbits/sec
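
The quoted rate follows directly from the table; a quick check, assuming decimal units (1 gigabyte = 10^9 bytes, 1 Mbit = 10^6 bits):

```python
# Back-of-the-envelope check of the quoted wide area transfer rate.
bytes_moved = 10 * 1e9                 # 10 files of 1 gigabyte each
seconds = 1186796.0 / 1000.0           # "Transfer Using RFT" row, converted to seconds
rate_mbits_per_s = bytes_moved * 8 / 1e6 / seconds
print(round(rate_mbits_per_s, 1))      # -> 67.4, matching the slide
```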

27 Experiment 2: Replicate 1000 Files of Size 10 Megabytes

  Component of Operation        Time (milliseconds)
  Create Replicator Resource         1561.0
  Discover Files in RLS                 9.8
  Create RFT Resource                1286.6
  Transfer Using RFT               963456.0
  Register Replicas in RLS          11278.2

- Time to create the Replicator and RFT resources is larger
  - Need to store state for 1000 outstanding transfers
- Data transfer time still dominates
- Wide area data transfer rate of 85 Mbits/sec

28 Future Work
- We will continue performance testing of DRS:
  - Increasing the size of the files being transferred
  - Increasing the number of files per DRS request
- Add and refine DRS functionality as it is used by applications
  - E.g., add a push-based replication capability
- We plan to develop a suite of general, configurable, composable, high-level data management services

