
1 Dynamic Data Replication in LCG 2008 Caitriana Nicholson, University of Glasgow

2 Outline Introduction – Grid Replica Optimisation – The OptorSim grid simulator – OptorSim architecture – Experimental setup – Results – Conclusions

3 Introduction Large Hadron Collider (LHC) at CERN will have raw data rate of ~15 PB/year LHC Computing Grid (LCG) for data storage and computing infrastructure 2008 will be first full year of LHC running Actual analysis behaviour still unknown → use simulation to investigate behaviour → investigate dynamic data replication

4 Grid Replica Optimisation Many variables determine overall grid performance –Impossible to reach one optimal solution! Possible to optimise variables which are part of grid middleware –Job scheduling, data management etc This talk considers data management only… …and dynamic replica optimisation in particular

5 Dynamic Replica Optimisation = optimisation of the placement of file replicas on grid sites… …in a dynamic, automated fashion

6 Design of a Replica Optimisation Service Centralised, hierarchical or distributed? Pull or push? Choosing a replication trigger –On file request? –On file popularity? Aim to achieve global optimisation as a result of local optimisation

7 OptorSim OptorSim is a grid simulator with a focus on data management Developed as part of European DataGrid Work Package 2 Based on EDG architecture Used to examine automated decisions about replica placement and deletion http://edg-wp2.web.cern.ch/edg-wp2/optimization/optorsim.html

8 Architecture Sites with Computing Element (CE) and/or Storage Element (SE) Replica Optimiser decides replications for its site Resource Broker schedules jobs Replica Catalogue maps logical to physical filenames Replica Manager controls and registers replications

9 Algorithms Job scheduling –Details not covered in this talk –“QueueAccessCost” scheduler used in these results Data replication –No replication –Simple replication: “always replicate, delete existing files if necessary” Least Recently Used (LRU) Least Frequently Used (LFU) –Economic model: “replicate only if profitable” Sites “buy” and “sell” files using auction mechanism Files deleted if less valuable than new file
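As an illustration of the simpler strategies, here is a minimal Python sketch (OptorSim itself is a Java simulator; the class and method names below are invented for illustration) of a Storage Element that always replicates and evicts by LRU or LFU when full:

```python
import time
from collections import defaultdict

class SimpleReplicaOptimiser:
    """Always replicate; evict by LRU or LFU when the SE is full (sketch)."""

    def __init__(self, se_capacity_gb, policy="LRU"):
        self.capacity = se_capacity_gb
        self.used = 0.0
        self.sizes = {}                       # file -> size in GB
        self.last_access = {}                 # file -> last access time (LRU)
        self.access_count = defaultdict(int)  # file -> access count (LFU)
        self.policy = policy

    def access(self, filename, size_gb=2.0):
        if filename not in self.sizes:
            # "Always replicate": make room, then store the new replica.
            while self.used + size_gb > self.capacity and self.sizes:
                self._evict()
            self.sizes[filename] = size_gb
            self.used += size_gb
        self.last_access[filename] = time.time()
        self.access_count[filename] += 1

    def _evict(self):
        if self.policy == "LRU":
            victim = min(self.sizes, key=lambda f: self.last_access[f])
        else:  # LFU
            victim = min(self.sizes, key=lambda f: self.access_count[f])
        self.used -= self.sizes.pop(victim)
```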

10 Experimental Setup - Jobs & Files Job types based on computing models “Dataset” for each experiment ~1 year’s AOD (analysis data) 2 GB files Placed at CERN and Tier-1s at start

Job          Event Size (kB)   Total no. of files   Files per job
alice-pp     50                25000                25
alice-hi     250               12500                125
atlas        100               100000               50
cms          50                37500                25
lhcb-small   75                37500                38
lhcb-big     75                37500                375

11 Experimental Setup - Storage Resources CERN & Tier 1 site capacities from LCG Technical Design Report “Canonical” Tier 2 capacity of 197 TB each (18.8 PB / 95 sites) Define storage metric D = (average SE size) / (total dataset size) Memory limitations -> scale down Tier 2 SE sizes to 500 GB –Allows file deletion to start quickly –Disadvantage of small D
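A small sketch of the storage metric, with made-up numbers to show why scaling the Tier-2 SEs down to 500 GB gives a small D:

```python
def storage_metric(se_sizes_tb, dataset_size_tb):
    """D = (average SE size) / (total dataset size)."""
    return (sum(se_sizes_tb) / len(se_sizes_tb)) / dataset_size_tb

# Hypothetical illustration: 95 Tier-2 SEs scaled down to 0.5 TB each,
# against an assumed 100 TB dataset, gives a very small D, so file
# deletion starts almost immediately.
print(storage_metric([0.5] * 95, 100.0))   # 0.005
```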

12 Experimental Setup - Computing & Network Most (chaotic) analysis jobs run at Tier 2s –Tier 1s not given CE, except those running LHCb jobs –CERN Analysis Facility with CE of 7840 kSI2k –Tier 2s with averaged CE of 645 kSI2k each (61.3 MSI2k / 95 sites) Network based on NREN topologies –Sites connected to closest router –Default of 155 Mbps if published value not available

13 Network Topology

14 Parameters Job scheduler “QueueAccessCost” –Combines data location and queue information Sequential access pattern 1000 jobs per simulation Site policies set according to LCG Memorandum of Understanding
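A rough sketch of what a “QueueAccessCost”-style score might look like: the cost of getting the new job’s input files to a candidate site, plus the same cost summed over jobs already queued there. The access_cost helper is an assumed function, not part of OptorSim’s actual API:

```python
def queue_access_cost(site, job_files, queued_jobs_files, access_cost):
    """Score a candidate site for a new job: estimated cost of fetching the
    job's files to this site, plus the cost of the files needed by jobs
    already queued there.  Lower is better."""
    new_job_cost = sum(access_cost(site, f) for f in job_files)
    queue_cost = sum(access_cost(site, f)
                     for queued in queued_jobs_files for f in queued)
    return new_job_cost + queue_cost

# The Resource Broker would then pick the cheapest site, e.g.
# best = min(sites, key=lambda s: queue_access_cost(s, files, queues[s], cost_fn))
```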

15 Evaluation Metrics Different grid users will have different criteria of evaluation The metrics used in these summary results are: –Mean job time: average time taken for a job to run, from scheduling to completion –Effective Network Usage (ENU) = (file requests which use network resources) / (total number of file requests)
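A one-line sketch of the ENU definition above; splitting the numerator into remote reads plus replications is an assumption about how “requests which use network resources” are counted:

```python
def effective_network_usage(remote_reads, replications, total_requests):
    """ENU = (file requests that use network resources) / (total file requests).
    Lower ENU means more requests were satisfied from local replicas."""
    return (remote_reads + replications) / total_requests
```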

16 Results: Data Replication Performance of algorithms measured with varying D D varied by reducing dataset size 20-25% gain in mean job time as D approaches realistic value

17 Results: Data Replication ENU shows similar gain Allows clearer distinction between strategies

18 Results: Data Replication Number of jobs increased to 4000 Mean job time increases linearly Relative improvement as D increases will hold for higher numbers of jobs Realistic number of jobs is >O(10000)

19 Results: Site Policies Vary site policies: –All Job Types: sites accept jobs from any VO –One Job Type: sites accept jobs from one VO –Mixed (the default) All Job Types is ~60% faster than One Job Type

20 Results: Site Policies All Job Types also gives ~25% lower ENU than other policies Egalitarian approach benefits all grid users

21 Results: Access Patterns Sequential access likely for many physics applications Zipf-like access will also occur –Some files accessed frequently, many infrequently Replication gives performance gain of ~75% when Zipf access pattern used
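For illustration, a small sketch of generating a Zipf-like access sequence, where a few files are requested very often and most only rarely (the exponent is an assumed parameter, not taken from the talk):

```python
import numpy as np

def zipf_access_sequence(n_files, n_accesses, alpha=1.0, seed=0):
    """Sample file indices with Zipf-like popularity: the probability of
    requesting the file of rank r is proportional to r**(-alpha)."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n_files + 1, dtype=float)
    probs = ranks ** -alpha
    probs /= probs.sum()
    return rng.choice(n_files, size=n_accesses, p=probs)
```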

22 Results: Access Patterns ENU also ~75% lower with Zipf access Any Zipf-like element makes replication highly desirable Size of efficiency gain depends on streaming model, etc

23 Conclusions OptorSim used to simulate LCG in 2008 Dynamic data replication reduces running time of simulated grid jobs: –20% reduction with sequential access –75% reduction with Zipf-like access –Similar reductions in network usage Little difference between replication strategies –Simpler LRU and LFU are 20-30% faster than the economic model Site policy which allows all experiments to share resources gives most effective grid use

24 The End

25 Backup Slides

26 Replica optimiser architecture Access Mediator (AM) - contacts replica optimisers to locate the cheapest copies of files and makes them available locally Storage Broker (SB) - manages files stored in SE, trying to maximise profit for the finite amount of storage space available P2P Mediator (P2PM) - establishes and maintains P2P communication between grid sites
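A deliberately simplified sketch of the Storage Broker’s economic decision: replicate only if the incoming file is judged more valuable than the least valuable file it would displace. In the real economic model the value comes from an auction and a prediction of future accesses; here it is just a number passed in:

```python
def should_replicate(new_file_value, stored_file_values):
    """Economic-model decision (sketch): accept the new replica only if it
    is worth more than the cheapest file currently stored, which would then
    be deleted to make room for it."""
    if not stored_file_values:
        return True
    return new_file_value > min(stored_file_values)
```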

