
1 DRM for Tier1 and Tier2 centers. Michael Ernst, Fermilab. February 3, 2003

2 Production Data Flow Today … [Diagram: MOP Master, Tier1 at FNAL (dCache/Enstore, catalog), Tier2 head/storage node (catalog), and Tier2 worker nodes (local disk, catalog).]

3 Typical Tier2 Today … [Diagram: a head/storage node in front of the worker nodes; wide-area transfers via GridFTP and bbcp.]

4 Tier2 w/dCache [Diagram: the Tier2 runs dCache with an admin node; external access via GridFTP.]

5 Tier2 w/dCache [Diagram: sites linked by GridFTP: FNAL (dCache/Enstore, catalog), CERN (CASTOR), and dCache installations with catalogs at UCSD, Florida, ….]

6 dCache Placement [Diagram: applications (e.g. dccp) reach dCache through the dCap library, xxxFTP, and Grid access method(s); dCache sits on local disk with the PNFS namespace manager and is backed by tertiary storage systems (Enstore, OSM, HSM X).]

7 Distributed Pool Architecture [Diagram: a cache hierarchy from tertiary storage (Enstore, OSM, …) through a central cache, super cluster caches (experiment), cluster caches (working group), host caches, and topic caches; data placement is driven by externally enforced attraction and destination-determined attraction, sketched below.]
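To make the attraction idea concrete, here is a minimal conceptual sketch; this is not dCache's actual selection logic, and the pool structure, group names, and load field are all hypothetical:

    /* Conceptual sketch of attraction-based pool selection.
     * NOT dCache's implementation; all names and fields are invented. */
    #include <stdio.h>
    #include <string.h>

    struct pool {
        const char *name;
        const char *group;  /* working group or experiment "topic" */
        double      load;   /* current load, 0.0 (idle) .. 1.0 (saturated) */
    };

    /* Prefer pools whose group matches the request ("attraction"),
     * then break ties by the lowest load ("load balancing"). */
    const struct pool *select_pool(const struct pool *pools, int n,
                                   const char *group)
    {
        const struct pool *best = NULL;
        for (int i = 0; i < n; i++) {
            int match = (strcmp(pools[i].group, group) == 0);
            int best_match = best && (strcmp(best->group, group) == 0);
            if (!best
                || (match && !best_match)
                || (match == best_match && pools[i].load < best->load))
                best = &pools[i];
        }
        return best;
    }

    int main(void)
    {
        struct pool pools[] = {
            { "pool-a", "cms-jets",   0.70 },
            { "pool-b", "cms-jets",   0.20 },
            { "pool-c", "d0-general", 0.05 },
        };
        const struct pool *p = select_pool(pools, 3, "cms-jets");
        printf("selected: %s\n", p->name); /* pool-b: matching group, lower load */
        return 0;
    }

The point is only the two-step preference: group affinity first (attraction), then load (balancing).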

8 What is dCache? dCache is/provides:
- A compact system making access to HSM systems more efficient
- A generic tool for caching, storing and easily accessing petabyte-scale datasets distributed among a large set of heterogeneous caching nodes
- The ability to run with or without an HSM backend (it can be used as a scalable file store)
- A very flexible HSM backend:
  - supports multiple instances of the same HSM, or different HSMs, within the same dCache instance
  - rate adaptation (tape ↔ disk)
  - deferred write: aggregates small files up to a threshold in time or space (see the sketch after this list)
  - staging
  - read ahead
  - dCache allows performance requirements on HSM storage components (e.g. tape drives and robots) to be relaxed without an overall performance penalty
- Attraction schemes used to optimize data placement
- Load balancing
- Automated replication (to prevent hot spots)
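A minimal sketch of the deferred-write idea: buffer incoming small files and flush to the HSM once a size or age threshold is reached. This illustrates the technique only; it is not dCache's code, and the thresholds and flush_to_hsm stand-in are invented:

    #include <stdio.h>
    #include <time.h>

    #define FLUSH_BYTES (512L * 1024 * 1024)  /* hypothetical 512 MB threshold */
    #define FLUSH_SECS  (30 * 60)             /* hypothetical 30 min threshold */

    static long   pending_bytes = 0;
    static time_t first_pending = 0;

    static void flush_to_hsm(void)            /* stand-in for the tape copy */
    {
        printf("flushing %ld bytes to tape\n", pending_bytes);
        pending_bytes = 0;
        first_pending = 0;
    }

    /* Called after each small file lands on disk. */
    void file_written(long size)
    {
        if (pending_bytes == 0)
            first_pending = time(NULL);       /* start the aging clock */
        pending_bytes += size;
        if (pending_bytes >= FLUSH_BYTES ||
            time(NULL) - first_pending >= FLUSH_SECS)
            flush_to_hsm();
    }

    int main(void)
    {
        /* 600 files of 1 MB: the 512 MB threshold triggers one flush;
         * the remainder would go out later when the time threshold hits. */
        for (int i = 0; i < 600; i++)
            file_written(1024L * 1024);
        return 0;
    }

Aggregating like this is what lets tape drives see a few large streams instead of many small ones, which is where the relaxed HSM performance requirements come from.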

9 dCache Data Access
Local:
- dccp (requires a mounted pnfs filesystem), kerberized and non-kerberized
- dcap lib (API)
- dcap preload lib
- URL-style addressing (e.g. dccp dcap://door.do.name:port#/pnfs/do.name/path/to/file; see the example after this list)
- GridFTP (server embedded in dCache, client with globus-url-copy)
Remote:
- URL-style addressing
- GridFTP
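For concreteness, the two remote-capable paths above might be exercised like this; the door host is hypothetical, and the dCap and GridFTP ports shown are the conventional defaults:

    # URL-style addressing through a dcap door (hypothetical host)
    dccp dcap://door.fnal.gov:22125/pnfs/fnal.gov/path/to/file /local/scratch/file

    # Same file via the embedded GridFTP server and the Globus client
    globus-url-copy gsiftp://door.fnal.gov:2811/pnfs/fnal.gov/path/to/file \
        file:///local/scratch/file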

10 The dCap Library [Diagram: an application links libc.o and libdcache.so to get POSIX open/read/write/close against the dCache system; data moves through door and mover nodes, while namespace operations go through the NFS node to the native pnfs filesystem.]
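A minimal client along these lines, assuming libdcap and its header are installed; its dc_open/dc_read/dc_close calls mirror the POSIX counterparts, and the door host and file path below are hypothetical:

    /* Read a file out of dCache with the dcap API and stream it to stdout.
     * Build (assumed): cc -o dcat dcat.c -ldcap */
    #include <stdio.h>
    #include <fcntl.h>      /* O_RDONLY */
    #include <unistd.h>     /* ssize_t */
    #include <dcap.h>       /* dc_open, dc_read, dc_close from libdcap */

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        /* Hypothetical door host; 22125 is the conventional dCap port. */
        int fd = dc_open("dcap://door.fnal.gov:22125/pnfs/fnal.gov/path/to/file",
                         O_RDONLY);
        if (fd < 0) {
            perror("dc_open");
            return 1;
        }
        while ((n = dc_read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        dc_close(fd);
        return 0;
    }

The preload library takes the other route on the slide: loaded ahead of libc, it intercepts plain open/read/close, so unmodified binaries can address dcap:// URLs directly.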

11 Current Status: Access to Mass Storage Systems (dCache/Enstore) at FNAL
Local:
- dccp / encp (requires a mounted pnfs filesystem)
- URL-style addressing
- GridFTP (server embedded in dCache, client with globus-url-copy)
Remote:
- URL-style addressing
- GridFTP
  - In service today at CERN (lxcmsa) and UCSD (cms-dcache-serv, t2cms0)
  - Using kerberized certificates when communicating with dCache @ FNAL; this requires installation/configuration of:
    - NMI kx509/KCA binaries and libraries
    - Kerberos binaries, libraries, krb5.conf
Mass Storage elsewhere:
- dCache up and running at UCSD
- dCache @ CERN with an interface to CASTOR will be next
Still missing:
- Global dataset catalog, replication management

12 Storage Management: Evaluating Catalogs and Storage Managers
- There is no LHC data/storage management system, nor a global dataset catalog, as of yet
- US CMS is looking into SRB (developed by SDSC)
- SRB servers are now running on US CMS servers at UCSD, Caltech, and Fermilab, and soon at CERN
[Diagram: an application talks to the SRB server, which consults the MCAT and an HRM and brokers access to distributed storage resources: database systems (DB2, Oracle, Illustra, ObjectStore), archival storage systems (HPSS, ADSM, dCache), file systems, (Grid)FTP, http, ….]
The Storage Resource Broker is middleware:
- SRB is a distributed filesystem
- It virtualises resource access
- It mediates access to distributed heterogeneous resources
- It uses a MetaCATalog (MCAT) to facilitate the brokering
- It integrates data and metadata (a sample client session follows)
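For flavor, SRB is driven from the client side by its Scommands. A sketch of a session, assuming a configured client environment (~/.srb/.MdasEnv); the collection path below is invented:

    Sinit                                   # authenticate, start an SRB session
    Sput myfile.dat /home/cms.fnal/data     # store a local file into a collection
    Sls  /home/cms.fnal/data                # list the collection (resolved via MCAT)
    Sget /home/cms.fnal/data/myfile.dat .   # fetch the file back to local disk
    Sexit                                   # end the session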

13 Global Data Management w/SRB … [Diagram: sites A, B, …, N, each running dCache with a local MCAT and GridFTP access, federated under a global catalog.]

