The NorduGrid Project
Oxana Smirnova, Lund University
November 3, 2003, Košice

Slide 2: Some facts

NorduGrid is:
- a Globus-based Grid middleware solution for Linux clusters
- a large international 24/7 production-quality Grid facility
- a resource routinely used by researchers since summer 2002
- freely available software
- a project in development

NorduGrid is NOT:
- derived from other Grid solutions (e.g. EU DataGrid)
- an application-specific tool
- a testbed anymore
- a finalized solution

Slide 3: Some history

Initiated by several Nordic universities: Copenhagen, Lund, Stockholm, Oslo, Bergen, Helsinki.

Started in January 2001
- Initial budget: 2 years, 3 new positions
- Initial goal: to deploy EU DataGrid middleware to run the "ATLAS Data Challenge"

Cooperation with EU DataGrid
- Common Certification Authority and Virtual Organization tools, Globus2 configuration
- Common applications (high-energy physics research)

Switched from deployment to R&D in February 2002
- Forced by the necessity to execute the "ATLAS Data Challenges"
- Deployed a lightweight yet reliable and robust Grid solution in time for the ATLAS DC tests in May 2002

Will continue for 4-5 more years (and more?)
- Will form the "North European Grid Federation" together with the Dutch Grid, Belgium and Estonia
- Will provide middleware for the "Nordic Data Grid Facility"
- ...as well as for the Swedish Grid facility SWEGRID, the Danish Center for Grid Computing, Finnish Grid projects, etc.

Slide 4: The resources

Almost everything the Nordic academics can provide (ca. 1000 CPUs in total):
- 4 dedicated test clusters (3-4 CPUs)
- some junkyard-class second-hand clusters (4 to 80 CPUs)
- a few university production-class facilities (20 to 60 CPUs)
- two world-class clusters in Sweden, listed in the Top500 (238 and 398 CPUs)

Other resources come and go:
- Canada, Japan: test set-ups
- CERN, Dubna: clients
- It is open so far: anybody can join or leave
- The number of other installations is unknown

People:
- the "core" team keeps growing
- local sysadmins are only called upon when users need an upgrade

Slide 5: Who needs Grid

NorduGrid relies on academic resources of various ownership:
- national HPC centers
- universities
- research groups

All parts of the "spectrum" are interested in Grid development, although for different reasons.

At this stage, accounting is very vague, if it exists at all.

Resources: the supply/demand ratio is about 1±ε.

[Diagram: Grid at the intersection of Resources, Users and Technology]

Slide 6: Middleware

In order to build a Grid from a set of geographically distributed clusters, you need:
- secure authentication and authorization
- access to information about available resources
- fast and reliable file transfers

These services are provided by the so-called middleware.

Most Grid projects have built their middleware using the Globus Toolkit™ 2 as a starting point.

Slide 7: Components

[Diagram of the NorduGrid components, including the Information System]

Slide 8: NorduGrid specifics

1. It is stable by design:
   a) The nervous system: a distributed yet stable Information System (Globus MDS 2.2 + patches)
   b) The heart(s): the Grid Manager, the service to be installed on master nodes (based on Globus, replaces GRAM)
   c) The brain(s): the User Interface, the client/broker that can be installed anywhere as a standalone module (makes use of Globus)
2. It is lightweight, portable and non-invasive:
   a) Resource owners retain full control; the Grid Manager is effectively just another user (with many faces, though)
   b) Nothing has to be installed on worker nodes
   c) No requirements w.r.t. OS, resource configuration, etc.
   d) Clusters need not be dedicated
   e) Runs on top of an existing Globus installation (e.g. VDT)
   f) Works with any Linux flavor, Solaris, Tru64
3. Strategy: start with something simple that works for users and add functionality gradually.

Slide 9: How does it work

The Information System knows everything:
- substantially re-worked and patched Globus MDS
- distributed and multi-rooted
- allows for a pseudo-mesh topology
- no need for a centralized broker

The server (the Grid Manager) on each gatekeeper does most of the job:
- pre- and post-stages files
- interacts with the LRMS
- keeps track of job status
- cleans up the mess
- sends mails to users

The client (the User Interface) does the brokering, Grid job submission, monitoring, termination, retrieval, cleaning, etc.:
- interprets the user's job task
- gets the testbed status from the Information System
- forwards the task to the best Grid Manager
- does some file uploading, if requested

Slide 10: Information System

Uses Globus MDS 2.2:
- soft-state registration allows creation of any dynamic structure
- multi-rooted tree
- GIIS caching is not used by the clients
- several patches and bug fixes are applied

A new schema was developed to describe clusters; clusters are expected to be fairly homogeneous.
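Since MDS is LDAP-based, a cluster's published information can be inspected with any LDAP client. A minimal sketch, assuming a front-end called grid.example.org serving the usual Globus MDS port 2135; the host name is a placeholder, and the base DN and object class follow common Globus MDS / NorduGrid schema conventions, so details may differ per installation:

  # anonymous query of a cluster's local information tree (GRIS)
  ldapsearch -x -h grid.example.org -p 2135 \
      -b 'mds-vo-name=local,o=grid' '(objectclass=nordugrid-cluster)'

Pointed at an index service (GIIS) instead, a similar query is what the User Interface broker relies on to discover registered clusters.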

Slide 11: Front-end and the Grid Manager

- The Grid Manager replaces Globus GRAM, while still using Globus Toolkit™ 2 libraries
- All transfers are made via GridFTP
- Adds the possibility to pre- and post-stage files, optionally using Replica Catalog information
- Caching of pre-staged files is enabled
- Runtime environment support
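Because all data movement goes over GridFTP, files can also be staged by hand with the ngcopy utility listed later in this talk. A minimal sketch, where the host name and paths are invented placeholders (the gsiftp:// and file:// URL forms are the usual ones, but check the actual session-directory URL reported for your job):

  # download an output file from a cluster's front-end to the local disk
  ngcopy gsiftp://grid.example.org/jobs/12345/histo.hbook file:///tmp/histo.hbook

  # upload a local input file to a front-end
  ngcopy file:///tmp/input.zebra gsiftp://grid.example.org/incoming/input.zebra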

Slide 12: Summary of Grid services on the front-end machine

GridFTP server:
- plugin for job submission via a virtual directory
- conventional file access with Grid access control

LDAP server for information services

Grid Manager:
- forks "downloaders" and "uploaders" for file transfer

Slide 13: The User Interface

Provides a set of utilities to be invoked from the command line:

  ngsub     submit a task
  ngstat    obtain the status of jobs and clusters
  ngcat     display the stdout or stderr of a running job
  ngget     retrieve the result from a finished job
  ngkill    cancel a job request
  ngclean   delete a job from a remote cluster
  ngrenew   renew the user's proxy
  ngsync    synchronize the local job info with the MDS
  ngcopy    transfer files to, from and between clusters
  ngremove  remove files

Contains a broker that polls the MDS and decides to which queue at which cluster a job should be submitted:
- the user must be authorized to use the cluster and the queue
- the cluster's and queue's characteristics must match the requirements specified in the xRSL string (max CPU time, required free disk space, installed software, etc.)
- if the job requires a file that is registered in a Replica Catalog, the brokering gives priority to clusters where a copy of the file is already present
- from all queues that fulfill the criteria, one is chosen randomly, with a weight proportional to the number of free CPUs available for the user in each queue
- if there are no available CPUs in any of the queues, the job is submitted to the queue with the lowest number of queued jobs per processor

A typical command-line session is sketched below.
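A minimal session sketch with a trivial job; the xRSL string follows the syntax of the example on the next slide, while the job ID printed by ngsub (a gsiftp:// URL pointing into the cluster's job directory) is shown here only as a placeholder:

  # submit a toy job and note the job ID that ngsub prints
  ngsub '&(executable="/bin/echo")(arguments="hello grid")(stdout="out.txt")'

  # poll the job, watch its stdout, and fetch the results when it has finished
  ngstat  gsiftp://grid.example.org:2811/jobs/12345
  ngcat   gsiftp://grid.example.org:2811/jobs/12345
  ngget   gsiftp://grid.example.org:2811/jobs/12345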

Slide 14: Job description: extended Globus RSL (xRSL)

(&(executable="recon.gen.v5.NG")
  (arguments="dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
             "dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"
             "eg7.602.job" "999")
  (stdout="dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log")
  (stdlog="gridlog.txt")(join="yes")
  (|(&(|(cluster="farm.hep.lu.se")(cluster="lscf.nbi.dk")
        (*cluster="seth.hpc2n.umu.se"*)(cluster="login-3.monolith.nsc.liu.se"))
      (inputfiles=
        ("dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
         "rc://grid.uio.no/lc=dc1.lumi02.002000,rc=NorduGrid,dc=nordugrid,dc=org/zebra/dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra")
        ("recon.gen.v5.NG" "http://www.nordugrid.org/applications/dc1/recon/recon.gen.v5.NG.db")
        ("eg7.602.job" "http://www.nordugrid.org/applications/dc1/recon/eg7.602.job.db")
        ("noisedb.tgz" "http://www.nordugrid.org/applications/dc1/recon/noisedb.tgz")))
    (inputfiles=
      ("dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
       "rc://grid.uio.no/lc=dc1.lumi02.002000,rc=NorduGrid,dc=nordugrid,dc=org/zebra/dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra")
      ("recon.gen.v5.NG" "http://www.nordugrid.org/applications/dc1/recon/recon.gen.v5.NG")
      ("eg7.602.job" "http://www.nordugrid.org/applications/dc1/recon/eg7.602.job")))
  (outputFiles=
    ("dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log"
     "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/log/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log")
    ("histo.hbook"
     "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/histo/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.histo")
    ("dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"
     "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/ntuple/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"))
  (jobname="dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602")
  (runTimeEnvironment="ATLAS-6.0.2")
  (CpuTime=1440)(Disk=3000)(ftpThreads=10))
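Stripped of the DC1-specific file lists and the per-cluster alternatives, the job description above reduces to a short skeleton. This is a sketch for orientation only; the attribute names are taken from the example, while the executable, file names and URLs are placeholders:

  (&(executable="myprog")
    (arguments="input.dat")
    (inputfiles=("input.dat" "http://www.example.org/data/input.dat"))
    (outputFiles=("result.dat" "rc://rc.example.org/collection/result.dat"))
    (stdout="out.txt")(join="yes")
    (jobname="my-test-job")
    (runTimeEnvironment="ATLAS-6.0.2")
    (CpuTime=60)(Disk=100))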

Slide 15: Task flow

[Diagram: task flow from the User Interface to a cluster front-end; elements shown include the RSL job description, the Gatekeeper, the Grid Manager and GridFTP on the front-end, and two clusters A and B, with cluster B being chosen]

Slide 16: A snapshot

[Figure omitted]

Slide 17: Performance

The main load: "ATLAS Data Challenge 1" (DC1), with major load from May 2002 to August 2003:
- DC1, phase 1 (detector simulation): 1300 jobs in total, ca. 24 hours of processing and 2 GB of input each; total output size 762 GB; all files uploaded to Storage Elements and registered in the Replica Catalog
- DC1, phase 2 (pile-up of data): piling up the events above with a background signal; 1300 jobs, ca. 4 hours each
- DC1, phase 3 (reconstruction of signal): 2150 jobs, 5-6 hours of processing and 1 GB of input each

Other applications:
- calculations for string fragmentation models (Quantum Chromodynamics)
- quantum lattice model calculations (a sustained load of 150+ long jobs at any given moment for several days)
- particle physics analysis and modeling
- biology applications

At peak production, up to 500 jobs were managed by NorduGrid at the same time.

Slide 18: What is needed for installation

A cluster, or even a single machine.

For a server:
- any Linux flavor (binary RPMs exist for RedHat and Mandrake, possibly for Debian)
- a local resource management system, e.g. PBS
- a Globus installation (NorduGrid has its own distribution in a single RPM)
- a host certificate (and user certificates)
- some open ports (the number depends on the cluster size)
- one day to go through all the configuration details

The owner always retains full control:
- installing NorduGrid does not give automatic access to the resources, and the other way around
- but with a bit of negotiation, one can get access to very considerable resources on a very good network

The current stable release is 0.3.30; daily CVS snapshots are available.

A rough sketch of the server-side steps is given below.
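A rough sketch of the server-side installation, assuming the binary RPM route mentioned above; the package globs are placeholders and the exact configuration file and service names depend on the release, so treat this as orientation rather than a recipe:

  # install the Globus distribution packaged by NorduGrid, then the NorduGrid server packages
  rpm -Uvh globus-*.rpm
  rpm -Uvh nordugrid-*.rpm

  # then: install the host certificate issued by the Certification Authority,
  # describe the cluster and its PBS queues in the NorduGrid configuration,
  # open the required ports, and start the GridFTP server, the LDAP-based
  # information services and the Grid Manager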

Slide 19: Summary

The NorduGrid pre-release (currently 0.3.30) works reliably.

Release 1.0 is slowly but surely on its way; many fixes are still needed.

Developers are welcome: much functionality is still missing, such as:
- bookkeeping, accounting
- group- and role-based authorization
- a scalable resource discovery and monitoring service
- interactive tasks
- integrated, scalable and reliable data management
- interfaces to other resource management systems

New users and resources are welcome.

