Grid User Interface for ATLAS & LHCb

ATLAS and GridPP

GridPP is a collaboration of Particle Physicists and Computing Scientists from the UK and CERN who are building a Grid for Particle Physics. UK physicists are currently preparing for the Large Hadron Collider (LHC), which will turn on in 2007 at CERN and produce an enormous stream of data (millions of Gigabytes per year) that must be stored and processed (using up to one hundred thousand processors). No single computer centre will be able to provide both the storage and the computing facilities for the entire LHC operation, so distribution of computation and data via the Grid is essential. The UK expects to play a major role in the analysis of data from the LHC and the exciting discoveries that are anticipated. UK physicists are also participating in a number of US-based experiments that are already producing data; although not yet on the scale expected from the LHC, these experiments are using early Grid developments as a practical tool for doing real analysis today.

There are three main developments within GridPP: Grid software (middleware); Grid-enabled applications; and provision of computing infrastructure in the UK and at CERN. GridPP will enable testing of a prototype Grid of significant scale, providing resources for the LHC experiments ALICE, ATLAS, CMS and LHCb, the US-based experiments BaBar, CDF and D0, and lattice theorists from UKQCD.

GridPP Middleware

Prototype middleware is being developed in the UK as part of the EU DataGrid project and is illustrated by:
- Dynamic Grid Optimisation, where strategies are developed to make best use of all the available resources across the Grid.
- R-GMA, the Relational Grid Monitoring Architecture, used to access the information services essential to Grid operation. Producers register with the registry and describe the type and structure of the information they want to make available to the Grid; consumers query the registry and then contact the producer directly to obtain the relevant data.
- GridSite, which enables members to update the GridPP central web service using Grid certificate authentication.

European Data Grid Integration

The UK has been integrating and validating the EDG Grid middleware for ATLAS. Initial tests of EDG release 1.2 on the core sites at CERN, CNAF, Lyon, NIKHEF and RAL by the ATLAS-EDG group revealed many problems with Resource Broker saturation and with the use of a single Replica Catalogue (solved with RLS); the job success rate was only 70%. A more recent UK mini production used input data stored on RAL's tape server, requirements expressed in JDL, the IC Resource Broker and boxed-set executables. The test took only 1 week, 1 operator and 3000 SpecInt95 days, and the success rate was higher than 90%. While not yet suitable for production, this is an encouraging step towards a brokered production system.

GridPP Applications for ATLAS

GANGA/GRAPPA is a project working to produce an interface between the user, the Grid middleware and the experimental software framework. It takes advantage of the shared software framework in the two experiments. It is being developed jointly with the LHCb experiment and, since it is built from component technologies, will allow reuse elsewhere. It was started within GridPP but is a vital partnership with US ATLAS.
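To make the idea concrete, here is a minimal sketch, in Python (GANGA's implementation language), of the pattern described above: the application to be run is described separately from the backend plug-in that submits it, and submitted jobs are tracked in a registry. All class names, attributes and the "athena" executable below are hypothetical illustrations, not the actual GANGA/GRAPPA interfaces.

    # Hypothetical sketch only -- these are NOT the real GANGA/GRAPPA classes.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class Application:
        """What the job runs: an executable plus its configuration
        (e.g. options for a Gaudi/Athena application)."""
        executable: str
        options: Dict[str, str] = field(default_factory=dict)


    class Backend:
        """Base class for submission plug-ins (local batch, LSF, PBS,
        an EDG Resource Broker, ...)."""
        def submit(self, app: Application) -> str:
            raise NotImplementedError


    class LocalBackend(Backend):
        """Trivial plug-in standing in for a real batch or Grid submitter."""
        def submit(self, app: Application) -> str:
            # A real plug-in would build a job script or JDL and hand it
            # to the underlying system; here we just return a fake handle.
            return "local://" + app.executable


    @dataclass
    class Job:
        """Ties an application to a backend and records its state."""
        application: Application
        backend: Backend
        status: str = "new"
        handle: str = ""


    class JobsRegistry:
        """Keeps every submitted job so the user can list and monitor them."""
        def __init__(self) -> None:
            self._jobs: List[Job] = []

        def submit(self, job: Job) -> None:
            job.handle = job.backend.submit(job.application)
            job.status = "submitted"
            self._jobs.append(job)

        def jobs(self) -> List[Job]:
            return list(self._jobs)


    # The same application description could be resubmitted through a
    # different backend plug-in without changing the application part.
    registry = JobsRegistry()
    registry.submit(Job(Application("athena", {"events": "1000"}), LocalBackend()))
    for j in registry.jobs():
        print(j.status, j.handle)

The point of the design is that the batch- and Grid-specific code lives only in the backend plug-ins, which is exactly the part the UK is responsible for in GANGA.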
GridPP Packaging, Installation, Configuration…

An important issue is user software installation: the large number of sites, many serving diverse user groups, requires automated and scalable installation tools. We create coherent RPMs and tar files from CMT (which maintains the software and the runtime environment), and CMT exposes the package dependencies in the form of cache files. These are then used by PACMAN, which can either pull or push complete installations to remote sites. Scripts are available to make the process semi-automatic.

[Figure: Simulation of the GridPP testbed — job times for different replication policies, for 10,000 simulated jobs.]

GridPP also funds the hardware for the prototype Tier-1A facility based at RAL. This has 400 kSI2k at present, 80 TB of usable RAID disk and a 180 TB tape-based tape store. In the recent Data Challenge 1 (DC1) for ATLAS, the UK Tier-1 and Tier-2 facilities represented the second largest available CPU resource.

[Figure: The GANGA internal architecture — a class diagram relating JobHandler, Job, JobsRegistry, JobsCatalog, Application, GaudiApplicationHandler, Configuration, ConfDataBase, Parameter and Executable, with Requirements, JobAttributes and Credentials attached to each Job.]

GridPP also contributed to AtCom, the tool used for DC1 production, which acts as a testbed for GANGA developments. The UK is responsible for the plug-ins for each batch and Grid system.

Jobs allocated and successfully completed per site:

Site             RAL   Cambridge   IC   Birmingham
Jobs allocated    60         180   40           20
Success           57         170   40           17

Job requirements:
- sites in the UK with outbound connectivity
- > 1 day of processing time
- >= 500 MB of memory
- >= 5 worker nodes

User input has proven essential to building the system. Jobs must make as few assumptions as possible about the system. Configuration and integration take as long as writing the middleware. Inter-operability between Grids will be a challenge for us all.
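For illustration, requirements like those listed above (UK sites with outbound connectivity, more than a day of processing time, at least 500 MB of memory, several worker nodes) might be written in an EDG-style JDL file roughly as follows. The executable name is invented, the attribute names come from the later GLUE information schema and are indicative only, and the exact syntax differed between middleware releases.

    // Illustrative JDL sketch only; attribute names and values are indicative.
    Executable    = "atlas-sim.sh";      // hypothetical boxed-set wrapper script
    StdOutput     = "sim.out";
    StdError      = "sim.err";
    InputSandbox  = {"atlas-sim.sh"};
    OutputSandbox = {"sim.out", "sim.err"};
    Requirements  = RegExp(".uk", other.GlueCEUniqueID) &&          // UK sites
                    other.GlueCEPolicyMaxWallClockTime >= 1440 &&   // > 1 day, in minutes
                    other.GlueHostMainMemoryRAMSize >= 500 &&       // memory in MB
                    other.GlueCEInfoTotalCPUs >= 5;                 // rough stand-in for worker nodes
    Rank          = other.GlueCEStateFreeCPUs;

The Resource Broker matches the Requirements expression against the information each Computing Element publishes, and the Rank expression then steers the job towards the matching site with the most free CPUs.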

