Steve Lloyd, IoP Dublin, March 2005, Slide 1
The Grid for Particle Physic(ist)s
Steve Lloyd, Queen Mary, University of London
What is it and how do I use it?

Steve Lloyd, IoP Dublin, March 2005, Slide 2
LHC Data Challenge
Starting from this event… we are looking for this signature.
- ~100,000,000 electronic channels
- 800,000,000 proton-proton interactions per second, but only a handful of Higgs per second
- Selectivity: 1 in 10^13, like looking for 1 person in a thousand world populations, or for a needle in 20 million haystacks!
- 10 PBytes of data a year (10 million GBytes = 14 million CDs)
Computing solution: the Grid.
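The CD comparison is simple arithmetic, assuming the standard ~700 MB (0.7 GB) capacity of a CD:

10 PBytes/year = 10,000,000 GBytes/year; 10,000,000 GBytes / 0.7 GBytes per CD ≈ 14,000,000 CDs.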

Steve Lloyd, IoP Dublin, March 2005, Slide 3
The Grid
Ian Foster / Carl Kesselman: "A computational Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive and inexpensive access to high-end computational capabilities."
'Grid' means different things to different people. All agree it's a funding opportunity!

Steve Lloyd, IoP Dublin, March 2005, Slide 4
Electricity Grid
Analogy with the electricity power grid: power stations, a distribution infrastructure and a 'standard interface'.

Steve Lloyd, IoP Dublin, March 2005, Slide 5
Computing Grid
The computing equivalent: computing and data centres, with the fibre optics of the Internet as the distribution infrastructure.

Steve Lloyd, IoP Dublin, March 2005, Slide 6
Middleware
Middleware is the operating system of a distributed computing system.
- Single PC: your programs (Word/Excel, the web, games, your own program) sit on an operating system, which sits on the hardware (CPU, disks etc.).
- Grid: your program sits on middleware (Resource Broker, Information Service, Replica Catalogue, Bookkeeping Service), which sits on the distributed hardware (your User Interface machine, CPU clusters and disk servers).

Steve Lloyd, IoP Dublin, March 2005, Slide 7
GridPP
- 19 UK Universities, CCLRC (RAL & Daresbury) and CERN
- Funded by the Particle Physics and Astronomy Research Council (PPARC)
- GridPP1: £17m, "From Web to Grid"
- GridPP2: £16m, "From Prototype to Production"
- Not planning GridPP3; the aim is to incorporate Grid activities and facilities into the baseline programme.

Steve Lloyd, IoP Dublin, March 2005, Slide 8
International Collaboration
- EU DataGrid (EDG): middleware development project
- US and other Grid projects: interoperability
- LHC Computing Grid (LCG): Grid deployment project for the LHC
- EU Enabling Grids for e-Science (EGEE): Grid deployment project for all disciplines
GridPP works within both LCG and EGEE.

Steve Lloyd, IoP Dublin, March 2005, Slide 9
GridPP Support
- Manpower for experiments
- Manpower for middleware development: Metadata, Storage, Workload Management, Security, Information and Monitoring, Networking (not directly supported but using LCG)
- Hardware and manpower at RAL (LHC Tier-1, BaBar Tier-A)
- Manpower for system support at Institutes (Tier-2s)
- Manpower for LCG at CERN (under discussion)

Steve Lloyd, IoP Dublin, March 2005, Slide 10
Paradigm Shift?
LHCb Monte Carlo production, non-Grid : Grid share by month:
- May: 89% : 11%
- Jun: 80% : 20%
- Jul: 77% : 23%
- Aug: 27% : 73%

Steve Lloyd, IoP Dublin, March 2005, Slide 11
Tier Structure
- Tier 0: CERN computer centre (online system and offline farm)
- Tier 1: national centres (RAL UK, France, Italy, Germany, USA)
- Tier 2: regional groups (ScotGrid, NorthGrid, SouthGrid, London)
- Tier 3: institutes (Glasgow, Edinburgh, Durham)
- Tier 4: workstations

Steve Lloyd, IoP Dublin, March 2005, Slide 12
Resource Discovery at Tier-1
[Plots of Tier-1 farm load: pre-Grid, July to October 2000, versus Grid, July 2004]
Full again in 8 hours!

Steve Lloyd, IoP Dublin, March 2005, Slide 13
UK Tier-2 Centres
- ScotGrid: Durham, Edinburgh, Glasgow
- NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield
- SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD, Warwick
- London: Brunel, Imperial, QMUL, RHUL, UCL
Mostly funded by HEFCE.

Steve Lloyd, IoP Dublin, March 2005, Slide 14
Tier-2 Resources
[Table: committed CPU and disk resources at each Tier-2 (London, NorthGrid, ScotGrid, SouthGrid) in 2007, per experiment (ALICE, ATLAS, CMS, LHCb), alongside the experiments' requirement of a Tier-2 in 2008]
- Doesn't include SRIF3: need SRIF3 resources!
- Experiment shares determined by the Institutes who bought the kit.
- Overall LCG shortfall ~30% in CPU, ~50% in disk (all Tiers).

Steve Lloyd, IoP Dublin, March 2005, Slide 15
The LCG Grid
123 sites, 33 countries, 10,314 CPUs, 3.3 PBytes of disk.

Steve Lloyd, IoP Dublin, March 2005, Slide 16
Grid Demo

Steve Lloyd, IoP Dublin, March 2005, Slide 17
Getting Started
1. Get a digital certificate (authentication: who you are).
2. Join a Virtual Organisation (VO) (authorisation: what you are allowed to do). For the LHC, join LCG and choose a VO.
3. Get access to a local User Interface machine (UI) and copy your files and certificate there.
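A minimal sketch of the certificate copy in step 3, assuming the certificate was exported from a web browser as cert.p12 (the file and host names are illustrative) and that the UI uses the usual ~/.globus layout:

# Copy the exported certificate to the UI (host name is illustrative)
scp cert.p12 ui.example.ac.uk:
# On the UI, split it into the PEM certificate/key pair that grid-proxy-init expects
mkdir -p ~/.globus
openssl pkcs12 -in cert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
openssl pkcs12 -in cert.p12 -nocerts -out ~/.globus/userkey.pem
# The private key must not be readable by anyone else
chmod 444 ~/.globus/usercert.pem
chmod 400 ~/.globus/userkey.pem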

Steve Lloyd, IoP Dublin, March 2005, Slide 18
Job Preparation
Prepare a file of Job Description Language (JDL):

############# athena.jdl #################
# Script to run:
Executable = "athena.sh";
StdOutput = "athena.out";
StdError = "athena.err";
# Input files: the script, my C++ code, job options and build files:
InputSandbox = {"athena.sh", "MyJobOptions.py", "MyAlg.cxx", "MyAlg.h",
                "MyAlg_entries.cxx", "MyAlg_load.cxx", "login_requirements",
                "requirements", "Makefile"};
# Output files:
OutputSandbox = {"athena.out", "athena.err", "ntuple.root", "histo.root",
                 "CLIDDBout.txt"};
# Choose the ATLAS version (satisfied by ~32 sites):
Requirements = Member("VO-atlas-release-9.0.4",
                      other.GlueHostApplicationSoftwareRunTimeEnvironment);
################################################
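Before submitting, you can ask the Resource Broker which Computing Elements satisfy the Requirements expression. A quick check, assuming the standard EDG user-interface commands of the period:

~/atlas]$ edg-job-list-match --vo atlas athena.jdl
# Lists the CEs that match the JDL requirements, i.e. the ~32 sites
# publishing the VO-atlas-release-9.0.4 runtime environment tag.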

Steve Lloyd, IoP Dublin, March 2005, Slide 19
Job Submission
Make a copy of your certificate to send out (~once a day):

~/atlas]$ grid-proxy-init
Your identity: /C=UK/O=eScience/OU=QueenMaryLondon/L=Physics/CN=steve lloyd
Enter GRID pass phrase for this identity:
Creating proxy .......................... Done
Your proxy is valid until: Thu Mar 17 03:25:
~/atlas]$

Submit the job, giving the VO, the JDL file and a file to hold the job IDs:

~/atlas]$ edg-job-submit --vo atlas -o jobIDfile athena.jdl
Selected Virtual Organisation name (from --vo option): atlas
Connecting to host lxn1188.cern.ch, port 7772
Logging to host lxn1188.cern.ch, port 9002
================================ edg-job-submit Success ====================================
The job has been successfully submitted to the Network Server.
Use edg-job-status command to check job current status.
Your job identifier (edg_jobId) is:
-
The edg_jobId has been saved in the following file:
/home/lloyd/atlas/jobIDfile
============================================================================================
~/atlas]$
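The proxy created by grid-proxy-init has a limited lifetime (12 hours by default), so it is worth checking how much time is left before submitting a long run of jobs. A minimal check, assuming the standard Globus client tools on the UI:

~/atlas]$ grid-proxy-info
# Prints the proxy subject, path and remaining lifetime ("timeleft");
# if it has expired, simply run grid-proxy-init again.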

Steve LloydIoP Dublin March 2005 Slide 20 ~/atlas]$ edg-job-status -i jobIDfile : 2 : 3 : 4 : 5 : 6 : 7 : a : all q : quit Choose one or more edg_jobId(s) in the list - [1-7]all:7 ************************************************************* BOOKKEEPING INFORMATION: Status info for the Job : Current Status: Done (Success) Exit code: 0 Status Reason: Job terminated successfully Destination: lcg00125.grid.sinica.edu.tw:2119/jobmanager-lcgpbs-short reached on: Wed Mar 16 17:45: ************************************************************* ~/atlas]$ RAL Valencia CERN Taiwan Job Status Taiwan Find out its status: Ran at:

Steve Lloyd, IoP Dublin, March 2005, Slide 21
Job Retrieval
Retrieve the output:

~/atlas]$ edg-job-get-output --dir . -i jobIDfile
Retrieving files from host: lxn1188.cern.ch ( for )
*********************************************************************************
                        JOB GET OUTPUT OUTCOME
Output sandbox files for the job:
-
have been successfully retrieved and stored in the directory:
/home/lloyd/atlas/lloyd_0uDjtwbBbj8DTRetxYxoqQ
*********************************************************************************
~/atlas]$ ls -lt /home/lloyd/atlas/lloyd_0uDjtwbBbj8DTRetxYxoqQ
total
-rw-r--r--  1 lloyd hep   224 Mar 17 10:47 CLIDDBout.txt
-rw-r--r--  1 lloyd hep       Mar 17 10:47 ntuple.root
-rw-r--r--  1 lloyd hep  5372 Mar 17 10:47 athena.err
-rw-r--r--  1 lloyd hep       Mar 17 10:47 athena.out
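The ntuple and histograms come back as ordinary files, so they can be inspected locally in the usual way, for example (assuming ROOT is set up on the UI):

~/atlas]$ root -l /home/lloyd/atlas/lloyd_0uDjtwbBbj8DTRetxYxoqQ/ntuple.root
# Opens the retrieved ntuple in an interactive ROOT session; inside ROOT,
# ".ls" lists the file's contents and "new TBrowser()" browses them.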

Steve Lloyd, IoP Dublin, March 2005, Slide 22
Conclusions
- The Grid is here, and it works!
- Currently it is difficult to install and maintain the middleware and the experiments' software.
- It is straightforward to use.
- There are huge resources available: last week LXBATCH had 6,500 ATLAS jobs queued while LCG had 3,017 free CPUs.
- Need to scale to full size, ~10,000 to 100,000 CPUs.
- Need stability, robustness, security (a hackers' paradise!) etc.
- Need continued funding beyond the start of the LHC!
- Use it!

Steve Lloyd, IoP Dublin, March 2005, Slide 23
Further Info