Slide 1: Grid Enabling a Small Cluster
Doug Olson, Lawrence Berkeley National Laboratory
STAR Collaboration Meeting, 13 August 2003, Michigan State University

Slide 2: Contents
- Overview of multi-site data grid
- Features of a grid-enabled cluster
- How to grid-enable a cluster
- Comments

Slide 3: (figure only; no text survived in the transcript)

Slide 4: CMS Integration Grid Testbed
- Managed by ONE Linux box at Fermilab.
- Time to process 1 event: (the timing and aggregate MHz figures did not survive in the transcript).
- From Miron Livny; example from last fall.

Slide 5: Example Grid Application: Data Grids for High Energy Physics (the famous Harvey Newman slide)
- There is a "bunch crossing" every 25 nsec; there are 100 "triggers" per second; each triggered event is ~1 MByte in size.
- Physicists work on analysis "channels". Each institute will have ~10 physicists working on one or more channels; data for these channels should be cached by the institute server (physics data cache).
- The diagram shows the tiered model: Tier 0 at the CERN Computer Centre (online system feeding an offline processor farm of ~20 TIPS), Tier 1 regional centres (FermiLab ~4 TIPS; France, Italy and Germany regional centres; SLAC, FNAL, BNL noted for the US), Tier 2 centres of ~1 TIPS each (e.g. Caltech), institutes of ~0.25 TIPS, and physicist workstations at Tier 4.
- Data rates labelled in the diagram: ~PBytes/sec, ~100 MBytes/sec into the CERN centre, ~622 Mbits/sec (or air freight, deprecated) between the upper tiers, ~1 MBytes/sec to physicist workstations.
- 1 TIPS is approximately 25,000 SpecInt95 equivalents.
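A quick consistency check that is not on the original slide but follows from its numbers: the trigger rate and event size together set the data rate coming off the online system,

    100 events/s x ~1 MByte/event ~ 100 MBytes/s,

which matches the ~100 MBytes/sec link into the CERN computer centre.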

Slide 6: What do we get?
- Distribute load across available resources.
- Access to resources shared with other groups/projects. Eventually, sharing across the grid will look like sharing within a cluster (see below).
- On-demand access to a much larger resource than is available in dedicated fashion. (This also spreads costs across more funding sources.)

Slide 7: Features of a grid site (server-side services)
- Local compute & storage resources
  - Batch system for the cluster (PBS, LSF, Condor, ...)
  - Disk storage (local, NFS, ...)
  - NIS or Kerberos user accounting system
  - Possibly robotic tape (HPSS, OSM, Enstore, ...)
- Added grid services
  - Job submission (Globus gatekeeper)
  - Data transport (GridFTP)
  - Grid user to local account mapping (grid-mapfile, ...)
  - Grid security (GSI)
  - Information services (MDS, GRIS, GIIS, Ganglia)
  - Storage management (SRM, HRM/DRM software)
  - Replica management (HRM & FileCatalog for STAR)
  - Grid admin person
- Required STAR services
  - MySQL db for the FileCatalog
  - The Scheduler provides (will provide) the client-side grid interface
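As a concrete illustration (not from the slides) of what the added grid services expose to users: once a gatekeeper and GridFTP server are running on the cluster, a user with a valid certificate can exercise them roughly as below. The hostname and jobmanager name are placeholders.

    # Obtain a short-lived GSI proxy from the user's X509 certificate
    grid-proxy-init

    # Run a trivial job through the Globus gatekeeper into the local batch system
    # (stargrid.example.edu and jobmanager-pbs are hypothetical names)
    globus-job-run stargrid.example.edu/jobmanager-pbs /bin/hostname

    # Move a file to the site with GridFTP
    globus-url-copy file:///tmp/test.dat gsiftp://stargrid.example.edu/data/test.dat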

Slide 8: How to grid-enable a cluster
- Sign up on the mailing lists.
- Study Globus Toolkit administration.
- Install and configure:
  - VDT (grid middleware)
  - Ganglia (cluster monitoring)
  - HRM/DRM (storage management & file transfer)
- Set up a method for grid-mapfile (user) management.
- Additionally install/configure MySQL, the FileCatalog, and the STAR software.

Slide 9: Background URLs
- stargrid-l mail list
- Globus Toolkit: toolkit home page, mail lists, documentation (www-unix.globus.org/toolkit/documentation.html), admin guide
- Condor: mail lists condor-users and condor-world
- VDT
- SRM
(Most of the hyperlinks on this slide did not survive in the transcript.)

Slide 10: VDT grid software distribution
- The Virtual Data Toolkit (VDT) is the software distribution packaging for the US physics grid projects (GriPhyN, PPDG, iVDGL).
- It uses pacman as the distribution tool (developed by Saul Youssef, BU ATLAS).
- VDT contents (1.1.10):
  - Condor/Condor-G 6.5.3, Globus 2.2.4, GSI-OpenSSH, Fault Tolerant Shell v2.0, Chimera Virtual Data System 1.1.1, Java JDK 1.1.4, KX509/KCA, MonALISA, MyProxy, PyGlobus, RLS 2.0.9, ClassAds 0.9.4, NetLogger
  - Client, Server and SDK packages
  - Configuration scripts
- Support model for VDT:
  - The VDT team centered at U. Wisconsin performs testing and patching of the code included in VDT.
  - VDT is the preferred contact for support of the included software packages (Globus, Condor, ...).
  - Support effort comes from iVDGL, NMI, and other contributors.

Slide 11: Additional software
- Ganglia - cluster monitoring
  - Not strictly required for the grid, but STAR uses it as input to the grid information services.
- HRM/DRM - storage management & data transfer
  - Contact Eric Hjort & Alex Sim.
  - Expected to be in VDT in the future.
  - Being used for bulk data transfer between BNL & LBNL.
- Plus the STAR software ...

Slide 12: VDT installation (Globus, Condor, ...)
Steps:
- Install pacman
- Prepare to install VDT (directory, accounts)
- Install the VDT software using pacman
- Prepare to run the VDT components
- Get host & service certificates
- Optionally install & run tests (from VDT)
Where to install VDT:
- VDT-Server on gatekeeper nodes
- VDT-Client on nodes that initiate grid activities
- VDT-SDK on nodes for grid-dependent software development
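A rough sketch, not from the slide, of what a pacman-driven VDT server install looked like in this era; the pacman tarball location and the VDT cache URL below are illustrative placeholders, and the current values should be taken from the VDT installation instructions.

    # Unpack pacman and set up its environment (download location is illustrative)
    tar xzf pacman-latest.tar.gz
    cd pacman-*
    source setup.sh

    # Install the server bundle into a dedicated directory on the gatekeeper node
    # (the cache URL is a placeholder; "VDT-Server" is the bundle named on the slide)
    mkdir -p /opt/vdt && cd /opt/vdt
    pacman -get http://vdt.cs.wisc.edu/vdt_cache:VDT-Server

    # Pick up the VDT environment, then request host & service certificates
    # and run the optional tests, as listed in the steps above
    source /opt/vdt/setup.sh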

Slide 13: Manage users (grid-mapfile, ...)
- Users on the grid are identified by their X509 certificate.
- Every grid transaction is authenticated with a proxy derived from the user's certificate. Every grid communication path is also authenticated with host & service certificates (SSL).
- The default gatekeeper installation uses a grid-mapfile to convert the X509 identity to a local user id:

    [stargrid01] ~/> cat /etc/grid-security/grid-mapfile | grep doegrids
    "/DC=org/DC=doegrids/OU=People/CN=Douglas L Olson" olson
    "/DC=org/DC=doegrids/OU=People/CN=Alexander Sim" asim
    "/OU=People/CN=Dantong Yu/DC=doegrids/DC=org" grid_a
    "/OU=People/CN=Mark Sosebee/DC=doegrids/DC=org" grid_a
    "/OU=People/CN=Shawn McKee 83467/DC=doegrids/DC=org" grid_a

- There are obvious security considerations that need to fit with your site requirements.
- There are projects underway to manage this mapping for a collaboration across several sites - a work in progress.
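For a small site the mapping can be maintained by hand or with the helper scripts shipped with Globus; a hedged sketch follows (the DN and local account are invented, and the paths assume a standard $GLOBUS_LOCATION layout).

    # Add a mapping from a DOEGrids DN to a local account
    # (both the DN and the account name here are made-up examples)
    $GLOBUS_LOCATION/sbin/grid-mapfile-add-entry \
        -dn "/DC=org/DC=doegrids/OU=People/CN=Some Physicist" -ln starusr

    # Check the grid-mapfile for syntax problems and duplicates
    $GLOBUS_LOCATION/sbin/grid-mapfile-check-consistency

    # From the user's side, verify the mapping with a trivial gatekeeper job
    grid-proxy-init
    globus-job-run <gatekeeper-host>/jobmanager-fork /usr/bin/id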

Slide 14: Comments
- Figure 6 months full time to start, then 0.25 FTE, for a cluster that is used rather heavily by a number of users.
  - This assumes a reasonably competent Linux cluster administrator who is not yet familiar with the grid.
- Grid software and the STAR distributed data management software are still evolving, so there is some work to follow this (within the 0.25 FTE).
- During the next year: static data distribution.
- In 1+ year we should have rather dynamic, user-driven data distribution.