LCG and Tier-1 Facilities Status ● LCG interoperability. ● Tier-1 facilities. ● Observations. (Not guaranteed to be wry, witty or nonobvious.) Joseph L. Kaiser.

Presentation transcript:

LCG and Tier-1 Facilities Status
● LCG interoperability.
● Tier-1 facilities.
● Observations. (Not guaranteed to be wry, witty or nonobvious.)
Joseph L. Kaiser

LCG Interoperability
● Three interoperability choices:
  1. Dedicated machinery.
     1. Nodes/10 CPUs – now 13 nodes/26 CPUs, no SE.
     2. LCG-2_0_0 – installed via LCFGng. Upgrade available.
     3. We have been "off the air" since #2 started.
  2. LCG gateway to US-CMS resources.
     1. LCG CE installed on cmssrv10 with Condor-G – May. (A hedged submission sketch follows this slide.)
     2. Sporadic work has been done; the problem: time != progress.
     3. Solutions in the new upgrade?
  3. Common interfaces.
     1. Someone must be doing something.
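As a rough illustration of what driving the cmssrv10 gateway with Condor-G involves, the sketch below builds a classic globus-universe submit description and hands it to condor_submit. It is a minimal sketch under stated assumptions: the fully qualified gatekeeper host name and the jobmanager name are guesses for illustration, not the actual cmssrv10 configuration.

#!/usr/bin/env python
"""Sketch: route a trivial test job through an LCG CE gateway with Condor-G.

The gatekeeper contact string below is illustrative only -- the jobmanager
name and the fully qualified host name are assumptions, not the actual
cmssrv10 setup described in these slides.
"""
import subprocess

# Hypothetical gatekeeper contact (host/jobmanager); the real jobmanager
# depends on how the CE was configured (condor, pbs, fork, ...).
GATEKEEPER = "cmssrv10.fnal.gov/jobmanager-condor"

SUBMIT_DESCRIPTION = """\
universe        = globus
globusscheduler = {gk}
executable      = /bin/hostname
transfer_executable = false
output          = test_job.out
error           = test_job.err
log             = test_job.log
queue
""".format(gk=GATEKEEPER)


def submit_test_job():
    """Write the classic Condor-G (globus universe) submit file and submit it."""
    with open("test_lcg_ce.sub", "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    # condor_submit prints the assigned cluster id on success; a non-zero
    # exit code means the local schedd or the gatekeeper rejected the job.
    subprocess.check_call(["condor_submit", "test_lcg_ce.sub"])


if __name__ == "__main__":
    submit_test_job()

The point of the sketch is the shape of the interface: the same local Condor submit machinery can push work at the Globus gatekeeper on the CE, so the gateway can be exercised without touching the LCG middleware directly.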

Things to do
● Upgrade to the current tag on dedicated resources and on the CE interface (cmssrv10).
● Get back online with current resources.
● Enter the LCG fray.
● Configure the CE interface.
● Test simple jobs with current LCG (see the sketch after this list).
● Test simple jobs with one CMS cluster.
● Configure the farm to behave as LCG worker nodes. Test again.
● Install an SE.
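For the "test simple jobs with current LCG" item, a minimal end-to-end check is to push a hello-world job through the LCG-2 user-interface tools. The sketch below is one such check under assumptions: it uses the standard edg-job-submit and edg-job-status commands with a generic JDL; the file names and the single status poll are illustrative, not a prescribed CMS test.

#!/usr/bin/env python
"""Sketch of the "test simple jobs with current LCG" step: write a trivial
JDL, submit it with the LCG-2 user tools, and ask for its status once.
The job is a generic hello-world, not a CMS application."""
import subprocess

JDL = """\
Executable    = "/bin/hostname";
Arguments     = "-f";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
"""


def run_simple_test():
    with open("hello.jdl", "w") as f:
        f.write(JDL)
    # edg-job-submit prints the job identifier (an https:// URL pointing at
    # the Logging & Bookkeeping server) among its output lines.
    out = subprocess.check_output(["edg-job-submit", "hello.jdl"])
    job_ids = [line for line in out.decode().splitlines()
               if line.strip().startswith("https://")]
    if not job_ids:
        raise RuntimeError("no job identifier found in edg-job-submit output")
    # Check the status once; a real test loops until Done or Aborted and then
    # fetches the sandbox with edg-job-get-output.
    subprocess.call(["edg-job-status", job_ids[0].strip()])


if __name__ == "__main__":
    run_simple_test()

Once a job like this completes and edg-job-get-output retrieves the sandbox, that is a reasonable first confirmation that the farm nodes are behaving as LCG worker nodes.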

LCG Observations
● LCG support is good.
● The LCG installation infrastructure (LCFGng) is not.
● The LCG instructions for doing manual installs are good. (Manual installs are Rocksible.)
● My input has been nil once up and running, due to other time constraints.
● FNAL has a minimal presence due to time and resource constraints, both human and machine.

Tier-1 Facilities – Current
● 196 machines in 5 clusters, plus various servers.
● LCG: 13 WNs.
● Storage: 20 dCache servers (admin, pool) – 30 TB.
● 4 IBRIX nodes – 9 TB of disk.
● UAF – 3 servers, 36 WNs, a bunch o' disk.
● PG – 4 servers, 84 WNs.
● DGT – 1 server, 4 WNs.
● Various servers – Rocks, console, Bugzilla, VOMS, GIIS, GRIS, LCG-CE, MOP, Tomcat/webserver, Ganglia.
● 2 FTEs.

Tier-1 Facilities – Future
● This year:
  – 100 WNs on the floor.
  – 4 new storage nodes, 5 new servers.
  – 10 TB of storage.
● Next year:
  – 300 new worker nodes, up to 20 new servers.
  – Storage = gobs and gobs.
● 3 FTEs.

Stuff to do
● LCG interoperability.
● dCache vs. Lustre vs. GFS.
● Condor or some other batch system.
● Sharing = fine-grained policy.
● Continued Rocks work – scalability, configuration, etc.
● Load balancing for the UAF – LVS, switch, etc.
● High availability for critical servers.
● Grid3/Tier-2 involvement/support.
● Documentation.
● Monitoring – users, processes, machines (see the Ganglia sketch after this list).
● Security (FBS).
● Time to plan and architect.
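On the monitoring item: the current setup already includes Ganglia, and gmond publishes its cluster state as XML on a TCP port (8649 by default). The snippet below is a minimal sketch of host-level monitoring built on that: it pulls the XML dump from a collector and prints each host's one-minute load. The collector host name is a placeholder, and the metric choice is arbitrary.

#!/usr/bin/env python
"""Minimal monitoring sketch: pull the cluster state that gmond publishes as
XML on its default TCP port (8649) and print the one-minute load per host.
The collector host name below is a placeholder, not one of the farm nodes."""
import socket
import xml.etree.ElementTree as ET

GMOND_HOST = "ganglia.example.org"   # placeholder collector node
GMOND_PORT = 8649                    # gmond default XML port


def fetch_cluster_xml(host, port):
    """Read the full XML dump that gmond sends on connect, then close."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)


def report_load(xml_bytes):
    """Print each HOST element's load_one metric from the gmond XML tree."""
    root = ET.fromstring(xml_bytes)
    for host in root.iter("HOST"):
        metric = host.find("METRIC[@NAME='load_one']")
        if metric is not None:
            print("%-30s load_one=%s" % (host.get("NAME"), metric.get("VAL")))


if __name__ == "__main__":
    report_load(fetch_cluster_xml(GMOND_HOST, GMOND_PORT))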

Observations
– Political priorities are trumped by physical realities.
– ∑ N(TE) <= ∑ N(People).
– The CMS sysadmin ratio has been about 1:100, but that includes many servers and storage systems with unique configurations. (Google is at roughly 1:500; the Farms team is at about 1: but they mostly deal with physical problems – no higher-level issues, no research, not much grid – yet.)
– UAF, storage, and PG are separate knowledge areas that need attention.