UMD TIER-3 EXPERIENCES
Malina Kirn
October 23, 2008

What are your service needs?
- Basic cluster:
  - Submit Condor jobs to cluster (see the sketch below)
  - Submit CRAB jobs to grid
  - Run CMSSW
  - Download data registered in DBS (PhEDEx & srm client)
- Computing element:
  - Service CRAB production jobs
- Storage element:
  - Service all CRAB jobs
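
For the first item, a minimal Condor submit description file run from an interactive worker node might look like the sketch below; the script name run_cmssw.sh, its argument, and the output file names are placeholders of mine, not something taken from the slides.

# Write a submit file and hand it to the local Condor pool.
cat > cmssw_job.sub <<'EOF'
universe                = vanilla
executable              = run_cmssw.sh
arguments               = analysis_cfg.py
output                  = cmssw_job.out
error                   = cmssw_job.err
log                     = cmssw_job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
EOF
condor_submit cmssw_job.sub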

UMD cluster basics
- Configuration
  - 1 HN, 8 WNs, ~9 TB disk array
  - HN = Rocks HN, CE & SE (obviously not scalable)
  - WNs = 7 interactive WNs + 1 PhEDEx WN
  - Disk array: RAID-6, xfs, logical volume, network mounted from HN (direct-attached storage); see the sketch below
- Cluster management: Rocks
  - Free, with software rolls such as Ganglia, Condor
  - "Clean reinstall" model for WN management
- Network
  - All nodes have internal and external network connections
  - Scalable, but some view as risky
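
A rough sketch of how a direct-attached array can be carved into a logical volume, formatted with xfs, and network mounted from the head node onto the worker nodes. The device name /dev/sdb, the /data mount point, the host name headnode, and the 10.1.0.0/16 private network are assumptions for illustration only.

# On the head node: build a logical volume on the RAID array and format it.
pvcreate /dev/sdb
vgcreate datavg /dev/sdb
lvcreate -l 100%FREE -n data datavg
mkfs.xfs /dev/datavg/data
mkdir -p /data
mount /dev/datavg/data /data

# Export it over the private cluster network...
echo '/data 10.1.0.0/255.255.0.0(rw,async,no_root_squash)' >> /etc/exports
exportfs -ra

# ...and mount it on each worker node.
echo 'headnode:/data  /data  nfs  rw,hard,intr  0 0' >> /etc/fstab
mount /data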

gLite-UI & CRAB  gLite-UI (EDG utils) somewhat necessary for CRAB  CRAB now offers CrabServer, which does not have to be installed at your site (direct users to set server_name=bari in crab.cfg)CrabServer  gLite-UI cannot be installed on a Rocks HN, probably not on OSG CE or SE  gLite-UI configuration is a challenge, work from example (not the template) example  Links:  gLite-UI: 1, 2, 3,  YAIM YAIM  CRAB CRAB October 23, UMD T3 experiences

CMSSW
- CMSSW versions can be automatically installed and removed via OSG utilities
  - 'Production releases' of CMSSW
  - Bockjoo Kim
- Alternatively, manually install, create a link named /cmssoft/cms & edit /etc/grid3-locations.txt (see the sketch below)
- Frontier DB queries require a Squid web proxy
- Support for CRAB jobs requires site-local-config.xml & storage.xml (examples)
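
A hedged sketch of the manual-install bookkeeping from the bullet above. The install path /software/cmssw and the release CMSSW_2_1_8 are placeholders, and the three-field tag line is my assumption about the grid3-locations.txt layout; check the OSG/CMS documentation for your release before relying on it.

# Point the conventional /cmssoft/cms link at the hand-installed area.
ln -s /software/cmssw /cmssoft/cms

# Advertise the installed release so grid jobs can find it; the exact tag
# format below is an assumption -- verify it against the OSG documentation.
echo 'VO-cms-CMSSW_2_1_8 CMSSW_2_1_8 /software/cmssw' >> /etc/grid3-locations.txt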

Site planning
[slide content not captured in the transcript]

BeStMan storage element (SE)
- Lightweight, easy to install, configure, and use
- Will manage files for you or provide a gateway to your existing file system
- OSG also supports BeStMan on top of XrootdFS (requires two additional nodes, minimum)
- OSG guide for BeStMan on XrootdFS is coming out; OSG guide for just BeStMan (you will want to set your own configuration options)
- Getting it to work with the FNAL srm-client requires using special tags in calls or editing $SRM_CONFIG (see the sketch below):
  - webservice_path=srm/v2/server (.wsdl?)
  - access_latency=ONLINE
  - pushmode=true
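
The same three settings can be passed directly on the FNAL srm-client command line instead of editing $SRM_CONFIG. The sketch below assumes the srmcp client, an SE host se.example.edu on port 8443, and placeholder file paths; the -2 flag (select SRM v2) is my addition rather than something from the slide.

# Copy a local file to the BeStMan SE, overriding the relevant client options.
srmcp -2 \
      -webservice_path=srm/v2/server \
      -access_latency=ONLINE \
      -pushmode=true \
      file:////tmp/test.root \
      srm://se.example.edu:8443/data/store/user/test.root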

Monitoring
- Ganglia for cluster monitoring, comes with Rocks
- RSV for OSG monitoring, comes with OSG
- We don't use SAM
  - Tests CMS-specific details, very nice! We use CRAB.
  - Enables participation in official production
  - SAM tests for BeStMan SEs under development

PhEDEx
- You will probably want a "PhEDEx node" in addition to your OSG CE & SE node(s)
- Transfer public DBS data to or from your site
  - To site: does not require SE
  - To site & shown as host in DBS: requires SE
  - From site: requires dCache SE or a special PhEDEx client just for you at the receiving site
- PhEDEx can run atop gLite-UI
  - gLite-UI required for advanced protocols
  - Otherwise uses srm
- Also requires storage.xml, which can be different from CMSSW's storage.xml (see the sketch below)
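
A sketch of what a minimal storage.xml (CMS trivial file catalog) for the PhEDEx node could look like, mapping the LFN namespace onto the local disk array and onto the BeStMan SRM endpoint. The host, port, and /data path are placeholders, and the rule syntax is written from memory of the TFC convention, so verify it against the CMS/PhEDEx documentation.

# Drop a minimal trivial file catalog next to the PhEDEx configuration.
cat > storage.xml <<'EOF'
<storage-mapping>
  <!-- map LFNs onto the locally mounted disk array -->
  <lfn-to-pfn protocol="direct" path-match="/+(.*)" result="/data/$1"/>
  <!-- expose the same namespace through the BeStMan SRM endpoint -->
  <lfn-to-pfn protocol="srmv2" chain="direct" path-match="(.*)"
              result="srm://se.example.edu:8443/srm/v2/server?SFN=$1"/>
  <pfn-to-lfn protocol="direct" path-match="/data/(.*)" result="/$1"/>
</storage-mapping>
EOF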

Tricks
- Always back up your OSG installation before any upgrade! pacman allows easy rollback of software from backup.
  - Use cp -p: permissions in the OSG directory are important
  - Use soft links on your first install, then you can move it around for upgrades and fixes (see the sketch below)
- Set shell for grid users to /bin/true
- Deter brute-force ssh attacks (we use DenyHosts)
- Keep a detailed log
- Write a user guide
- Train a backup admin
- OSG CMS Tier-3 hypernews to get help
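
A short sketch of the backup, soft-link, and no-login-shell tricks above; /opt/osg-1.0, the /opt/osg link, and the cmsuser001 account are illustrative names of mine, not taken from the slides.

# Back up the OSG installation (preserving permissions) before an upgrade.
cp -rp /opt/osg-1.0 /opt/osg-1.0.bak

# Install against a soft link so the real directory can be swapped later.
ln -s /opt/osg-1.0 /opt/osg

# Give pool grid accounts a no-login shell.
usermod -s /bin/true cmsuser001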