Patrick Gartung 1 CMS 101 Mar 2007 Introduction to the User Analysis Facility (UAF) Patrick Gartung - Fermilab

Patrick Gartung 2 CMS 101 Mar 2007 What is the UAF?
● The User Analysis Farm (UAF) is part of the Tier 1 facility at Fermilab.
● The UAF is a farm running the Linux operating system providing interactive computing for users.
● Separate interactive and batch computing:
  ● Pool of interactive nodes
    ● Network switch provides "round robin" access to 20 nodes through the alias cmsuaf.fnal.gov
    ● 5 direct access nodes cmswn051 -> cmswn055.fnal.gov
  ● Batch farm (shared with the production grid) running Condor (see the status commands sketched below)
    ● 600+ nodes
    ● batch slots
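To get a quick look at the batch farm described above from an interactive node, the standard Condor query tools can be used. This is only a sketch, assuming the Condor client tools are on your PATH on the UAF; the exact pool configuration is not taken from the slide.

# summary of the machines and batch slots in the Condor pool
condor_status -total
# list your own jobs currently in the queue
condor_q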

Patrick Gartung 3 CMS 101 Mar 2007 Benefits of using the UAF
● Reference platform.
● Interactive system.
● CMS software is available and maintained.
● System administrators keep the system up.
  ● Thursday morning is the slot for system maintenance and upgrades.
● Fast access to mass storage, plenty of disk space.
● Fast networking (Gigabit Ethernet).
● Plenty of CPU to do some production projects.
  ● Condor batch system to automate these projects (a minimal submit sketch follows below).
● Data samples.
● Basically, the UAF provides all the things you don't want to do or worry about yourself!
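As a concrete illustration of the Condor item above, a minimal job submission might look like the following sketch. The file names, the script run_analysis.sh, and the vanilla universe choice are assumptions for illustration, not the official LPC recipe.

# contents of a minimal submit description file, e.g. myjob.sub (names are hypothetical)
universe   = vanilla
executable = run_analysis.sh
output     = myjob.out
error      = myjob.err
log        = myjob.log
queue

# submit it to the UAF batch farm and watch the queue
condor_submit myjob.sub
condor_q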

Patrick Gartung 4 CMS 101 Mar 2007 Configuration of User Computing as of Mar 2007
(Diagram slide showing the UAF interactive nodes (cmswn), the user disk areas, and the dCache/Enstore mass storage connected over Gigabit Ethernet; labels include kinit and cmsuaf.fnal.gov.)
● Enstore: Powderhorn STK silo; drives: 9 x 9940B, speed 30 MB/s, capacity 200 GB/cartridge
● dCache (a dccp sketch follows below):
  ● >600 TB read pools (backed up to tape)
  ● >10 TB write pools (backed up to tape, can also read from them)
  ● >100 TB resilient pools (no tape backup, multiple copies, not deleted)
  ● namespace paths: /pnfs/cms/WAX and /pnfs/cms/WAX/resilient
● >700 registered users
● User disk areas: /uscms/home/, /uscms_data/d1/, /uscmst1b_scratch/lpc1/
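Files in the dCache pools above are visible through the /pnfs namespace and are usually copied to local disk with the dCache dccp client. The sketch below assumes dccp is available on the UAF nodes; the dataset path and destination directory are hypothetical placeholders.

# browse the dCache namespace (everything below /pnfs/cms/WAX here is a placeholder)
ls /pnfs/cms/WAX
# copy a single file out of dCache to the data area (hypothetical paths)
dccp /pnfs/cms/WAX/somedir/somefile.root /uscms_data/d1/yourusername/somefile.root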

Patrick Gartung 5 CMS 101 Mar 2007 Getting Help
● USCMS User Computing web pages: on the left select "Software & Computing", then on the left select "User Computing"
● LPC Helpdesk (my primary job): Wilson Hall 11th floor crossover
● Computing Division Helpdesk: Wilson Hall ground floor, across from the credit union

Patrick Gartung 6 CMS 101 Mar 2007 Obtaining a CMS account at FNAL
● For onsite visitors
  ● Apply for a Fermilab ID if you do not have one
  ● Apply for a CMS UAF account using this webform
● For offsite users
  ● Follow the directions and links on this webpage

Patrick Gartung 7 CMS 101 Mar 2007 Connecting to the UAF
● Get an addressless and forwardable Kerberos ticket for the FNAL.GOV realm (see the example session below):
  ● kinit -n (on SLF3 & SLF4)
  ● kinit -A (on OS X & SL3 & SL4)
● Connect to the UAF OpenSSH servers (also 52, 53, 54, 55)
  ● Use these if you get an AFS error when logging into cmsuaf.fnal.gov
● If you use an ssh client that does not support Kerberos authentication, you will be presented with a CryptoCard challenge
  ● CryptoCards are available from the CD Helpdesk for pickup or by mail
● For more information, including how to connect from a Windows PC, see this webpage
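Putting the steps on this slide together, a login from a Linux desktop might look like the sketch below; yourusername is a placeholder, and the kinit flag should be the one listed above for your operating system.

# get an addressless, forwardable ticket in the FNAL.GOV realm (flag as listed on the slide)
kinit -A yourusername@FNAL.GOV
# log into the round-robin alias (or one of cmswn051-055 directly if you hit AFS errors)
ssh yourusername@cmsuaf.fnal.gov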

Patrick Gartung 8 CMS 101 Mar 2007 General Setup of runtime environment
● Use "source /uscmst1/prod/sw/cms/cshrc uaf". This sets SCRAM_ARCH=slc3_ia32_gcc323 and the path to the scramv1 command.
● Then you can run commands like (a full example session is sketched below):
  ● scramv1 project CMSSW CMSSW_1_2_3; cd CMSSW_1_2_3; eval `scramv1 runtime -csh`; scramv1 build
● To access SL4 releases use "setenv SCRAM_ARCH slc4_ia32_gcc345" (csh) or "export SCRAM_ARCH=slc4_ia32_gcc345" (sh).
● Use "source /uscmst1/prod/sw/cms/cshrc nightly" to get the debug builds of the latest prereleases, built each night.
● Use "source /uscmst1/prod/sw/cms/cshrc sl4" to get the SL4 builds, built each night.
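A first working session following the commands on this slide might look like the sketch below (csh syntax; the release CMSSW_1_2_3 is just the example used above).

# set up the UAF CMS environment; sets SCRAM_ARCH=slc3_ia32_gcc323 and puts scramv1 on the path
source /uscmst1/prod/sw/cms/cshrc uaf
# create a CMSSW project area and enter it
scramv1 project CMSSW CMSSW_1_2_3
cd CMSSW_1_2_3
# set up the runtime environment for this release, then build
eval `scramv1 runtime -csh`
scramv1 build
# to work with SL4 releases instead, set the architecture first (csh):
# setenv SCRAM_ARCH slc4_ia32_gcc345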

Patrick Gartung 9 CMS 101 Mar 2007 Where to put your files
● Home area (with backup): /uscms/home/ - 2 GB quota, nightly backup
● Data area (no backup): /uscms_data/d1/ - 10 GB quota, no backup
● Scratch area: /uscmst1b_scratch/lpc1/ - 3 day lifetime
● AFS directory: /afs/fnal.gov/home/room(1,2,3)/ - 500 MB quota, nightly backup; ~/public_html appears as your personal web page
(See the sketch below for which area to use for what.)
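In practice the areas above split naturally into code, data, and temporary space. The sketch below assumes a per-user subdirectory under each area (yourusername is a placeholder; the exact layout is not spelled out on the slide).

# code and small important files: backed-up home area (2 GB quota)
cd /uscms/home/yourusername
# large job output and ntuples: data area (10 GB quota, no backup); layout assumed
mkdir -p /uscms_data/d1/yourusername/myanalysis
# temporary files only: scratch area (files have roughly a 3 day lifetime)
mkdir -p /uscmst1b_scratch/lpc1/yourusername/tmp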