US ATLAS Western Tier 2 Status Report Wei Yang Nov. 30, 2007 US ATLAS Tier 2 and Tier 3 workshop at SLAC

[Table, SITE: PROD_SLAC — per-cluster CPU inventory with columns Version, GHz, Cores, SPECint2000, SPECcfp2000, Description, Peak Value, cluster name. Clusters: noma (Pentium III 1400 MHz), don / cob / yili / boer (Dual Core AMD Opteron), tori (Xeon 2.66 GHz), fell (Intel Xeon); the per-cluster numeric values did not survive transcription. Totals: 1,012 cores, SPECint2000 1,746,812, SPECcfp2000 648,520. ATLAS has ~32% fair share.]

CPU resource
- Current (FY07 funds)
  - 312 cores of AMD Opteron 2218
  - 2 GB / core
- Procurement with FY08 program funds
  - 320 cores of Intel Xeon X5355 (40 units)
  - 2 GB / core
  - 38% of program funds

Storage
- Current
  - 72 TB raw / 51 TB usable (3 Thumpers)
- FY08 procurement
  - 58% of program funds
  - Thumpers: > 200 TB usable based on old pricing
  - Negotiating price
- Xrootd on Solaris / ZFS
- GridFTP for Xrootd
- Composite name space and XrootdFS
- Run SRM on top of XrootdFS (see the sketch after this list)
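The last four bullets describe a layering: the xrootd cluster's composite name space is exported as a POSIX file system via XrootdFS, and SRM (and GridFTP) then simply operate on that mount. A minimal sketch of that idea follows; the mount point /xrootdfs and the /xrootdfs/atlas directory are hypothetical names used only for illustration, not paths from the slides.

```python
#!/usr/bin/env python
"""Minimal sketch: sanity-check the XrootdFS POSIX view that SRM would use.

Assumptions (not from the slides): the composite name space is FUSE-mounted
at /xrootdfs, and /xrootdfs/atlas is a directory SRM is expected to serve.
"""
import os

MOUNT_POINT = "/xrootdfs"                        # hypothetical XrootdFS mount point
TEST_DIR = os.path.join(MOUNT_POINT, "atlas")    # hypothetical area served by SRM

def check_posix_view(path):
    """Confirm the path looks like a usable POSIX directory for SRM/GridFTP."""
    if not os.path.ismount(MOUNT_POINT):
        return "%s is not mounted" % MOUNT_POINT
    if not os.path.isdir(path):
        return "%s is not a directory" % path
    # Listing the directory exercises the composite name space lookup.
    entries = os.listdir(path)
    return "OK: %d entries visible under %s" % (len(entries), path)

if __name__ == "__main__":
    print(check_posix_view(TEST_DIR))
```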

Procurement for other experiments
- 224 cores of AMD Opteron 2218
- 104 cores of Intel Xeon X5355
- 2 GB / core
- In general queues, accessible by ATLAS
- SUN Black Box #2
- Intel Xeon X5335
- CPU nodes ordered from Dell

Software infrastructure
- RHEL 3, 32-bit, on old batch nodes
- RHEL 4, 64-bit, on newer batch nodes
- LSF 6.1 with fair share
  - How to implement an analysis queue in a fair-share environment? (see the sketch after this list)
- Nagios
- Ganglia
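One way to frame the open question above: in a generic fair-share scheduler, a group's dynamic priority falls as its recent usage grows, so giving analysis its own (even small) share group keeps short analysis jobs schedulable while production soaks up most of the cluster. The sketch below illustrates this with a generic priority formula and made-up share and usage numbers; it is not LSF's exact algorithm or configuration.

```python
"""Illustrative sketch (not LSF's exact algorithm): a generic fair-share
dynamic priority, priority = shares / (1 + decayed_usage). The share and
usage values below are hypothetical."""

def dynamic_priority(shares, decayed_cpu_hours):
    # Higher shares raise priority; recent usage lowers it.
    return shares / (1.0 + decayed_cpu_hours)

groups = {
    # group:            (shares, decayed CPU hours of recent usage) -- made up
    "atlas-production": (70, 5000.0),
    "atlas-analysis":   (30, 50.0),
}

for name, (shares, usage) in sorted(groups.items()):
    print("%-18s priority = %.4f" % (name, dynamic_priority(shares, usage)))

# Despite holding fewer shares, the lightly-used analysis group ends up with
# the higher priority, so its jobs start promptly inside the fair-share tree.
```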

Grid Middleware and DDM
- OSG 0.8
- Gatekeeper: needs customization of the Globus LSF module
- GridFTP, and GridFTP for Xrootd
- GUMS 1.2: one-to-one mapping between DNs and local accounts
- RSV and Gratia: had to patch the Gratia code to make it report correctly
- DQ2 0.4, will upgrade to 0.4.1

Networking
- Tuning significantly improved the iperf performance measurements (see the sketch after this list)
  - Increased TCP buffers
  - Enabled hardware transmission functions
- Cannot achieve stable measurements, probably due to competing traffic
- WAN is still 1 Gb/s / 2 Gb/s
- 10 Gb/s upgrade plan pushed back to January
- Plan to evaluate TeraPath and QoS
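For context on the TCP buffer bullet: the real tuning is done with kernel sysctls on the transfer hosts, but the same knob is visible at the socket level. The sketch below shows that socket-level equivalent; the 8 MB buffer size is an assumed value for illustration only.

```python
"""Minimal sketch of the kind of TCP buffer tuning behind the iperf numbers.
The 8 MB buffer size is an assumption; on the real hosts the limits come from
kernel sysctl settings (e.g. net.ipv4.tcp_rmem / tcp_wmem), not application code."""
import socket

BUF_BYTES = 8 * 1024 * 1024   # assumed target buffer size

def make_tuned_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request larger send/receive buffers; the kernel caps these at the
    # limits set by the sysctl tuning mentioned on the slide.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
    return s

if __name__ == "__main__":
    s = make_tuned_socket()
    print("SO_SNDBUF granted: %d" % s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("SO_RCVBUF granted: %d" % s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()
```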

Networking, cont'd
- Disk-to-disk:
  - 2-3 MB/s with 1 stream
  - 15 MB/s with 12 streams
  - Goal: 300 MB/s (see the sketch after this list)
- SRM on XrootdFS + multiple GridFTP-for-Xrootd servers
- One GridFTP server with multiple-NIC channel bonding
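A rough bandwidth-delay argument makes the stream counts plausible: per-stream TCP throughput is bounded by window / RTT, which is why a single stream delivers only a few MB/s and parallel streams scale it up. The sketch below works through that bound with an assumed 70 ms round-trip time and a 256 KB effective window; neither number comes from the slides.

```python
"""Back-of-the-envelope sketch for the 300 MB/s goal. The RTT and window
values below are assumptions for illustration, not measurements."""

def per_stream_limit_mb_s(window_bytes, rtt_s):
    # Per-stream TCP throughput ceiling: window size divided by round-trip time.
    return window_bytes / rtt_s / 1e6

RTT_S = 0.070                 # assumed ~70 ms WAN round-trip time
WINDOW_BYTES = 256 * 1024     # assumed ~256 KB effective TCP window

limit = per_stream_limit_mb_s(WINDOW_BYTES, RTT_S)
print("Per-stream ceiling: %.1f MB/s" % limit)                  # ~3.7 MB/s
print("Streams needed for 300 MB/s: %d" % round(300 / limit))   # ~80 streams

# Larger windows (bigger TCP buffers) or many parallel GridFTP streams are the
# two knobs; the plan on the slide combines both, plus multiple servers.
```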

ATLAS production at SLAC
- Production runs smoothly
- Gatekeeper can handle the jobs easily
- Utilization is low
- Regional physicists are using WT2 spare cycles
- Much less (useless) debugging at the Tier 2 level after moving to PandaMover

September is both a good month and a bad month for WT2