Southwest Tier 2

Southwest Tier 2
- Computing facility used by the ATLAS experiment
- Funded by the National Science Foundation as a subcontract to the U.S. ATLAS Operations program
- A consortium of the University of Texas at Arlington (UTA), the University of Oklahoma (OU), and Langston University (LU)
- One of four Tier 2 facilities in the United States of America
- There are forty Tier 2 facilities worldwide

Large Hadron Collider
- The LHC is a ~17-mile ring straddling the border between France and Switzerland
- Accelerates beams of protons to nearly the speed of light (0.99999c)
- Collides the beams at specific locations (including ATLAS) where data are recorded
- The current Run II provides collisions at 13 TeV

ATLAS Experiment
- The size of a seven-story building
- 90+ million data channels
- 40 million events occur per second
- 500-600 events per second are recorded
- 4 megabytes per event
- 2.5 GB/s stored to tape
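
The recording rate and event size above imply the quoted tape-writing rate; a minimal arithmetic sketch, using only the numbers taken from this slide:

```python
# Back-of-the-envelope check of the ATLAS data rates quoted above.
EVENT_SIZE_MB = 4              # megabytes per recorded event (from the slide)
RECORD_RATES_HZ = (500, 600)   # events recorded per second (from the slide)

for rate in RECORD_RATES_HZ:
    throughput_gb_s = rate * EVENT_SIZE_MB / 1000   # MB/s -> GB/s
    print(f"{rate} Hz x {EVENT_SIZE_MB} MB/event = {throughput_gb_s:.1f} GB/s to tape")
```

The result, 2.0-2.4 GB/s, is consistent with the ~2.5 GB/s quoted above.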

ATLAS Computing Model
- Tiered model with Tiers 0, 1, 2, and 3
- Tier 0 is the central facility: CERN
- Tier 1 is a national facility: Brookhaven National Laboratory (BNL) for the USA
- Tier 2 is a regional facility; the USA Tier 2s are:
  - Northeast Tier 2
  - ATLAS Great Lakes Tier 2
  - Midwest Tier 2
  - Southwest Tier 2
- Tier 3 is an institutional facility: a university or group cluster
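
Purely as an illustration, the hierarchy above can be written down as a small data structure; the sites listed are only those named on this slide, not a complete ATLAS site list:

```python
# Sketch of the ATLAS tiered computing model as named on this slide.
ATLAS_TIERS = {
    "Tier 0": ["CERN"],                                  # central facility
    "Tier 1": ["Brookhaven National Laboratory (BNL)"],  # national facility (USA)
    "Tier 2": [                                          # regional facilities (USA)
        "Northeast Tier 2",
        "ATLAS Great Lakes Tier 2",
        "Midwest Tier 2",
        "Southwest Tier 2",
    ],
    "Tier 3": ["University or group clusters"],          # institutional facilities
}

for tier, sites in ATLAS_TIERS.items():
    print(f"{tier}: {', '.join(sites)}")
```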

ATLAS Tier 2 Roles
- Provides CPU cycles for:
  - User analysis
  - Monte Carlo simulations of events
  - Reprocessing exercises
- Provides storage for:
  - Physics data, from recorded collisions and from simulated collisions
  - User analysis results

Southwest Tier 2 Resources
- UTA
  - UTA_SWT2: dedicated to simulations; 2360 job slots, 130 TB of storage
  - SWT2_CPB: performs all Tier 2 processing jobs; 8736 job slots, 5.5 PB of storage
- OU
  - OU_OCHEP: performs all Tier 2 processing jobs; 844 job slots
  - OU_OSCER: campus cluster providing additional simulation resources; 1800 job slots, 500 TB of storage
- Langston University
  - LUCILLE: NSF MRI-funded cluster; 960 job slots, 110 TB of storage
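
A minimal sketch that totals the per-cluster figures listed above; every number is copied from this slide, the totals are simple sums:

```python
# Aggregate the SWT2 cluster resources listed on this slide.
clusters = {
    # name: (job slots, storage in TB; None where the slide gives no figure)
    "UTA_SWT2": (2360, 130),
    "SWT2_CPB": (8736, 5500),   # 5.5 PB expressed in TB
    "OU_OCHEP": (844, None),
    "OU_OSCER": (1800, 500),
    "LUCILLE":  (960, 110),
}

total_slots = sum(slots for slots, _ in clusters.values())
total_tb = sum(tb for _, tb in clusters.values() if tb is not None)
print(f"Total job slots: {total_slots}")                  # 14700
print(f"Total listed storage: {total_tb / 1000:.2f} PB")  # 6.24 PB
```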

Tier 2 Contributions Within ATLAS
- This chart shows the cumulative CPU-seconds delivered by the Tier 2s for successful processing jobs during the year (1/1 through 10/31)
- SWT2 is the second largest contributor in ATLAS!

SWT2_CPB
- Supports all ATLAS activities
- Currently deployed resources:
  - 20 Gbps external network connection
  - 439 compute nodes: 4368 cores / 8736 job slots, with at least 2 GB RAM per job slot
  - 32 storage servers providing 5.5 PB usable storage
  - 1 NFS server
  - 1 admin server
  - 14 grid/service nodes
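
A quick consistency check on the SWT2_CPB figures above; the two hardware threads per core come from the compute node slide later in the deck, and the 2 GB per slot minimum is from this slide:

```python
# Consistency check on the SWT2_CPB figures quoted above.
CORES = 4368
THREADS_PER_CORE = 2      # two hardware threads per core (see the compute slide)
MIN_RAM_PER_SLOT_GB = 2   # minimum RAM guaranteed per job slot (from this slide)

job_slots = CORES * THREADS_PER_CORE
min_total_ram_tb = job_slots * MIN_RAM_PER_SLOT_GB / 1024
print(f"Job slots: {job_slots}")                    # 8736, as quoted above
print(f"Implied minimum aggregate RAM: {min_total_ram_tb:.1f} TB")
```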

Grid Services
- Compute Element: GLOBUS-based access to the local batch system for submitting work
- Storage Element: Storage Resource Manager (SRM) access to storage using two GridFTP servers
- Root door: alternative access to storage
- Squid caching services:
  - CVMFS (distributed file system for the experiment code base)
  - Calibration data
- Monitoring services: NAGIOS and custom scripts
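
As an illustration of how two of these services are typically exercised from a worker node, the sketch below checks that the ATLAS CVMFS repository is mounted and that a Squid proxy answers on its conventional port. The host name squid.example.org is a placeholder, not the actual SWT2 configuration:

```python
# Hypothetical health check for two of the services named above:
# the ATLAS CVMFS repository and a local Squid cache.
import os
import socket

CVMFS_REPO = "/cvmfs/atlas.cern.ch"   # standard ATLAS CVMFS mount point
SQUID_HOST = "squid.example.org"      # placeholder; not the real SWT2 host
SQUID_PORT = 3128                     # Squid's conventional proxy port

def cvmfs_mounted(path: str) -> bool:
    """True if the CVMFS repository appears mounted and non-empty."""
    return os.path.isdir(path) and bool(os.listdir(path))

def squid_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to the Squid proxy succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("CVMFS mounted:", cvmfs_mounted(CVMFS_REPO))
    print("Squid reachable:", squid_reachable(SQUID_HOST, SQUID_PORT))
```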

Topology

Compute
Dell R630:
- CPU: 2 x Xeon E5-2640 v4, 2.4 GHz (Broadwell), 10 cores / 20 threads per CPU
- RAM: 128 GB DDR4, 2.133 GHz
- Storage: 2 x 800 GB SSD (RAID-0)
- Network: 2 x 10 Gbps (SFP+), 2 x 1 Gbps (Ethernet)

Storage
Dell MD3460 shelf:
- 60 x 8 TB SATA disks (4 x RAID 6 disk groups => 4 x 104 TB exported as SAS drives)
- 4 x 12 Gbps SAS connections
Dell R730:
- CPU: 2 x Xeon E5-2680 v4, 2.8 GHz (Broadwell), 14 cores / 28 threads per CPU
- RAM: 256 GB DDR4, 2.133 GHz
- Network: 4 x 10 Gbps (SFP+), 2 x 1 Gbps (Ethernet)
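
The exported capacity follows directly from the RAID 6 layout: 60 drives split into 4 groups gives 15 drives per group, RAID 6 reserves 2 of them for parity, and the remaining 13 x 8 TB yields 104 TB per group. A minimal check of that arithmetic:

```python
# Capacity check for the MD3460 shelf described above.
DRIVES = 60
DRIVE_TB = 8
RAID6_GROUPS = 4
PARITY_DRIVES_PER_GROUP = 2   # RAID 6 tolerates two drive failures per group

drives_per_group = DRIVES // RAID6_GROUPS                                    # 15
usable_per_group = (drives_per_group - PARITY_DRIVES_PER_GROUP) * DRIVE_TB   # 104 TB
print(f"{RAID6_GROUPS} x {usable_per_group} TB exported "
      f"({RAID6_GROUPS * usable_per_group} TB total)")
```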

Networking
- Core: 2 x Dell S6000 (LAG), 32 x 40 Gbps (QSFP+)
- Top of rack: Dell N4032F, 24 x 10 Gbps (SFP+), 2 x 40 Gbps (QSFP+)
- Legacy: Dell N2048, 48 x 1 Gbps (Ethernet), 2 x 10 Gbps (SFP+)

Topology (revisited)

Conclusions
- The Southwest Tier 2 continues to be a valuable resource for the ATLAS experiment
- We are helping to enable the discovery of new physics at the LHC
- Our contributions will grow as we add more resources
- For more information visit www.atlas-swt2.org