Report from US ALICE
Yves Schutz, WLCG, 24/01/2007

WLCG 2: US Tier-1-capable facilities for ALICE

WLCG 3: Capacities and operational support

LBL - NERSC/PDSF
- DOE proposal for ~500 CPUs for ALICE
- HPSS MSS has 22 PB current capacity
- Connected to ESnet via a 10 Gb/s network
- 24/7 operations support in place

LLNL - LC/Serial cluster
- ALICE to run on a subset of ~640 CPUs
- HPSS MSS has more than a petabyte of current capacity
- Connected to ESnet via a 10 Gb/s network (same as NERSC)
- 24/7 operations support in place

OSC - Itanium and Xeon clusters
- ALICE to run on a subset of ~200 CPUs leveraged against an NSF proposal
- HPSS MSS available; tape procurement proposal submitted to NSF
- 24/7 operations support in place

Houston TLC2 - Itanium clusters
- ALICE to run on a subset of ~200 CPUs from the NSF proposal
- HPSS MSS available
- 24/7 operations support in place
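For readability, here is a minimal Python sketch of the capability checklist quoted above. The dictionary layout, field names, and the t1_capable() helper are my own illustrative constructs; only the numbers and attributes come from the slide, and None marks items the slide does not state.

```python
# Tier-1 capability checklist as quoted on this slide. The dict layout, field
# names, and the t1_capable() helper are illustrative; only the numbers and
# attributes come from the slide (None = not stated there).
us_sites = {
    "LBL NERSC/PDSF": {"cpus_for_alice": 500, "mss": "HPSS, 22 PB", "esnet_10g": True, "support_24x7": True},
    "LLNL LC/Serial": {"cpus_for_alice": 640, "mss": "HPSS, >1 PB", "esnet_10g": True, "support_24x7": True},
    "OSC": {"cpus_for_alice": 200, "mss": "HPSS, tape proposal to NSF", "esnet_10g": None, "support_24x7": True},
    "Houston TLC2": {"cpus_for_alice": 200, "mss": "HPSS available", "esnet_10g": None, "support_24x7": True},
}

def t1_capable(site):
    """Rough reading of the T1 criteria used here: MSS, fast network, 24/7 support."""
    return bool(site["mss"]) and site["esnet_10g"] is not False and site["support_24x7"]

total = sum(s["cpus_for_alice"] for s in us_sites.values())
print(f"CPUs potentially available to ALICE across US sites: ~{total}")
for name, site in us_sites.items():
    print(f"  {name:15s} ~{site['cpus_for_alice']:3d} CPUs  T1-capable: {t1_capable(site)}")
```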

WLCG 4: ALICE operation

Sites
- OSC and Houston TLC2 integrated and running the ALICE Grid Data Challenges since 2004
- LBL started operation last October
- LLNL in preparation

2006 CPU/storage capacity (totals tallied in the sketch after this slide)
- ~90 ia64 CPUs at Houston/TLC2, 10 TB disk
- ~50 ia32 CPUs at OSC, 35 TB disk, 35 TB MSS
- ~40 ia32 CPUs at LBL, disk and tape storage being organized

Middleware
- Currently running with AliEn
- Submitted a proposal to NSF to develop AliEn-OSG interfaces
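As a quick sanity check on the 2006 figures, the following sketch tallies the deployed capacity quoted above. The values are transcribed from the slide; None marks figures the slide does not give, and the field names are my own.

```python
# 2006 deployed capacity as quoted on this slide (approximate values).
# None means the slide gives no figure (LBL storage was still being organised).
deployed_2006 = [
    {"site": "Houston/TLC2", "cpus": 90, "arch": "ia64", "disk_tb": 10,   "mss_tb": None},
    {"site": "OSC",          "cpus": 50, "arch": "ia32", "disk_tb": 35,   "mss_tb": 35},
    {"site": "LBL",          "cpus": 40, "arch": "ia32", "disk_tb": None, "mss_tb": None},
]

total_cpus = sum(s["cpus"] for s in deployed_2006)
total_disk = sum(s["disk_tb"] for s in deployed_2006 if s["disk_tb"] is not None)
total_mss  = sum(s["mss_tb"]  for s in deployed_2006 if s["mss_tb"]  is not None)

print(f"2006 US deployment: ~{total_cpus} CPUs, ~{total_disk} TB disk, ~{total_mss} TB MSS")
# -> 2006 US deployment: ~180 CPUs, ~45 TB disk, ~35 TB MSS
```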

WLCG 5: Resources

- US sites are providing ~7% of the total CPU capacity for ALICE
- This proportion will be maintained in the future

[Figure: relative contribution of US sites in PDC'06 (7% of total)]
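For context on how a share like the ~7% quoted above is obtained from Grid accounting, a hedged sketch follows: each site's share is its delivered work divided by the collaboration-wide total. The job counts below are placeholder values chosen only so the arithmetic is visible, not actual PDC'06 data.

```python
# Deriving a contribution share from per-site accounting. The numbers below are
# placeholders for illustration only, NOT actual PDC'06 accounting data.
us_done_jobs = {"Houston TLC2": 30_000, "OSC": 25_000, "LBL": 15_000}  # hypothetical
alice_total_done_jobs = 1_000_000                                      # hypothetical

us_share = sum(us_done_jobs.values()) / alice_total_done_jobs
print(f"US contribution: {us_share:.1%} of ALICE jobs")  # -> 7.0%

for site, jobs in us_done_jobs.items():
    print(f"  {site:13s} {jobs / alice_total_done_jobs:.1%}")
```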

WLCG 6: Conclusions

- Four US computing centres are contributing resources to ALICE Grid computing
- The amount of resources will increase in line with ALICE requirements and in proportion to the US groups' participation in the collaboration
- All four have T1 capability (MSS, network, support), which is especially important given the small number of T1s serving ALICE in Europe
- All sites (LLNL ongoing) are incorporated into the ALICE Grid and are participating in the Grid data challenges
- Relations with and operational support from the centres are excellent
- The development programme to build an interface between the ALICE services and OSG has not yet started
- Potential T2s connecting: Mexico, Brazil, …