Spending Plans and Schedule
Jae Yu
July 26, 2002

Slide 2: Introduction
- We (UTA) were successful in persuading the DØ collaboration to adopt the DØ Regional Analysis Center scheme:
  – A proposal has been completed
  – The collaboration has had a chance to review it
  – The worldwide finance committee has received the scheme well and is enthusiastic about the proposal
- A significant amount of computing research is necessary to implement and deploy the Grid within DØ
- The MRI gives UTA a significant head start toward becoming an ATLAS Tier II site
  – It will generate even more significant research toward a full-scale ATLAS Grid implementation

Slide 3: MRI 2002 Year 1 Spending Plan

Item | Budget | Equipment | Capacity
RAID Array | $124,… | … drives |
IDE Disks | $37,… | | … TB
Disk servers | $19,600 | 4 |
Gbit switches | $14,… | | … Gbit ports
Rack encl. | $9,720 | 4 |
Proc. PCs | $98,… | | … GHz

Slide 4: What do we construct in year 1?
[Diagram: storage racks, each with a disk/file server and a Gbit switch serving six RAID Array + IDE Disk units, plus a processing PC rack with its own Gbit switch and processing PCs.]
Total storage space: > 24.5 TB
Total processing CPU power: 91.2 GHz

Slide 5: Implementation Schedule & Plan, Year 1
Sept. 15, 2002: Purchase 10% of the equipment
Sept. 15 – Oct. 31, 2002:
- System installation and burn-in
- Software installation (both DØ and Grid)
  – Install DØ reconstruction and analysis software
  – Install the UTA DØ MC production software
- Establish data transfer from FNAL (see the sketch below)
- Execute analysis programs and begin MC production
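The "establish data transfer from FNAL" step would, in a Globus-based setup of that period, typically be exercised with GridFTP. Below is a minimal, hypothetical sketch assuming the Globus Toolkit's globus-url-copy client is installed and a valid grid proxy is in place; the endpoint hostname, file paths, and stream count are placeholders, not the actual DØ configuration.

```python
#!/usr/bin/env python
# Hypothetical sketch: pull one DØ data file from an FNAL GridFTP endpoint
# to local RAID storage with globus-url-copy. The hostname, paths, and the
# number of parallel streams are placeholders, not the real DØ setup.
import subprocess
import sys

SOURCE = "gsiftp://gridftp.fnal.gov/pnfs/d0/example_thumbnail.raw"  # placeholder URL
DEST = "file:///raid/d0data/example_thumbnail.raw"                  # placeholder path

def transfer(source, dest, streams=4):
    """Run a single GridFTP transfer using parallel TCP streams (-p)."""
    cmd = ["globus-url-copy", "-p", str(streams), source, dest]
    status = subprocess.call(cmd)
    if status != 0:
        sys.exit("transfer failed with exit code %d" % status)

if __name__ == "__main__":
    transfer(SOURCE, DEST)
```

In practice a call like this would sit inside whatever bookkeeping the MC production and data-handling tools require; the sketch only shows the shape of the transfer step itself.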

Slide 6: Year 1 Implementation (cont'd)
Oct. 31, 2002: Complete purchasing the remainder of the equipment
Oct. 31 – Dec. 31, 2002:
- System installation and burn-in
- Software installation (both DØ and Grid)
- Continuous transfer of DØ data to fill the disks
- Establish and implement a Grid-enabled batch control system
- Establish a user pool to become a DØRAC
- Start DØ reconstruction
Jan. 2003: DØRAC workshop

Slide 7: Necessary Vendors
- Fiber Channel IDE RAID array vendors
- IDE disk vendors
- Processing farm PC and disk/file server vendors
- Gbit switch vendors
- Rack enclosure vendors

Slide 8: Where do we want to be in the year 2005?
- A proposal to establish DØ Regional Analysis Centers has been written and submitted to the DØ collaboration
- We want to become a DØRAC with the following minimum capacity (the arithmetic behind the "additional" figures is checked below):
  – Disk storage capacity: > 110 TB (need an additional 85.5 TB)
  – Total processing CPU power: > 400 GHz (need an additional 310 GHz)
  – Network: minimum 1 Gbit/sec outside connection
  – Person power: 2 system support staff (need one more)
  – Tape storage: > 70 TB (we do not have this yet)
- Ultimately become an ATLAS Tier II site, building on this infrastructure
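For reference, the "need additional" figures quoted above follow directly from the year 1 totals on slide 4 (> 24.5 TB of disk and 91.2 GHz of CPU):

\[
110\ \mathrm{TB} - 24.5\ \mathrm{TB} = 85.5\ \mathrm{TB},
\qquad
400\ \mathrm{GHz} - 91.2\ \mathrm{GHz} = 308.8\ \mathrm{GHz} \approx 310\ \mathrm{GHz}.
\]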

Slide 9: MRI 2002 Year 2 Spending Plan

Item | Budget | Equipment | Capacity
RAID Array | $99,… | … drives |
IDE Disks | $37,… | | … TB
Disk servers | $4,900 | 1 |
Gbit switches | $22,… | | … Gbit ports
Rack encl. | $14,580 | 6 |
Proc. PCs | $98,… | | … GHz

Slide 10: MRI 2002 Year 3 Spending Plan

Item | Budget | Equipment | Capacity
RAID Array | $99,… | … drives |
IDE Disks | $32,… | | … TB
Disk servers | $9,800 | 2 |
Gbit switches | $11,… | | … Gbit ports
Rack encl. | $7,290 | 3 |
Proc. PCs | $87,… | | … GHz