International Workshop on HEP Data Grid, Nov 9, 2002, KNU: Data Storage, Network, Handling, and Clustering in the CDF Korea Group. Intae Yu*, Junghyun Kim, Ilsung Cho.

Presentation transcript:

Data Storage, Network, Handling, and Clustering in the CDF Korea Group
Intae Yu*, Junghyun Kim, Ilsung Cho (Sungkyunkwan University, SKKU)
Kihyeon Cho, Youngdo Oh (Center for High Energy Physics, CHEP, Kyungpook National University, KNU)
Bockjoo Kim (Seoul National University, SNU)
Jysoo Lee (KISTI, Supercomputing Center)
International Workshop on HEP Data Grid, Nov 9, 2002, KNU

CDF (Collider Detector at Fermilab)
- Proton (1 TeV) – Antiproton (1 TeV) collider experiment (~600 members)
- Collision event rate: 2.5 MHz
- Collision runs: Run IIa and Run IIb

CDF Run IIa
- Data characteristics
  - Event rate to tape: 300 Hz
  - Raw (compressed) data size: 250 (~100) KB/event
  - Number of events: ~ events/year
- Requirements for data analysis
  - 700 TB of disk and 5 THz of CPU, assuming 200 simultaneous users (a rough data-volume estimate follows below)
- The upgraded CAF (Central Analysis Facility) may not have enough CPU and disk storage to handle CDF data
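As a rough cross-check of these numbers, here is a minimal back-of-envelope sketch in Python; the ~1e7 seconds of effective data taking per year is an assumption, not a figure from the slides.

    # Back-of-envelope yearly data volume implied by the quoted tape rate and event size.
    # ASSUMPTION: ~1e7 seconds of effective data taking per year (not stated on the slide).
    event_rate_hz = 300          # events written to tape per second
    raw_size_kb = 250            # raw event size
    compressed_size_kb = 100     # compressed event size
    live_seconds_per_year = 1e7  # assumed effective live time

    events_per_year = event_rate_hz * live_seconds_per_year
    raw_tb = events_per_year * raw_size_kb * 1e3 / 1e12               # KB -> bytes -> TB
    compressed_tb = events_per_year * compressed_size_kb * 1e3 / 1e12

    print(f"events/year     ~ {events_per_year:.1e}")    # ~3e9
    print(f"raw data/year   ~ {raw_tb:.0f} TB")          # ~750 TB
    print(f"compressed/year ~ {compressed_tb:.0f} TB")   # ~300 TB

Both estimates land in the same few-hundred-TB range as the 700 TB disk requirement quoted above.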

CDF Grid
- In Run IIb, 6~7 times more data is expected (~3 PB/year)
- CDF needs distributed disk storage and computing systems among collaborators, connected via a fast network
- HEP DataGrid
  - The CDF Grid working group was formed in March 2002
  - Research on SAM (Sequential data Access via Metadata) with the D0 group
- CDF Korea group
  - DCAF (DeCentralized Analysis Farm) in Korea: KCAF (Dr. Cho's talk)
  - CHEP/KNU: KCAF (Tier 1); SNU, SKKU (Tier 2)

Network
- Network tests
  - CHEP - SNU, CHEP - SKKU: ~40 Mbps
  - CHEP - CDF: ~20 Mbps
  - Enough bandwidth for Grid research and tests (a transfer-time estimate follows below)
[Network diagram: Fermilab/CDF connected to CHEP/KNU, SNU, and SKKU over KOREN (155 Mbps) and APII (45 Mbps)]
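To put the measured rates in perspective, a minimal sketch of the implied transfer times follows; the 1 TB dataset size and the 80% effective link efficiency are illustrative assumptions, not values from the slides.

    # Rough transfer-time estimate at the measured link speeds (illustrative only).
    def transfer_days(size_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
        """Days needed to move size_tb terabytes over a link_mbps link at the given efficiency."""
        size_bits = size_tb * 1e12 * 8
        seconds = size_bits / (link_mbps * 1e6 * efficiency)
        return seconds / 86400.0

    for label, mbps in [("CHEP - SKKU (~40 Mbps)", 40), ("CHEP - CDF (~20 Mbps)", 20)]:
        print(f"{label}: {transfer_days(1.0, mbps):.1f} days per TB")   # ~2.9 and ~5.8 days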

Status (KCAF)
- KCAF construction
  - 6 worker nodes (12 CPUs) for initial tests; more worker nodes by the end of this year
  - 1 TB disk storage
  - 1 + (4) TB disk for the network buffer between CHEP and CDF
- KCAF software
  - Linux-based CDF software installed and tested
  - PBS installed and working successfully (a submission sketch follows below)
- Main project: CDF Monte Carlo data production
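Below is a minimal sketch of how a Monte Carlo production job might be handed to the KCAF PBS queue from Python; the queue name, the job script contents, and the run_cdf_mc.sh wrapper are hypothetical placeholders, not the actual KCAF configuration.

    # Sketch of a PBS job submission for a CDF MC production task (hypothetical names).
    import subprocess
    import tempfile

    JOB_SCRIPT = "\n".join([
        "#!/bin/sh",
        "#PBS -N cdf_mc_test",             # job name
        "#PBS -q kcaf",                    # hypothetical queue name
        "#PBS -l nodes=1:ppn=1",           # one CPU on one worker node
        "cd $PBS_O_WORKDIR",
        "./run_cdf_mc.sh",                 # hypothetical wrapper around the CDF MC executable
        "",
    ])

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(JOB_SCRIPT)
        script_path = f.name

    # qsub prints the new job identifier on stdout.
    result = subprocess.run(["qsub", script_path], capture_output=True, text=True)
    print("submitted job:", result.stdout.strip())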

Status
- A disk storage server (~0.5 TB) has been constructed at SKKU
- A second disk storage server (~2 TB) is expected by the end of this month
[Diagram: SKKU disk storage server connected to CHEP/KNU worker nodes over KOREN at ~40 Mbps]

Status
- Job submission from SKKU to the CHEP worker nodes and output data transfer from CHEP to the SKKU data storage server are planned, using NFS or a Globus storage element (SE); a transfer sketch follows below
- Grid software and middleware
  - Globus 2.0: 10 nodes (CHEP), 1 node (SNU), 1 node (Fermilab)
  - Private CA constructed
  - GridFTP, Replica Catalog, and Replica Management installed
  - Grid testbed tested: CHEP - SNU
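A minimal sketch of the planned output transfer with GridFTP (globus-url-copy, shipped with Globus 2.0) is shown below; the hostnames and file paths are hypothetical placeholders, and a valid Grid proxy (grid-proxy-init) is assumed to already exist.

    # Sketch: third-party GridFTP copy of an output file from CHEP to the SKKU storage server.
    import subprocess

    SRC = "gsiftp://cluster.chep.knu.ac.kr/data/mc/output_001.root"   # hypothetical CHEP path
    DST = "gsiftp://storage.skku.ac.kr/cdf/mc/output_001.root"        # hypothetical SKKU path

    # globus-url-copy moves the data directly between the two GridFTP servers.
    subprocess.run(["globus-url-copy", SRC, DST], check=True)

An NFS-based alternative would simply write the output onto an exported SKKU directory mounted on the CHEP worker nodes.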

Prospects
- KCAF at CHEP (Tier 1)
  - ~10% of CDF Run II data processing and storage (~500 TB and ~200 nodes), for both real and MC data
- SNU/SKKU (Tier 2)
  - Disk storage system (~50 TB)
  - Local clusters (mini-KCAF, ~60 nodes)
- Network
  - CHEP - Fermilab: ~70 Mbps required for Run IIb (see the rate estimate below)
  - CHEP - SNU/SKKU: ~40 Mbps
- Implementation of CDF data processing in a full Grid environment
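As a closing cross-check, the sketch below estimates the sustained rate needed to ship KCAF's ~10% share of the expected Run IIb data volume (~3 PB/year, from the "CDF Grid" slide); spreading the transfer evenly over a calendar year is an assumption for illustration. The result comes out close to the ~70 Mbps quoted for CHEP - Fermilab.

    # Sustained rate needed to transfer ~10% of ~3 PB/year to KCAF.
    run2b_pb_per_year = 3.0      # expected Run IIb data volume (from the "CDF Grid" slide)
    kcaf_share = 0.10            # fraction processed and stored at KCAF (from this slide)
    seconds_per_year = 3.15e7    # assume transfers spread over a full calendar year

    bytes_per_second = kcaf_share * run2b_pb_per_year * 1e15 / seconds_per_year
    required_mbps = bytes_per_second * 8 / 1e6
    print(f"sustained rate for a 10% share: ~{required_mbps:.0f} Mbps")   # ~76 Mbps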