KEK GRID for ILC Experiments. Akiya Miyamoto, Go Iwai, Katsumasa Ikematsu (KEK). LCWS 2010, 27 March 2010.

Introduction
GRID is an infrastructure for large-scale international research. GRID provides:
- resources for large-scale computing and large-scale data storage
- an international/inter-regional communication basis
GRID has been used extensively in the ILD LOI studies:
- for MC production
- for data sharing between Japan and Germany/France/UK

IT Network in Japan and GRID
(Map of network sites: Tohoku Univ., KEK, Univ. of Tsukuba, Nagoya Univ., Kobe Univ., Hiroshima IT, Hiroshima Univ. (ALICE), U. of Tokyo (ATLAS).)
- Major HEP projects: Belle, J-PARC, ATLAS (ongoing); ILC, Belle2 (future)
- Also covering material science, bio-chemistry, and so on, using synchrotron light and the neutron source, plus radiotherapy as technology transfer
- KEK has a role in supporting university groups in these fields, including Grid deployment and operation

SINET3
Round-trip times from KEK:
- IHEP/KISTI: ~100 ms
- FNAL: ~200 ms
- DESY/IN2P3: ~300 ms
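These round-trip times matter because a single TCP stream with an untuned window cannot fill a long path. A minimal sketch of the bandwidth-delay arithmetic, assuming a common default window of 64 KB (an assumption, not a measured value); note that the 300 ms path caps out near the ~200 kB/s per port reported below for IN2P3/DESY transfers:

```python
# Minimal sketch: why a long round-trip time caps single-stream TCP throughput.
# Per-stream throughput is bounded by window_size / RTT (bandwidth-delay product).

def max_tcp_throughput(window_bytes: float, rtt_sec: float) -> float:
    """Upper bound on single-stream TCP throughput, in bytes per second."""
    return window_bytes / rtt_sec

WINDOW = 64 * 1024  # assumed default TCP window without tuning

for site, rtt_ms in [("IHEP/KISTI", 100), ("FNAL", 200), ("DESY/IN2P3", 300)]:
    rate = max_tcp_throughput(WINDOW, rtt_ms / 1000.0)
    print(f"KEK <-> {site:11s} RTT {rtt_ms:3d} ms -> <= {rate / 1024:4.0f} KiB/s per stream")
```

With these assumptions the bound is ~640 KiB/s at 100 ms and ~213 KiB/s at 300 ms, which is why long-haul transfers open several ports (streams) per job.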

GRID infrastructures

Middleware      gLite      NAREGI      Gfarm     SRB     iRODS
Belle (Belle2)  Using      Planning              Using
Atlas           Using
Medical Apps    Using      Developing                    Planning
ILC             Using      Planning
J-PARC          Planning                                 Testing
                (LCG)      (RENKEI)    (RENKEI)

- KEKCC supports both LCG and NAREGI/RENKEI.
- Many Japanese HEP groups have joined LCG.
- The NAREGI middleware is being deployed as the general-purpose e-science infrastructure in Japan.
- RENKEI is developing a system that provides a seamless user environment between local resources and multiple grid environments.

GRID for ILC
Two VOs have been used:
- CALICE-VO: test-beam data analysis and MC; standard data processing on GRID
- ILC-VO: needs huge CPU resources for the studies, available only on GRID; standard MC samples (~50 TB) are kept on GRID for sharing
Status:
- A typical data-transfer rate from IN2P3/DESY to KEK is ~200 kB/s per port. Frequent timeouts on transfers of ~2 GB were cured by removing a timeout at IN2P3; a retry wrapper like the sketch below also helps.
- Catalog access adds overhead: ILD DSTs are many small files, since the output of each MC job is limited by its CPU time. MC and DST production run at DESY/IN2P3; DSTs are merged into large files and then replicated to KEK.
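A hedged sketch of such a retry wrapper around the gLite lcg-cp client. The logical file name and local destination below are hypothetical, and lcg-cp options vary between gLite releases; this illustrates the pattern, not the production script.

```python
# Sketch: retry a gLite file copy to ride out transient timeouts, roughly the
# kind of wrapper useful for IN2P3/DESY -> KEK transfers. Paths are hypothetical.
import subprocess
import time

def copy_with_retries(src: str, dst: str, vo: str = "ilc",
                      attempts: int = 3, backoff_sec: int = 60) -> bool:
    """Run lcg-cp up to `attempts` times, sleeping between failures."""
    for i in range(1, attempts + 1):
        result = subprocess.run(["lcg-cp", "--vo", vo, src, dst])
        if result.returncode == 0:
            return True
        print(f"attempt {i} failed (rc={result.returncode}); retrying...")
        time.sleep(backoff_sec)
    return False

ok = copy_with_retries(
    "lfn:/grid/ilc/prod/dst/sample_dst.slcio",   # hypothetical logical file name
    "file:///data/ilc/sample_dst.slcio")         # hypothetical local destination
```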

A typical GRID performance
File transfer IN2P3 → Kobe: 184 files / 210 GB in 13 hours, as part of the ILD LOI study in Dec. '08, using multiple ports per job.
- Pedestal in transfer time: ~20-60 s per file, so transfers of files under ~100 MB are not efficient.
- Instantaneous transfer rate: 4 MB/s on average, 10 MB/s maximum. Not great, but it has been used successfully for real work.
(Plots: data size vs. time, and transfer rate.)
From Dec. '08 to Feb. '09, O(10 TB) of data were exchanged through GRID. This was crucial for the successful LOI studies.
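The pedestal plus peak-rate numbers above imply a simple effective-throughput model that explains both the <100 MB inefficiency and the motivation for merging DSTs before replication. A minimal sketch, assuming a 40 s pedestal (the midpoint of the quoted 20-60 s) and the quoted 4 MB/s average rate:

```python
# Model: each file pays a fixed startup "pedestal" before data flows at the
# sustained rate, so small files see a much lower effective throughput.

def effective_rate_mb_s(size_mb: float, pedestal_sec: float = 40.0,
                        peak_mb_s: float = 4.0) -> float:
    """Achieved MB/s for one file, given an assumed fixed startup cost."""
    return size_mb / (pedestal_sec + size_mb / peak_mb_s)

for size in [10, 100, 1000, 2000]:
    print(f"{size:5d} MB file -> {effective_rate_mb_s(size):4.2f} MB/s effective")
```

Under these assumptions a 10 MB file achieves only ~0.2 MB/s, while a 2 GB merged file approaches the full 4 MB/s, which is why the workflow merges small DSTs into large files before replicating them to KEK.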

Resource scale at KEK (LCG only; the other infrastructures are counted separately)
- CPU resources in Oct 2009: ~0.3 MSI2k

Resource scale at KEK (LCG only; the other infrastructures are counted separately)
As of Mar 2010: ~6 MSI2k of computing resources (recently upgraded)
- 5 computing elements
- half of the nodes migrated from SL4 to SL5
- ~200 nodes: ~1,600 cores / ~400 CPUs

Storage resource at KEK
- DPM as the SRM
- Backend storage device: IBM HPSS with a TS3500 tape library, max 3,000 TB capacity, shared with other VOs and with batch-server users
- An ILC-dedicated space is now in preparation

(Plot: storage usage. Read: 1,207 of 1,285 GB; Write: 2,837 of 3,830 GB.)

Conclusion
- GRID was used successfully during the LOI era and played an important role in data transfer between Japan and Europe.
- In the last 12 months, the GRID resources at KEK have increased significantly.
- We hope to be able to contribute significantly to the coming MC production.

Backup