Computing for the ILC Experiment
Hiroyuki Matsunaga, Computing Research Center, KEK

Outline
- KEKCC status
- Grid and distributed computing
- Belle II and ILC
- Prospects for the future
- Summary
This talk focuses on data analysis; common services (such as web and e-mail) are not covered.

Computing system at KEK (KEKCC)
- In operation since April 2012
- The only large system for data analysis at KEK; the previous Belle system was merged into it
- KEKCC resources are shared among all KEK projects; Belle / Belle II will be the main user for the next several years
- Grid (and Cloud) services have been integrated into the current KEKCC, although the Cloud service has no users so far

KEKCC replacement
- The whole system has to be replaced with a new one under a new lease contract every ~3 years (next replacement in summer 2015)
- This leads to many problems:
  - The system is not available during replacement and commissioning (more than one month for data storage); replicas at other sites would help
  - The system is likely to be unstable just after the start of operations
  - Massive data transfer from the previous system takes a long time and much effort; in the worst case, data could be lost

Batch server
- Xeon, ~3000 cores, Scientific Linux 5
- Grid and local jobs are all managed by the LSF batch scheduler
- Grid jobs arrive through the CREAM Computing Elements (CE), via the WMS/LB services, and are handed to LSF alongside locally submitted jobs (submission sketch below)
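For illustration only, a minimal sketch of how a local user might hand a job to the same LSF queues from Python; the queue name and job script are hypothetical, and the actual KEKCC queue names and options differ:

    import subprocess

    # Hypothetical queue name ("s") and job script; adjust to the site's actual setup.
    cmd = [
        "bsub",               # LSF submission command
        "-q", "s",            # target queue (assumed name)
        "-o", "job_%J.log",   # job log file; %J expands to the LSF job ID
        "./run_analysis.sh",  # user job script (assumed to exist and be executable)
    ]
    subprocess.run(cmd, check=True)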

CPU Utilization

Inefficient use of CPU
- CPU utilization reaches only up to ~80% even when the job slots are almost full
- I/O bound: some Belle jobs use index files and are inefficient
- Misuse by end users:
  - Software bugs (which should be caught before submitting many jobs)
  - Very short jobs (less than a few minutes), where the overhead of the batch scheduler (and the Grid) gets relatively high
- We have to monitor such jobs and guide the offending users on a daily basis (see the sketch below)
- Grid users are not many so far, but user training is definitely needed
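A minimal sketch of the kind of daily check described above, assuming a simplified accounting dump with one job per line ("jobid user walltime_seconds"); a real check would parse LSF accounting output (e.g. from bacct) instead:

    from collections import Counter

    SHORT_JOB_SECONDS = 180  # "very short" threshold: a few minutes

    def flag_short_jobs(accounting_lines):
        """Count very short jobs per user from a simplified accounting dump."""
        short_per_user = Counter()
        for line in accounting_lines:
            jobid, user, walltime = line.split()
            if int(walltime) < SHORT_JOB_SECONDS:
                short_per_user[user] += 1
        return short_per_user

    if __name__ == "__main__":
        sample = ["1001 alice 42", "1002 bob 7200", "1003 alice 15"]
        for user, n in flag_short_jobs(sample).most_common():
            print(f"{user}: {n} very short jobs")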

CPU utilization for Grid: Belle II MC production

Storage system (HSM)
- 2 PB disk cache (GPFS) in front of 16 PB tape capacity (HPSS)
- GHI (GPFS-HPSS Interface) as a backend for the GridFTP servers
- Storage Elements (SE): a StoRM frontend with four GridFTP servers, and a DPM head node with two GridFTP servers
- Grid allocations: 600 TB for Belle, 350 TB for ILC, 80 TB for others
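For illustration, a minimal sketch of copying a local file to an SRM Storage Element such as the ones above, assuming the python gfal2 bindings are installed and a valid Grid proxy exists; the endpoint and paths are hypothetical:

    import gfal2

    ctx = gfal2.creat_context()

    params = ctx.transfer_parameters()
    params.overwrite = True   # replace the destination if it already exists
    params.timeout = 600      # transfer timeout in seconds

    src = "file:///home/user/output.root"                      # local file (assumed)
    dst = "srm://se.example.kek.jp/grid/ilc/user/output.root"  # hypothetical SE endpoint and path

    ctx.filecopy(params, src, dst)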

Grid at KEK
- KEK is not involved in WLCG, but we have deployed gLite and NAREGI middleware for several years
- ILC and Belle (II) are the main Grid users at KEK
- VO Management Service: Belle II is hosted at KEK; ILC (ILD + SiD) at DESY
- DIRAC: middleware for access to distributed resources (Grid, Cloud, and local resources)
  - Originally developed by and for LHCb; now used by Belle II and ILC
  - Needs customization for each computing model (see the job-submission sketch below)
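As a flavour of how DIRAC is driven, a minimal job-submission sketch using the DIRAC Python API; the job name, executable, and site name are hypothetical, and a configured DIRAC client with a valid proxy is assumed:

    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # initialise the DIRAC client environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("ilc_sim_test")                       # hypothetical job name
    job.setExecutable("run_sim.sh", arguments="100")  # hypothetical user script and argument
    job.setDestination("LCG.KEK.jp")                  # hypothetical DIRAC site name

    dirac = Dirac()
    result = dirac.submitJob(job)
    print(result)  # S_OK/S_ERROR structure; contains the job ID on success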

(Slide from T. Hara, KEK)

Preparation for the Belle II Grid
- KEK cannot afford the huge resources that will be needed in the coming years at the current level of budget
  - Technology evolution is not fast enough; other Grid sites are perhaps in a similar situation
  - Data migration to each new system will become more difficult
- Human resources are not sufficient on either the computing or the experiment side
  - The current service quality is not sufficient for the host laboratory (i.e. a Tier 0); we need to provide more services
  - Some tasks can be outsourced, but more lab staff are still needed
- Preparation for computing started late

ILC case
- Smaller amount of data compared to Belle II (in the first several years), but still similar to the current level of an LHC experiment
- More collaborators (and sites) worldwide
  - All collaborators must be able to access the data equally
  - Data distribution would be more complex
  - More effort is needed for coordination and monitoring of the distributed computing infrastructure
- In Belle II, most of the software and services rely on WLCG; we should consider how to proceed for ILC

Worldwide LHC Computing Grid (WLCG)
- Supports the 4 LHC experiments
- Close collaboration with EGI (European Grid Infrastructure) and OSG (Open Science Grid, US)
- EGI and OSG support many fields of science (biology, astrophysics, …), but their future funding is not clear
- We discussed with Ian Bird (WLCG Project Leader) in October; he proposed expanding WLCG to include other HEP experiments (Belle II, ILC, etc.) and other fields
- Still being discussed within WLCG

Future directions
- Sustainability is a big problem
  - Future funding is not clear in the EU and US
  - Maintaining Grid middleware by ourselves becomes a heavy burden given the funding situation
  - Try to adopt "standard" software and protocols as much as possible; CERN and many other sites are deploying Cloud
  - Operational cost should be reduced by streamlining services
- Resource demands will not be affordable for WLCG (and Belle II) in the near future
  - We need a better (more efficient) computing model and software, e.g. better use of many cores (toy sketch below)
  - Exploit new technology: GPGPU, ARM processors
  - Collaborate with other fields (and the private sector)
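As a toy illustration of "better use of many cores" (not any experiment's actual framework), a minimal event-parallel sketch with Python's multiprocessing; process_event is a stand-in for real reconstruction or analysis code:

    from multiprocessing import Pool

    def process_event(event_id):
        # Stand-in for real per-event reconstruction/analysis work.
        return (event_id * 2654435761) % 97  # arbitrary deterministic toy result

    if __name__ == "__main__":
        event_ids = range(100000)
        with Pool() as pool:  # defaults to one worker per available core
            results = pool.map(process_event, event_ids, chunksize=1000)
        print("processed", len(results), "events")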

Summary
- Belle II is a big challenge for KEK: its first full-scale distributed computing
- For ILC, Belle II will be a good exercise, and the lessons learned will be beneficial; it would be good for ILC to collaborate with Belle II
- It is important to train young students and postdocs who will join ILC in the future
- Keep up with technology evolution; better software reduces the processing resources needed
- Education of users is also important
- Start preparations early: LHC computing had been under consideration since ~2000 (more than 10 years before the Higgs discovery)