
ALICE Tier-2 at Hiroshima
Toru Sugitate of Hiroshima University, for the ALICE-Japan GRID Team
LHCONE workshop at the APAN 38th Meeting, National Chi Nan University, Taiwan, Aug. 13, 2014

Our Physics Goal
ALICE has unique capabilities for measuring various particles over wide p_T ranges (e.g. … GeV) with excellent PID in the extreme particle densities at the LHC, in order to
 study the Quark-Gluon Plasma (QGP),
 understand the properties of strong QCD, and then
 reveal the primordial Universe filled with the QGP.

ALICE Tiers in WLCG
 53 in Europe
 10 in Asia: 8 operational, 2 future
 2 in South America: 1 operational, 1 future
 8 in North America: 4 operational, 4 future, 1 past
 2 in Africa: 1 operational, 1 future
 1 T0; 7 T1 operational, 2 T1 future

A Little Information about Us

ALICE Tier-2 at Hiroshima
The ALICE T2 site "JP-HIROSHIMA-WLCG" runs the grid middleware EMI-3 on SL, kept as stable as possible.
 GRID services: APEL, sBDII, CREAM-CE, XROOTD, DPM-SE, VOBOX… kept as compact as possible.
 WN resources: 1164 Xeon cores in total (… × 2 CPU × 32 boxes; … × 2 CPU × 20 blades; … × 2 CPU × 26 blades; … × 2 CPU × 3 blades; … × 2 CPU × 42 blades).
 Storage capacity: 408 TB of disk on 6 servers, and no MS.
 Around 2/3 of the resources are deployed to the ALICE GRID; the rest form a local cluster.
 Network bandwidth: 1 Gbps onto the 40 Gbps SINET4 in Japan.
 WLCG support by ASGC in Taiwan.
 Responsible: Prof. Toru Sugitate; operated by TS with remote technical support by a part-time SE of SOUM Corp. in Tokyo.
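The services listed above listen on well-known TCP ports, so a reachability probe is a quick first sanity check when a site drops out of monitoring. Below is a minimal sketch; the host names are hypothetical (modeled on the grid01–grid06 roles shown on the next slide) and the ports are the conventional defaults for each service, not values confirmed for this site.

```python
# Minimal reachability probe for the grid services named on this slide.
# Host names are hypothetical placeholders; ports are the usual defaults
# (site BDII 2170, CREAM-CE 8443, XRootD 1094, DPM SRM 8446).
import socket

SERVICES = {
    "site BDII (grid02)": ("grid02.example.hiroshima-u.ac.jp", 2170),
    "CREAM-CE  (grid06)": ("grid06.example.hiroshima-u.ac.jp", 8443),
    "XRootD    (grid04)": ("grid04.example.hiroshima-u.ac.jp", 1094),
    "DPM SRM   (grid04)": ("grid04.example.hiroshima-u.ac.jp", 8446),
}

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "up" if probe(host, port) else "DOWN"
        print(f"{name:20s} {host}:{port:<5d} {status}")
```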

Site Network Configuration (since Feb. …)
[Diagram: DMZ and TRUSTED zones behind a Fortigate 200B firewall; disk servers and WNs on Nexus 5596/2248 ×2 L3 switches with 10 Gbps (and 40 Gbps) internal links; 1 Gbps uplink via a Catalyst 3560E to the KEK HS-L3-01 router on HiNET, toward the Hiroshima DC.]
 Grid nodes: grid01: VOBOX/CVMFS; grid02: sBDII/APEL; grid03/05: CVMFS/Squid; grid04: Xrootd/DPM-SE; grid06: CREAM-CE.
 Disk servers: EqualLogic PS6510E (2 TB SATA × 48, RAID6); JCS RVAX-4U (1 TB SATA × 144 and 3 TB SATA × 24, RAID6).
 ALICE-T2: 864 Xeon cores with 2 GB/core; 216 TB storage with RAID6.
 ALICE-T3 / local cluster: 300 Xeon cores with 1 GB/core; 192 TB storage with RAID6.
 Design points: secure/robust subnets; 10 Gbps internal connection (in part); space/energy saving.
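As a quick cross-check, the T2/T3 split on this slide is consistent with the totals quoted on the previous slide; a few lines of Python make the arithmetic explicit (all input numbers come from the slides, the steady-state "2/3" claim is the one being checked):

```python
# Cross-check of the resource split quoted on the slides above.
t2_cores, t3_cores = 864, 300          # ALICE-T2 vs. T3/local cluster
t2_disk_tb, t3_disk_tb = 216, 192      # RAID6 storage per cluster

total_cores = t2_cores + t3_cores      # 1164, matching "1164 Xeon cores in total"
total_disk = t2_disk_tb + t3_disk_tb   # 408 TB, matching "408 TB of disk on 6 servers"

# "Around 2/3 of the resources are deployed to the ALICE GRID":
print(f"cores: {total_cores}, T2 share {t2_cores / total_cores:.0%}")    # ~74%
print(f"disk:  {total_disk} TB, T2 share {t2_disk_tb / total_disk:.0%}") # ~53%
```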

Daily Score in July 2014

ALICE jobs in July 2014
 864 Xeon cores in the CREAM-CE.
 Stably accepting over 800 concurrent jobs and processing around 7,000 jobs a day.
 These jobs produce Gbps-level traffic in the WAN at peaks.
[Plots: active jobs in the last 12 months, spanning the EMI-2 → EMI-3 transition, at around 800 jobs; WAN traffic at the external and internal ports of the FW; writing (top) and reading (bottom) traffic to storage, of order 100 MB/s.]
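The two job figures above are mutually consistent: sustaining roughly 800 concurrent jobs while completing about 7,000 jobs per day implies an average job duration of a few hours, and the ~100 MB/s storage traffic on the plots is just under 1 Gbps in network units. A back-of-envelope check (the job and traffic figures are from the slide; the steady-state assumption is mine):

```python
# Back-of-envelope check of the job numbers quoted above.
# Little's law at steady state: concurrent = completion_rate * duration.
concurrent_jobs = 800        # from the slide
jobs_per_day = 7000          # from the slide

avg_duration_h = concurrent_jobs / (jobs_per_day / 24.0)
print(f"implied average job duration: {avg_duration_h:.1f} h")  # ~2.7 h

# The storage plots peak around 100 MB/s; converted to network units:
mb_per_s = 100
gbit_per_s = mb_per_s * 8 / 1000.0   # decimal units: 1 Gbit/s = 1000 Mbit/s
print(f"{mb_per_s} MB/s ~= {gbit_per_s:.1f} Gbit/s")  # 0.8 Gbit/s, near the 1 Gbps uplink
```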

Science Info. NETwork (SINET) in Japan
 Present: on SINET4; Hiroshima DC is a 40 Gbps core node.
 T2 to KEK via the Hiroshima DC on a 1 Gbps MPLS path of HEPNet-J.
 International connection via default SINET routing.
 Recently NII announced a major upgrade to SINET5 in 2016:
  domestic nodes at 100 Gbps, and 400 Gbps / 1 Tbps later;
  direct links to US/Europe at 100 Gbps.
[Map legend: core nodes/DCs and edge nodes/DCs with their link speeds (40–10 Gbps, … Gbps).]
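For scale, the planned step from the current 1 Gbps site link to 10 Gbps changes bulk-replication times dramatically. A rough estimate, assuming full line rate with no protocol overhead (an optimistic bound; the 216 TB figure is the T2 storage size quoted earlier, used here only as an illustrative dataset size):

```python
# Rough transfer-time estimate at the current and planned link speeds.
# Assumes full line rate and no protocol overhead -- an optimistic bound.
dataset_tb = 216                       # T2 storage size from the earlier slide
dataset_bits = dataset_tb * 1e12 * 8   # decimal terabytes -> bits

for gbps in (1, 10):
    seconds = dataset_bits / (gbps * 1e9)
    print(f"{gbps:>2} Gbps: {seconds / 86400:.1f} days")
# -> at 1 Gbps about 20 days; at 10 Gbps about 2 days
```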

Summary and Plan
 The Hiroshima Tier-2 has been in operation since ….
 It stably accepts over 800 concurrent jobs and processes around 7,000 jobs a day, which produces Gbps-level WAN traffic at peaks.
 Tracing the network and tuning up the connection may increase productivity, but…
 The T2 site declares a 10 Gbps connection to SINET:
  University campus LAN upgrade to a multi-10 Gbps connection.
  2015: a 10 Gbps line between the T2 site and the Hiroshima DC.
  2016: transition to SINET5; 10 Gbps ports at the Hiroshima DC.
  2016: replace the router and FW with 10 Gbps ports (TBC).
  2016: approach to LHCONE.
  2017: replacement of the T2 equipment may back up the plan.