Data transfers and storage Kilian Schwarz GSI

GSI – current storage capacities
(Diagram: vobox, LCG RB/CE, and the GSI batch farm)
– GSI batch farm, ALICE cluster: 67 nodes / 480 cores for batch & GSIAF (PROOF/batch), with 81 TB of directly attached disk storage, of which 35 TB serve ALICE::GSI::SE::xrootd (Grid)
– GSI batch farm, common batch queue: 132 nodes (Grid CE)
– Lustre cluster: 84 TB
– 3rd party copy test cluster: 10 TB
– Local xrd cluster: 15 TB, gStore backend
– WAN: CERN/GridKa – GSI link currently 150 Mbps, soon 1 Gb/s

Monitoring

Future for the ALICE Tier 2 & 3 at GSI
Remarks:
– 2/3 of that capacity is for the Tier 2 (ALICE central, fixed via the WLCG MoU), 1/3 for the Tier 3 (local usage, may also be used via Grid)
– currently: 10% of the ALICE T2 proposal is running; the plan is to go for 20% of the ALICE T2 capacities
– question: how to distribute this among the GSI storage facilities?
Ramp-up per year (total / of which Tier 2, e.g. 2/3 of 120 TB = 80 TB):
– CPU (kSI2k): 400/…, …
– Disk (TB): 120/80, 300/200, 390/260, 510/…
– WAN (Mb/s): …

Transfers
Principal method: within AliEn
– mirror -t find_collection alice::gsi::se
– listTransfer -id master
Before: find -c find_collection file
After some initial difficulties it now works fine!
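A minimal sketch of this workflow in the AliEn shell; the collection name, LFN pattern, and transfer ID below are illustrative placeholders, and the exact option syntax may differ between AliEn versions:

```
# inside aliensh -- names and paths are illustrative placeholders
# 1) build a collection of the files to be transferred
find -c myCollection /alice/sim/2008/ *.root

# 2) schedule replication ("mirror") of the collection to the GSI SE
mirror -t myCollection alice::gsi::se

# 3) follow the state of the resulting master transfer
listTransfer -id <masterTransferId>
```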

ALICE::GSI::SE usage

(almost) all requested transfers DONE

Test cluster for 3rd party copy exercise
– Currently 2 file servers and 1 local redirector on a separate machine
– Current status: xrootd is up and working; data can be pulled via this cluster from CERN (voalice04)
– Plan: set up a test SE with the new FTD supporting the 3rd party copy method, and connect it to the global ALICE redirector (see Fabrizio's presentation yesterday)
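For the current pull path, a file can in principle be copied out of the CERN SE through the local cluster with xrdcp; the host names and file paths below are hypothetical placeholders, not the actual GSI configuration:

```
# pull a file from the CERN xrootd SE (voalice04) into the test cluster;
# host names and paths are illustrative placeholders
xrdcp root://voalice04.cern.ch//alice/data/some/file.root /data/xrd/file.root

# once the test SE is attached to the global ALICE redirector, clients could
# instead address the redirector, which locates a server holding the file:
xrdcp root://alice-global-redirector.example//alice/data/some/file.root /tmp/file.root
```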

GSI 1 Gb upgrade – current transfer method using AliEn FTD
(Diagram: the global Grid ↔ ALICE vobox ↔ GSI batch farm and file servers)
– The whole traffic is tunneled via the vobox.
– It is therefore only necessary to put the vobox into a separate subnet with a 1 Gb connection to the outside world, with a safe connection to the internal GSI network via a firewall or network bridge.
– No real bottleneck: modern machines should be able to handle a 1 Gb in/out connection without problems.
– 1 Gb should also be enough for GridKa to serve its T2s.

GSI 1 Gb upgrade – xrootd 3rd party copy
(Diagram: the global Grid ↔ vobox and ALICE::GSI::SE ↔ GSI batch farm/GSIAF)
– Possibility: put the vobox and the SE into a separate subnet with a 1 Gb connection to the outside world.
– But how to handle a safe connection to all WNs and GSIAF without introducing unnecessary bottlenecks?
– How to separate traffic within the scientific network (ALICE) from other GSI traffic (800/200)?

Technical issues
In addition to the subnet "problem", when using 3rd party copy and the AliEn SE as the interface between GSI and the outside world we need:
– The possibility to delete files from the SE again, otherwise it will become pretty full (possibly sufficient to do this during the staging/copy process via the SE).
– In that context also: a way to list the content of one specific SE with AliEn methods (see the sketch below).
– A way to limit bandwidth with the 3rd party transfer method, or will everything on offer always be used? A machinery for (input + output) traffic coordination for globalized xrootd clusters is reportedly planned; does that solve the issue?
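The closest existing handle mentioned here is per file rather than per SE: in the AliEn shell, whereis shows on which storage elements a given LFN is replicated, which at least allows checking whether a copy has landed on (or been removed from) a particular SE. The LFN below is an illustrative placeholder; a true per-SE listing is exactly the missing feature discussed above:

```
# inside aliensh -- the LFN is an illustrative placeholder
whereis /alice/cern.ch/user/k/kschwarz/some/file.root
# lists the PFNs / storage elements (e.g. ALICE::GSI::SE) holding replicas
```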

T2 opinion poll
Opinion poll: how do other ALICE T2 centres configure their sites with respect to network and firewall?
– Legnaro: no firewall, gLite + AliEn, no subnet, 1 Gb/s, WNs in a private network, SE seen via an xrd door to dCache
– Subatech: 1 Gb/s, firewall covering the whole site, no subnet or DMZ, no proxy to connect to the outside world, DPM
– Catania: no firewall, no subnet, DPM, 1 Gb/s (10 Gb/s next year), iptables instead of a hardware firewall, AliEn & gLite
– NIHAM: WNs and data servers in the same private subnet with direct communication possible, xrootd SE, only a software firewall, soon a 10 Gb connection, 2/3 plain AliEn (32 and 64 bit), PBS batch system