RAL Plans for SC2 Andrew Sansum Service Challenge Meeting 24 February 2005.

RAL SC2 Plans

24 February 2005, RAL SC2 Plans

Site Network (diagram): SAR / firewall / Router A connect the Tier 1 to SuperJanet4 (2*1Gb) and to UKLIGHT (2*1Gb lightpath to CERN via NetherLight); Netscreen firewall rated at 8Gb/s
– Peering to CERN via NetherLight (2*1Gb/s): in place
– RAL => UKLIGHT: by Friday

Tier1 LAN (diagram): disk and CPU nodes attach to a Nortel 5510 stack (80Gbit), which uplinks at N*1 Gbit (1 Gbit/link) to a Summit 7i and on to the site router

dCache Evolution (diagram: yesterday's layout vs today's)

SC2 Disk Server
– Server: dual 2.8GHz Xeon, 7501 chipset, 2*1Gbit uplink (1 in use)
– EonStor array: U320 SCSI, 15+1 RAID 5, 250GB SATA drives
– 1 LUN per device, ext2
– 2 dCache disk pools
– 120MB/s
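Back-of-envelope arithmetic (a sketch, not from the slides) shows how closely the quoted 120MB/s array throughput matches the single 1Gbit uplink in use, and why the second NIC matters:

```python
# Sanity check: is the SC2 disk server limited by its disk array or its network?
# Figures from the slide: 120 MB/s array throughput, one 1 Gbit/s uplink in use.
# The 90% protocol-efficiency figure is an assumption, not from the slides.

def link_capacity_mb_per_s(gbits: float, efficiency: float = 0.9) -> float:
    """Usable MB/s of a link, with an assumed protocol-overhead efficiency."""
    return gbits * 1e9 * efficiency / 8 / 1e6

uplink = link_capacity_mb_per_s(1)  # one 1 Gbit NIC in use
array = 120.0                       # MB/s from the RAID array

print(f"uplink ~{uplink:.1f} MB/s vs array {array:.0f} MB/s")
# At ~90% efficiency the single uplink (~112.5 MB/s) is slightly below the
# array's 120 MB/s, so the network, not the disk, caps a one-NIC server.
```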

dCache Versions
– d-cache-core
– d-cache-lcg
– d-cache-opt
– d-cache-client
– vdt_globus_info_server-VDT1.1.14rh7gcc3-4
– vdt_globus_essentials-VDT1.1.14rh7gcc3-4.lcg1
– vdt_globus_sdk-VDT1.1.14rh7gcc3-4.lcg1
– vdt_globus_info_essentials-VDT1.1.14rh7gcc3-4

Production dCache SRM (diagram): SRM node and dCache database (postgres) behind a Nortel 5510 stack (80Gbps), uplinked via a Summit 7i to SJ4 (2*1Gb/s); GridFTP door + disk pools for ATLAS, LHCb, CMS and DTEAM (4TB, 4TB, 6TB, 6TB)

CMS dCache Network Load (graph): at peak, 500Mb/s sustained until RAL caught up with the CERN stager. Not yet optimised – RAL lightly loaded

CMS Production Network Load (graph): at peak, 500Mb/s sustained (full end-to-end, including cataloguing etc.) until RAL caught up with the CERN stager. Not yet optimised – RAL lightly loaded

SRM Server Load (graph): Postgres moved off the head node. Postgres CPU load very high – mainly waiting on UPDATE. Needs tuning?
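The tuning question above usually comes down to buffer, WAL and checkpoint settings for an UPDATE-heavy workload. A hedged postgresql.conf sketch of the knobs to examine (the parameter names are real PostgreSQL settings of that era; the values are illustrative assumptions, not RAL's actual configuration):

```ini
# postgresql.conf fragment - illustrative values only, not RAL's actual config
shared_buffers = 8192          # more buffer cache (8KB pages) for the dCache DB
checkpoint_segments = 16       # fewer, larger checkpoints for UPDATE-heavy load
wal_buffers = 64               # batch WAL writes before flushing
fsync = true                   # keep durability; tune I/O rather than disable it
```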

Postgres Server Load (graph)

SC2 Infrastructure
– Build another PRODUCTION dCache SRM
– Build from kickstart – therefore quick
– Build on PRODUCTION SRM experience
– Build in production monitoring: exception monitoring (SURE/Nagios), Ganglia monitoring, helpdesk, etc.
– Build modular – avoid bottlenecks
– Decouple network transfers from disk servers – allows easier network/kernel tuning
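"Build from kickstart" means the node install is fully scripted and repeatable, which is what makes the rebuild quick. A minimal, purely illustrative ks.cfg fragment (the package and service names are assumptions for the sketch, not the actual RAL profile):

```ini
# ks.cfg sketch - illustrative only, not the RAL production kickstart
install
text
%packages
@ base
d-cache-core
d-cache-opt
%post
# enrol the freshly built node in monitoring at install time
chkconfig gmond on    # Ganglia daemon
```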

SC2 dCache SRM (diagram): SRM and dCache database behind a Nortel 5510 stack (80Gbps); a Summit 7i uplinks to UKLIGHT (2*1Gb/s) and SJ4 (2*1Gb/s); 8 dCache pools on four 6TB servers, with diskless GridFTP doors handling the WAN transfers

SC2 dCache SRM (diagram, variant): same layout, but with the GridFTP doors running on the pool servers

SC2 dCache SRM (diagram, alternative): or directly attach the disk servers as both pools and GridFTP doors

SC2 dCache SRM (diagram): after the Service Challenge, re-deploy the capacity while retaining the functionality

Data Pathways (diagram): experiment production traffic reaches the Production SRM over SJ4; Service Challenge traffic reaches the SC2 SRM over SJ4 and UKLIGHT

MSS Integration
– Back end from dCache to MSS (STK plus RAL ADS) written and tested. Just beginning experiment testing (CMS initially).
– MSS still bottlenecked at the MSS cache disk – probably will not exceed 60MB/s (writing) until new hardware is delivered (in time for July).
– Will not commit to MSS testing for SC2, but may turn the dCache MSS back end on at some stage during SC2 and see what we can achieve.

Timetable
Network
– Access agreement for UK Light
– Peering agreed with NetherLight
– Attach to UKLIGHT 25 February
– Start throughput tests to CERN, W/B 28 Feb?
Hardware
– All hardware now drained from production
– Extra NICs purchased
– Disk stress tests complete – H/W ready to deploy
dCache SRM
– Just starting deployment (2-3 days)
– Ready for SRM tests from CERN to RAL, W/B 7 March
– Peak load test 2Gb/s sometime in W/B 14 March
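For scale, the 2Gb/s peak-load target in the final milestone implies a daily volume of roughly 20TB; a quick sketch of the arithmetic:

```python
# Daily data volume implied by a sustained 2 Gb/s transfer rate
# (the peak load test planned for W/B 14 March).

def tb_per_day(rate_gbps: float) -> float:
    """Terabytes moved per day at a given sustained rate in Gbit/s."""
    bytes_per_second = rate_gbps * 1e9 / 8
    return bytes_per_second * 86_400 / 1e12

print(f"{tb_per_day(2.0):.1f} TB/day")  # 2 Gb/s sustained -> 21.6 TB/day
```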