CSCS Status
Peter Kunszt, Manager, Swiss Grid Initiative
CHIPP, 21 April 2006

Status: Personnel

Current status
- Sysadmin is provided by the CSCS FUS group – organized internally by FUS
- PK is the official LCG CHIPP contact

Hiring more people (1, maybe 2)
- Interviews are being conducted
- The last interview will be held on Monday next week
- Expected start date for the new person: 1 August at the latest

User Support
- Formalization of interactions within and outside CSCS
- All outside contacts go through the Grid Team (PK)
- As necessary, infrastructure items are handed down to the CSCS FUS group using the internal Request Tracker
- CSCS is a full member of the DECH federation support
- User support through GGUS, see:
- Only for Phoenix directly through

Status: Phoenix

Upgrades
- The cluster has been upgraded to LCG 2.7
- Attempted to upgrade the kernel to 2.6 – failed due to network and storage hardware driver problems; a solution has been found but not applied yet
- Upgraded to the latest 2.4 kernel

Extensions
- LCG 2.7 introduced new services:
  - DPM – migration from our 'classic SE' has been successful (a quick smoke-test sketch follows below)
  - LFC – not installed yet; probably next week
  - VOBoxes dedicated to each VO
  - FTS – not necessary at CSCS for now, see below
- We ordered 5 additional Dell service nodes as a stop-gap solution until the end of the year, enabling:
  - dedicated VO boxes for ATLAS, CMS and LHCb
  - an LFC node
  - separation of UI and MON, avoiding 'service overload'
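
Since the DPM migration is new, a quick end-to-end check is worth automating. Below is a minimal smoke-test sketch using the standard lcg-utils command-line tools (lcg-cr, lcg-cp, lcg-del); the SE hostname and the LFN are hypothetical placeholders, not the real Phoenix node names.

    import os, subprocess, sys, tempfile

    SE  = "se01.example.cscs.ch"             # hypothetical DPM head node name
    VO  = "dteam"                            # test VO
    LFN = "lfn:/grid/dteam/cscs-smoke-test"  # example logical file name

    def run(cmd):
        print(" ".join(cmd))
        if subprocess.call(cmd) != 0:
            sys.exit("FAILED: " + cmd[0])

    # create a small local test file
    fd, path = tempfile.mkstemp()
    os.close(fd)
    open(path, "w").write("phoenix dpm smoke test\n")

    run(["lcg-cr", "--vo", VO, "-d", SE, "-l", LFN, "file:" + path])  # copy and register
    run(["lcg-cp", "--vo", VO, LFN, "file:" + path + ".back"])        # read it back
    run(["lcg-del", "--vo", VO, "-a", LFN])                           # delete all replicas
    os.remove(path)
    os.remove(path + ".back")
    print("DPM smoke test OK")

Run after sourcing the grid environment and obtaining a valid proxy; each step aborts loudly on a non-zero exit code so a broken service shows up immediately.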

High Bandwidth Networking – SWITCH

Status: Networking

Currently 1 Gb/s is available from CSCS through SWITCH
- 10 Gb/s is possible, especially through the new link via Domodossola/Brig
- Current usage is around 20 Mb/s (!), so CSCS does not see any need to upgrade; see the back-of-the-envelope check below
- Switches for 10 Gb/s are here but not installed yet (no need)

Dedicated Links
- It is possible to have dedicated bandwidth (lambda lightpaths) to CERN
- All GEANT connections go through the CERN POP
- GEANT can also provide dedicated links to FZK, IN2P3, CNAF and RAL
- Pricing unclear – waiting for SWITCH personnel to get back from vacation
- Need unclear – we suggest running the SC4 exercise first, see below
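
A back-of-the-envelope check of these numbers (a minimal sketch; all figures are the ones quoted in this talk):

    # Utilisation of the CSCS uplink, using the figures from this talk:
    # 1 Gb/s link, ~20 Mb/s observed usage, ~50 MB/s SC4 transfer rate.
    link_mbps    = 1000.0   # current 1 Gb/s uplink via SWITCH
    usage_mbps   = 20.0     # observed usage
    sc4_mb_per_s = 50.0     # single-file FZK<->CSCS rate from the SC4 tests

    print("current utilisation: %4.1f%%" % (100 * usage_mbps / link_mbps))
    print("one SC4 stream:      %4.1f%%" % (100 * sc4_mb_per_s * 8 / link_mbps))
    # -> 2.0% and 40.0%: a single sustained SC4 transfer already uses almost
    #    half of the 1 Gb/s link, which is why the 10 Gb/s option is kept open.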

Status: Service Challenge 4

SC4 is starting!
- Contacts with FZK are established, communication is very good
- First tests of DPM were successful – the single-file transfer rate between FZK and CSCS is about 50 MB/s
- The channels FZK-CSCS and CSCS-FZK are now available on the FZK FTS server. The endpoints of the server are:
- 1 GB and 1 MB files have been transferred successfully in both directions. Thorough testing of the throughput starts today/next week (a test-harness sketch follows below)
- So we are one of the first Tier-2s to join
- As declared, LCG pre-service == SC4, so it is important to be part of it early

Considerations
- Bottlenecks will probably not be on the network, but rather:
  - storage bandwidth
  - firewalls
- Where the network traffic to the other Tier-1s flows (even if through CERN) is irrelevant; what matters is
  - scaling with respect to the amount of data (storage bandwidth)
  - locality of access
- SWITCH has a Performance Enhancement Response Team to diagnose network problems, see:
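
For the throughput testing, something along the following lines could drive the new channel. This is a sketch only, assuming the gLite FTS command-line clients (glite-transfer-submit, glite-transfer-status); the FTS endpoint, the SRM URLs and the list of terminal job states are illustrative assumptions, not the real SC4 configuration.

    import subprocess, time

    # Hypothetical endpoints -- placeholders, not the real ones from this slide.
    FTS = "https://fts.example.fzk.de:8443/glite-data-transfer-fts/services/FileTransfer"
    SRC = "srm://srm.example.fzk.de/pnfs/fzk.de/data/sc4/test-%03d"
    DST = "srm://se01.example.cscs.ch/dpm/cscs.ch/home/dteam/sc4/test-%03d"

    jobs = []
    t0 = time.time()
    for i in range(10):  # ten pre-staged 1 GB test files
        job_id = subprocess.check_output(
            ["glite-transfer-submit", "-s", FTS, SRC % i, DST % i]).decode().strip()
        jobs.append(job_id)  # submit prints the job ID

    # poll until every job reaches a terminal state (state names assumed)
    TERMINAL = ("Done", "Finished", "FinishedDirty", "Failed", "Canceled")
    pending = set(jobs)
    while pending:
        time.sleep(30)
        for job_id in list(pending):
            state = subprocess.check_output(
                ["glite-transfer-status", "-s", FTS, job_id]).decode().strip()
            if state in TERMINAL:
                pending.discard(job_id)

    elapsed = time.time() - t0
    print("10 x 1 GB in %.0f s -> %.1f MB/s aggregate" % (elapsed, 10.0 * 1024 / elapsed))

Running several such batches in parallel would show whether the limit is the network or, as suspected above, the storage bandwidth behind the SRM endpoints.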

Status: EGEE

CSCS is now an official member of EGEE-II
- All legal issues have been sorted out
- The contract has been signed
- We will receive a total of 184'000 Euros over 2 years

Involvement
- SA1 – infrastructure: running the cluster, participating in GGUS
- NA3 – training: participating in training and education
- NA4 – applications: supporting the 'gridification' of applications

The mandate is not only HEP
- NA4 involvement in Biomed, Chemistry and Earth sciences
- Suggestion to extend the Phoenix cluster by 15-20% so that the non-HEP VOs can be supported; funding from CSCS/EGEE

Status: Portal

Installation of a P-GRADE portal at CSCS is almost complete
- alprose01.projects.cscs.ch will run the portal; the installation is in progress
- The portal will be hooked up to the LCG/EGEE environment for people to use and test
- It is an interesting and potentially useful extension to the running services; interest has already been expressed by others (ETHZ, UniGE)