File Transfer Software and Service SC3
Gavin McCance – JRA1 Data Management Cluster
Service Challenge Meeting, April, Taipei
Enabling Grids for E-sciencE (www.eu-egee.org) – INFSO-RI-508833

Outline
- Overview of components
- Tier-0 / Tier-1 / Tier-2 deployment proposals
- Initial test / early-access setup
- Experiment integration

FTS service
- LCG created a set of requirements based on the Robust Data Transfer Service Challenge.
- The LCG and gLite teams translated these into a detailed architecture and design document for the software and the service.
  – A prototype (radiant) was created to test the architecture and was used in SC1 and SC2.
- The architecture and design have worked well for SC2.
  – gLite FTS ("File Transfer Service") is an instantiation of the same architecture and design, and is the candidate for use in SC3.
- The current version of FTS and the SC2 radiant software are interoperable.

FTS service
- The File Transfer Service is a fabric service.
- It provides point-to-point movement of SURLs (a sketch of the job model follows below).
  – It aims to provide reliable file transfer between sites, and that's it!
  – It allows sites to control their resource usage.
  – It does not do 'routing' (e.g. like PhEDEx).
  – It does not deal with GUIDs, LFNs, datasets or collections.
- It is a fairly simple service that gives sites a reliable and manageable way of serving file-movement requests from their VOs.
- Together with the experiments we are identifying the places in the software where extra functionality can be plugged in:
  – how the VO software frameworks can load the system with work;
  – where VO-specific operations (such as cataloguing) can be plugged in, if required.
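To make "point-to-point movement of SURLs" concrete, here is a minimal, hypothetical sketch of what an FTS job boils down to: a list of source/destination SURL pairs for one VO, with no GUIDs, LFNs, datasets or routing. The class and field names are illustrative only and do not reflect the actual FTS schema or API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of an FTS job: purely SURL -> SURL copies.
# Names are illustrative; the real FTS schema/API may differ.

@dataclass
class FileCopy:
    source_surl: str        # SRM URL at the source site
    dest_surl: str          # SRM URL at the destination site
    state: str = "Pending"  # e.g. Pending -> Active -> Done / Failed

@dataclass
class TransferJob:
    vo: str                                      # VO that submitted the job
    files: List[FileCopy] = field(default_factory=list)
    # Note what is *not* here: no GUIDs, no LFNs, no datasets, no routing.
    # Catalogue updates are left for the VO frameworks to plug in.

job = TransferJob(vo="lhcb", files=[
    FileCopy("srm://source.example.org/data/file1",
             "srm://dest.example.org/data/file1"),
])
```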

Components…
- A channel is a point-to-point network connection.
  – Dedicated pipe: CERN to Tier-1 distribution.
  – Non-dedicated pipe: Tier-2s uploading to a Tier-1.
- The focus of this presentation is the deployment of the gLite FTS software.
  – We distinguish server software and client software.
  – We assume suitable SRM clusters are deployed at the source and destination of the pipe.
  – We assume a MyProxy server is deployed somewhere.

Server and client software
- The server software lives at one end of the pipe.
  – It performs a third-party copy…
  – The proposed deployment models take the highest-tier approach.
- The client software can live at both ends.
  – (…or indeed anywhere)
  – We propose to put it at both ends of the pipe:
  – for administrative channel management;
  – for basic submission and monitoring of jobs.

Single channel

Multiple channels
- A single set of servers can manage multiple channels from a site (see the sketch below).
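As an illustration of one server instance managing several channels, here is a hypothetical channel table for a Tier-1. The channel names, site labels and attributes are made up for the example and are not the real FTS channel schema.

```python
# Hypothetical channel table for one FTS server instance at a Tier-1.
# Names, states and attributes are illustrative only.
channels = {
    "CERN-TIER1":   {"source": "CERN",   "dest": "TIER1",  "state": "Active", "concurrent_files": 20},
    "TIER1-TIER2A": {"source": "TIER1",  "dest": "TIER2A", "state": "Active", "concurrent_files": 5},
    "TIER2B-TIER1": {"source": "TIER2B", "dest": "TIER1",  "state": "Drain",  "concurrent_files": 5},
}

def channels_managed_here(site: str):
    """Return the channels in which this site participates; in the proposed
    highest-tier approach the Tier-1 server manages all of these."""
    return {name: c for name, c in channels.items()
            if site in (c["source"], c["dest"])}

print(channels_managed_here("TIER1"))  # one set of servers, several channels
```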

What you need to run the server (Tier-0 and Tier-1 in the proposal):
- An Oracle database to hold the state.
  – MySQL is on the list but low priority unless someone screams.
- A transfer server to run the transfer agents (see the sketch below).
  – Agents are responsible for assigning jobs to the channels managed by that site.
  – Agents are responsible for actually running the transfers (or for delegating them to srm-cp).
- An application server (tested with Tomcat 5).
  – To run the submission and monitoring portal, i.e. the thing you use to talk to the system.
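As a rough illustration of what a transfer agent does (pick up queued jobs from the state database, serve only the channels managed by this site, then run or delegate each copy), here is a minimal Python sketch. The helper names, job layout, polling interval and the idea of shelling out to an SRM copy client are assumptions for illustration, not the actual agent code.

```python
import subprocess
import time

def run_copy(source_surl: str, dest_surl: str) -> bool:
    """Delegate one copy to an SRM copy client (command name assumed;
    the slide refers to it as srm-cp)."""
    result = subprocess.run(["srmcp", source_surl, dest_surl])
    return result.returncode == 0

def agent_loop(my_channels, fetch_pending_jobs, mark_done, mark_failed):
    """Sketch of the agent cycle. fetch_pending_jobs / mark_done / mark_failed
    stand in for queries against the state database (Oracle in the proposal)."""
    while True:
        for job in fetch_pending_jobs():
            if job["channel"] not in my_channels:
                continue  # only serve channels managed by this site
            for f in job["files"]:
                ok = run_copy(f["source"], f["dest"])
                (mark_done if ok else mark_failed)(job, f)
        time.sleep(30)  # naive poll interval; the real agents are more sophisticated
```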

What you need to run the client (Tier-0, Tier-1 and Tier-2 in the proposal):
- Client command lines installed (a hedged usage example follows below).
  – Some way to configure them (where's my FTS service portal?)
  – Currently a static file or the 'gLite configuration service' (R-GMA).
  – BDII? (not integrated just now)
- Who will use the client software?
  – Site administrators: status and control of the channels they participate in.
  – Production jobs: to move locally created files.
  – Or… the overall experiment software frameworks will submit directly (via the API) to the relevant channel portal, or even into the relevant channel database (?).
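For illustration, here is a hedged sketch of "client command lines plus a static configuration", wrapping the gLite FTS client commands of that generation (glite-transfer-submit and glite-transfer-status). The endpoint URL is a placeholder, and the exact options and output format should be checked against the client documentation rather than taken from this sketch.

```python
import subprocess

# Placeholder FTS portal endpoint that a static configuration file would provide.
FTS_ENDPOINT = "https://fts.example.org:8443/fts/services/FileTransfer"

def submit(source_surl: str, dest_surl: str) -> str:
    """Submit one copy and return the job identifier printed by the client.
    Assumes glite-transfer-submit with an -s endpoint option; flags may differ."""
    out = subprocess.run(
        ["glite-transfer-submit", "-s", FTS_ENDPOINT, source_surl, dest_surl],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

def status(job_id: str) -> str:
    """Query the state of a previously submitted job."""
    out = subprocess.run(
        ["glite-transfer-status", "-s", FTS_ENDPOINT, job_id],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

Site administrators would use the analogous channel-management commands against the same portal; production jobs or experiment frameworks would call the submission interface directly.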

Initial use models considered
- Tier-0 to Tier-1 distribution
  – Proposal: put the server at the Tier-0.
  – This was the model used in SC2.
- Tier-1 to Tier-2 distribution
  – Proposal: put the server at the Tier-1 (push).
  – This is analogous to the SC2 model.
- Tier-2 to Tier-1 upload
  – Proposal: put the server at the Tier-1 (pull).
- Other models?
  – Probably…
  – For SC3 or for the service phase beyond?
(The sketch below summarises where the server runs in each model.)
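A compact way to read these proposals, as a hypothetical summary structure (the tier labels are placeholders; in every case the server sits at the higher tier of the channel):

```python
# Hypothetical summary of the proposed deployment models.
use_models = [
    {"channel": "Tier-0 -> Tier-1", "server_at": "Tier-0", "mode": "push",
     "note": "model used in SC2"},
    {"channel": "Tier-1 -> Tier-2", "server_at": "Tier-1", "mode": "push",
     "note": "analogous to the SC2 model"},
    {"channel": "Tier-2 -> Tier-1", "server_at": "Tier-1", "mode": "pull",
     "note": "upload of locally produced files"},
]
```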

Test-bed
- Initial small-scale test setups have been running at CERN during and since SC2 to determine reliability as new versions come out.
  – This small test setup will continue to smoke-test new versions.
- The test setup is being expanded as we head towards SC3.
  – Allows greater stress testing of the software.
  – Allows us to gain further operational experience and develop operating procedures.
  – Critical: allows the experiments to get early access to the service to understand how their frameworks can make use of it.

Testing plan
- Move the new server software onto the CERN T0 radiant cluster.
  – Provisioning of the necessary resources is underway.
  – Internal tests in early May.
  – Staged opening of the evaluation setup to willing experiments in mid May.
- Start testing with agreed T1 sites.
  – As and where resources permit.
  – Same topology as SC2: transfer software only at the CERN T0.
  – Pushing data to T1s mid / late May.
  – Which T1s? What schedule?
- Work with agreed T1 sites to deploy the server software (which T1s?).
  – Identify one or two T2 sites to test transfers with (which?).
  – Early June.
  – Tutorials to arrange for May.

Experiment involvement
- Schedule experiments onto the evaluation setup.
- Some consulting on how to integrate the frameworks.
  – Discuss with the service challenge / development team.
  – Ideas were already presented at the LCG storage management workshop.
  – Comments: seems fairly easy, in principle; different timescales / priorities for this.
- Doing the actual work:
  – Should be staged: people are busy, and it is easier to debug one framework at a time.
  – Work out the schedule ASAP.

Individual experiments
- Technical discussions to happen…
- …this will be easier once you have an evaluation setup you can see.

Summary
- Outlined the server and client installs.
- Propose the server at Tier-0 and Tier-1.
  – Oracle DB, Tomcat application server, transfer node.
- Propose client tools at T0, T1 and T2.
  – This is a UI / WN type install.
- Evaluation setup
  – Initially at the CERN T0, interacting with T1s a la SC2.
  – Expand to a few agreed T1s interacting with agreed T2s.
- Experiment interaction
  – Schedule technical discussions and work.