Presentation at University of Twente, The Netherlands

The DataTAG Project
http://www.datatag.org/
Presentation at University of Twente, The Netherlands
17 September 2002
J.P. Martin-Flatin and Olivier H. Martin, CERN, Switzerland

Project Partners
- EU-funded partners: CERN (CH), INFN (IT), INRIA (FR), PPARC (UK) and University of Amsterdam (NL)
- U.S.-funded partners: Caltech, UIC, UMich, Northwestern University, StarLight
- Associate partners: SLAC, ANL, FNAL, Canarie, etc.
- Project coordinator: CERN
- Contact: datatag-office@cern.ch

Budget of EU Side
- Budget: EUR 3.98M
- Funded manpower: 15 FTE/year (21 FTEs recruited)
- Start date: 1 January 2002
- Duration: 2 years

Three Objectives
- Build a testbed to experiment with massive file transfers across the Atlantic
- High-performance protocols for the gigabit networks underlying data-intensive Grids
- Interoperability between several major Grid projects in Europe and the USA

Grids
[Diagram: DataTAG/iVDGL interoperability setup. DataTAG side: GIIS edt004.cnaf.infn.it (mds-vo-name=Datatag and mds-vo-name=glue), a Resource Broker, gatekeepers edt004.cnaf.infn.it and grid006f.cnaf.infn.it with LSF, PBS and Fork/PBS computing elements, worker nodes edt001.cnaf.infn.it and edt002.cnaf.infn.it, plus the Padova-site gatekeeper. iVDGL side: GIIS giis.ivdgl.org (mds-vo-name=ivdgl-glue), US-CMS and US-ATLAS gatekeepers, and hosts hamachi.cs.uchicago.edu, dc-user.isi.edu and rod.mcs.anl.gov with Condor, LSF and Fork job managers.]

Testbed

Objectives
- Provisioning of a 2.5 Gbit/s transatlantic circuit between CERN (Geneva) and StarLight (Chicago)
- Dedicated to research (no production traffic)
- Multi-vendor testbed with layer-2 and layer-3 capabilities: Cisco, Alcatel, Juniper
- Testbed open to other Grid projects
- Collaboration with GEANT

2.5 Gbit/s Transatlantic Circuit
- Operational since 20 August 2002 (T-Systems); delayed by the KPNQwest bankruptcy
- Routing plan developed for access across GEANT
- Circuit initially connected to Cisco 76xx routers (layer 3)
- High-end PC servers at CERN and StarLight: SysKonnect GbE NICs can saturate the circuit with TCP traffic
- Layer-2 equipment deployment under way
- Full testbed deployment scheduled for 31 October 2002

Why Yet Another 2.5 Gbit/s Transatlantic Circuit?
- Most existing or planned 2.5 Gbit/s transatlantic circuits carry production traffic, so they are not suitable for advanced networking experiments
- We need operational flexibility: deploy new equipment (routers, GMPLS-capable multiplexers), activate new functionality (QoS, MPLS, distributed VLAN)
- The only known exception to date is the SURFnet circuit between Amsterdam and Chicago (StarLight)

Major R&D 2.5 Gbit/s Circuits Between Europe & USA
[Map: R&D connectivity between Europe and the USA, linking SuperJANET4 (UK), GARR-B (IT), INRIA/VTHD/ATRIUM (FR), SURFnet (NL) and CERN (CH) via GEANT over 3 x 2.5 Gbit/s circuits to New York, and on to Abilene, Canarie, ESnet, MREN and StarLight.]

Network Research

DataTAG Activities
- Enhance TCP performance (modified Linux kernel)
- Monitoring
- QoS: LBE (Scavenger), as sketched below
- Bandwidth reservation: AAA-based bandwidth on demand; lightpath managed as a Grid resource
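One concrete piece of the QoS work above is LBE (Scavenger), which marks traffic as less-than-best-effort with the DSCP codepoint CS1 so routers may starve it under load. A minimal sketch of how an application could mark its socket; this is an illustration, not DataTAG's actual tooling:

```python
# Mark a socket Less-than-Best-Effort (Scavenger): DSCP CS1 = 001000b.
# The DSCP occupies the upper six bits of the TOS byte, so CS1 -> 8 << 2.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 8 << 2)  # DSCP 8 (CS1)
```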

TCP Performance Issues
- TCP's current congestion control algorithms (AIMD) are not suited to gigabit networks: recovery from a packet loss takes a long time
- Line errors are interpreted as congestion
- Delayed ACKs + large window size + large RTT = problem
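To make the recovery problem concrete, here is a minimal sketch of the standard AIMD rule, assuming typical values (MSS = 1,460 bytes, RTT = 120 ms); it is an illustration, not the project's actual kernel modification:

```python
# Minimal sketch of standard TCP AIMD congestion control.
# cwnd is tracked in MSS units: it grows by one MSS per RTT
# (additive increase) and is halved on loss (multiplicative decrease).

MSS = 1460      # segment size in bytes (typical Ethernet value)
RTT = 0.120     # round-trip time in seconds (CERN <-> Chicago scale)

def aimd_step(cwnd_mss, loss_detected):
    """Return the new congestion window after one RTT."""
    if loss_detected:
        return cwnd_mss / 2     # multiplicative decrease
    return cwnd_mss + 1         # additive increase: +1 MSS per RTT

# A single loss costs half the window; rebuilding it takes cwnd/2 RTTs.
cwnd = 2.5e9 * RTT / (8 * MSS)          # window that fills 2.5 Gbit/s
print(f"window = {cwnd:,.0f} MSS, "
      f"recovery = {(cwnd / 2) * RTT / 60:.0f} min after one loss")
```

This is the same arithmetic behind the responsiveness table two slides below, which quotes ~23 min at 2.5 Gbit/s.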

Single vs. Multiple Streams: Effect of a Single Packet Loss
[Plot: throughput in Gbit/s vs. time for 10, 5 and 1 parallel TCP streams, each hit by a single packet loss (RTT = 120 ms, MSS = 1,500 bytes). The more streams, the smaller the aggregate drop and the faster the recovery: average throughput is about 7.5 Gbit/s with 10 streams, 6.25 Gbit/s with 5, and 3.75-4.375 Gbit/s with a single stream, which needs T =~ 45 min to recover.]

Responsiveness

Capacity      RTT      # inc      Responsiveness
9.6 kbit/s    40 ms    1          40 ms
10 Mbit/s     20 ms    8          150 ms
622 Mbit/s    120 ms   ~2,900     ~6 min
2.5 Gbit/s    120 ms   ~11,600    ~23 min
10 Gbit/s     120 ms   ~46,200    ~1h 30min

Increment size = MSS = 1,460 bytes. (# inc = number of one-MSS additive-increase steps needed to recover from a loss; responsiveness = # inc x RTT.)
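The table is straightforward arithmetic on the AIMD rule above: the sender needs about C x RTT / (2 x 8 x MSS) one-MSS increments, one per RTT. This sketch reproduces the table to within roughly 10% (my own illustration, not DataTAG code):

```python
# Recompute the responsiveness table: time for standard TCP to rebuild
# half the bandwidth-delay product at one MSS per RTT.

MSS = 1460  # bytes, as on the slide

def responsiveness(capacity_bps, rtt_s):
    window_mss = capacity_bps * rtt_s / (8 * MSS)  # pipe size in segments
    increments = max(1.0, window_mss / 2)          # at least one step
    return increments, increments * rtt_s          # one step per RTT

for cap, rtt in [(9.6e3, 0.040), (10e6, 0.020), (622e6, 0.120),
                 (2.5e9, 0.120), (10e9, 0.120)]:
    inc, t = responsiveness(cap, rtt)
    print(f"{cap:>14,.0f} bit/s  RTT {rtt * 1e3:>3.0f} ms  "
          f"~{inc:>8,.0f} inc  ~{t:>7.1f} s")
```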

Research Directions
- New fairness principle
- Change multiplicative decrease: do not divide by two
- Change additive increase: binary search; local and global stability (Caltech technical report CALT-68-2398)
- Estimation of the available capacity and the bandwidth*delay product: on the fly or cached
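As an illustration of the first two bullets (not the algorithm of CALT-68-2398), a variant could shrink the window by a gentler factor than 1/2 and grow it in larger steps; the values of beta and step here are assumptions for the sketch:

```python
# Hypothetical AIMD variant: gentler decrease, larger increase step.
# beta and step are invented parameters, not DataTAG's actual choices.

def modified_aimd_step(cwnd_mss, loss_detected, beta=0.875, step=8):
    if loss_detected:
        return cwnd_mss * beta   # do not divide by two: keep 87.5%
    return cwnd_mss + step       # rebuild the pipe in far fewer RTTs

# At 2.5 Gbit/s (window ~25,700 MSS, RTT = 120 ms) a loss now costs
# ~3,200 MSS instead of ~12,800, repaired in ~400 RTTs (~48 s)
# rather than ~23 min.
```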

Grid Interoperability

Objectives
- Interoperability between European and US Grids
- Middleware integration and coexistence
- GLUE (Grid Lab Uniform Environment): integration & standardization, testbed and demo
- Enable a set of applications to run on the transatlantic testbed:
  - CERN LHC experiments: ATLAS, CMS, ALICE
  - other experiments: CDF, D0, BaBar, Virgo, LIGO, etc.

GLUE Relationships
[Diagram: relationships around GLUE. The US projects GriPhyN, PPDG and iVDGL handle integration with HEP applications and other experiments; interoperability and standardization are coordinated through the HICB/HIJTB, where GLUE is defined.]

Interoperability Framework
[Diagram: layered view. Grid Resources (CE, SE, NE) at the bottom; Core Services above them, which must be common to all Grids; Optimization Services on top, which must coexist.]

Grid Software Architecture
[Diagram: a Scheduler handles Grid request submission, replica management and resource discovery, under common Security and Policy, via Access Protocols and Information Protocols; these front the Compute, Storage, Network and Catalog Services, which manage Computers, Storage Systems and Network Devices.]

Status of GLUE Activities
- Resource discovery and GLUE schema: computing element, storage element, network element
- Authentication across organizations
- Minimal authorization
- Unified service discovery
- Common software deployment procedures

Resource Discovery and GLUE Schema

Representation of computing resources in the GLUE schema:

Structure           Description
computing element   Entry point into the queuing system
cluster             Container: groups sub-clusters or nodes
sub-cluster         Homogeneous collection of nodes
host                Physical computing node
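The containment hierarchy in this table can be sketched as data structures; the class and field names below are my own illustration, not actual GLUE schema attribute names:

```python
# Illustrative model of the GLUE computing-resource hierarchy.
# Names are invented for the sketch, not real GLUE schema attributes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Host:                    # physical computing node
    name: str

@dataclass
class SubCluster:              # homogeneous collection of nodes
    hosts: List[Host] = field(default_factory=list)

@dataclass
class Cluster:                 # container: groups sub-clusters or nodes
    sub_clusters: List[SubCluster] = field(default_factory=list)

@dataclass
class ComputingElement:        # entry point into the queuing system
    queue: str                 # e.g. a PBS or LSF queue name
    cluster: Cluster

ce = ComputingElement(
    queue="pbs-short",         # hypothetical queue name
    cluster=Cluster([SubCluster([Host("edt001.cnaf.infn.it"),
                                 Host("edt002.cnaf.infn.it")])]))
```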

Future GLUE Activities
- Data movement: GridFTP, replica location service
- Advanced authorization: cross-organization, community-based authorization

Demos
- iGrid 2002: US16 with University of Michigan; US14 with Caltech and ANL; CA03 with Canarie
- IST 2002
- SC 2002

Summary
- Gigabit testbed for data-intensive Grids: layer 3 in place, layer 2 being provisioned
- Modified version of TCP to improve performance
- Grid interoperability:
  - GLUE schema for resource discovery
  - working on common authorization solutions
  - evaluation of software deployment tools
  - first interoperability tests on heterogeneous transatlantic testbeds