
Present and Future Networks: an HENP Perspective
Harvey B. Newman, Caltech
HENP WG Meeting, Internet2 Headquarters, Ann Arbor, October 26

Next Generation Networks for Experiments
- Major experiments require rapid access to event samples and subsets from massive data stores: up to ~500 Terabytes in 2001, Petabytes by 2002, ~100 PB by 2007, to ~1 Exabyte by ~2012
  - Across an ensemble of networks of varying capability
- Network backbones are advancing rapidly to the 10 Gbps range: Gbps end-to-end requirements for data flows will follow
- Advanced integrated applications, such as Data Grids, rely on seamless, "transparent" operation of our LANs and WANs
  - With reliable, quantifiable (monitored), high performance
  - They depend in turn on in-depth, widespread knowledge of expected throughput
- Networks are among the Grid's basic building blocks
  - Where Grids interact by sharing common resources
  - To be treated explicitly, as an active part of the Grid design
- Grids are interactive, based on a variety of networked apps
  - Grid-enabled user interfaces; Collaboratories

LHC Computing Model: Data Grid Hierarchy (ca. 2005)
[Diagram: tiered hierarchy from the experiment down to physicists' workstations]
- Experiment / Online System -> CERN Computer Center and Offline Farm (Tier 0+1, ~25 TIPS): ~PByte/sec at the detector, ~100 MBytes/sec into the offline farm
- Tier 0+1 -> Tier 1 national centers (FNAL, IN2P3, INFN, RAL): ~2.5 Gbps
- Tier 1 -> Tier 2 regional centers: ~2.5 Gbps
- Tier 2 -> Tier 3 institutes (~0.25 TIPS each) -> Tier 4 workstations: Mbits/sec links
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels, with a local physics data cache
- CERN/Outside resource ratio ~1:2; Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1
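A quick back-of-the-envelope check (my own arithmetic, not from the slide) of what the sustained rates in this hierarchy imply in link speed and yearly data volume:

```python
# Illustrative arithmetic only: convert the sustained rates quoted in the
# hierarchy into link speeds and yearly data volumes.

def gbps(bytes_per_sec):
    """Convert a byte rate to Gbps."""
    return bytes_per_sec * 8 / 1e9

def petabytes_per_year(bytes_per_sec):
    """Sustained byte rate -> volume per year in PB."""
    seconds_per_year = 365 * 24 * 3600
    return bytes_per_sec * seconds_per_year / 1e15

online_to_tier0 = 100e6        # ~100 MBytes/sec from the online system
tier01_to_tier1_bps = 2.5e9    # ~2.5 Gbps Tier 0+1 -> Tier 1 links

print(f"100 MBytes/sec ~ {gbps(online_to_tier0):.1f} Gbps "
      f"~ {petabytes_per_year(online_to_tier0):.1f} PB/year if sustained")
print(f"A 2.5 Gbps link run flat out ~ "
      f"{petabytes_per_year(tier01_to_tier1_bps / 8):.1f} PB/year")
```

So the ~100 MBytes/sec stream alone fills a sizeable fraction of a 2.5 Gbps link and accumulates several PB per year, which is why the multi-Gbps tier links and PB-scale stores appear together in this model.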

Baseline BW for the US-CERN Transatlantic Link: TAN-WG (DOE+NSF)
[Chart: planned baseline bandwidth by year]
Plan: reach OC12 baseline in Spring 2002; then 2X per year

Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
[Table of per-experiment bandwidth requirements by year not reproduced here]
[*] Installed BW; maximum link occupancy of 50% assumed
The network challenge is shared by both next- and present-generation experiments
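The footnote's 50% maximum-occupancy assumption simply doubles the installed capacity relative to the sustained throughput actually required; a trivial sketch (the 1.5 Gbps input is an arbitrary example, not a value from the missing table):

```python
# Hedged illustration of the footnote: installed bandwidth must cover the
# required sustained throughput at an assumed maximum occupancy of 50%.

MAX_OCCUPANCY = 0.5  # from the slide's footnote

def installed_bw_needed(required_gbps, max_occupancy=MAX_OCCUPANCY):
    """Installed link capacity needed for a given sustained requirement."""
    return required_gbps / max_occupancy

# Example figure only, not taken from the table:
print(installed_bw_needed(1.5))  # 1.5 Gbps sustained -> 3.0 Gbps installed
```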

Total U.S. Internet Traffic (Source: Roberts et al., 2001)
[Chart: U.S. Internet traffic by year, on a log scale from ~10 bps to ~100 Pbps; series include ARPA & NSF data to '96 and new measurements]
- U.S. Internet traffic / voice crossover: August
- Measured growth ~2.8X/year; projected at 4X/year, up to the limit of the same % of GDP as voice

AMS-IX Internet Exchange Throughput: Accelerated Growth in Europe (NL)
[Charts: hourly traffic on 8/23 (y-axis 0 to 2.0 Gbps) and monthly traffic, showing roughly 4X growth]

GriPhyN iVDGL Map: US, UK, Italy, France, Japan, Australia
[Map legend: Tier0/1, Tier2 and Tier3 facilities; 10 Gbps, 2.5 Gbps, 622 Mbps and other links]
- International Virtual-Data Grid Laboratory
  - Conduct Data Grid tests "at scale"
  - Develop common Grid infrastructure
  - National and international scale Data Grid tests, leading to managed operations (GGOC)
- Components
  - Tier1, selected Tier2 and Tier3 sites
  - Distributed Terascale Facility (DTF)
  - Gbps networks: US, Europe, transoceanic
- Possible new partners
  - Brazil T1, Russia T1, Pakistan T2, China T2, …

Abilene and Other Backbone Futures
- Abilene partnership with Qwest extended through 2006
- Backbone to be upgraded to 10 Gbps in three phases, complete by October 2003
  - Detailed design being completed now
  - GigaPoP upgrade to start in February 2002
- Capability for flexible provisioning in support of future experimentation in optical networking
  - In a multi-λ infrastructure
- Overall approach to the new technical design and business plan is an incremental, non-disruptive transition
- Also: GEANT in Europe; Super-SINET in Japan; advanced European national networks (DE, NL, etc.)

TEN-155 and GEANT: European A&R Networks
- GEANT: from 9/01, 10 & 2.5 Gbps core
- TEN-155: OC12 core
- European A&R networks are advancing rapidly

National Research Networks in Japan
- SuperSINET
  - Start of operation January 2002
  - Support for 5 important areas: HEP, Genetics, Nano Technology, Space/Astronomy, GRIDs
  - Provides 10 Gbps IP connection and direct inter-site GbE links; some connections to 10 GbE in JFY2002
- HEPnet-J
  - Will be reconstructed with MPLS-VPN in SuperSINET
- IMnet
  - Will be merged into SINET/SuperSINET
[Map: IP/WDM paths, IP routers and OXCs linking Tokyo, Nagoya and Osaka with sites including Osaka U, Kyoto U, ICR Kyoto-U, Nagoya U, NIFS, NIG, KEK, Tohoku U, IMS, U-Tokyo, NAO, NII Hitotsubashi, NII Chiba, ISAS]

STARLIGHT: The Next Generation Optical STARTAP
StarLight, the optical STAR TAP, is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications, in partnership with CANARIE (Canada), SURFnet (Netherlands), and soon CERN.
- Started this summer
- Existing fiber: Ameritech, AT&T, Qwest; MFN, Teleglobe, Global Crossing and others
- Main distinguishing features:
  - Neutral location (Northwestern University)
  - 40 racks for co-location
  - 1/10 Gigabit Ethernet based
  - Optical switches for advanced experiments (GMPLS, OBGP)
- 2 x 622 Mbps ATM connections to the STAR TAP
- Developed by EVL at UIC, iCAIR at NWU, and the ANL/MCS Division

DataTAG Project
[Map: transatlantic testbed linking STARLIGHT/STAR-TAP (Chicago) and New York with Abilene, ESnet and MREN in the US, and Geneva with GEANT, SuperJANET4 (UK), SURFnet (NL), GARR-B (IT) and Renater (FR) in Europe]
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL), and INFN (IT)
- Main aims:
  - Ensure maximum interoperability between US and EU Grid projects
  - Transatlantic testbed for advanced network research
- 2.5 Gbps wavelength-based US-CERN link 7/2002 (higher in 2003)

Daily, Weekly, Monthly and Yearly Statistics on the 155 Mbps US-CERN Link
[Traffic plots showing the Mbps used routinely]
BW upgrades quickly followed by upgraded production use

Throughput Changes with Time
- Link and route upgrades: factors of 3-16 in 12 months
- Improvements come in steps at the times of upgrades
  - 8/01: 105 Mbps reached with 30 streams: SLAC-IN2P3
  - 9/1/01: 102 Mbps reached in one stream: Caltech-CERN
- See edu/monitoring/bulk/
- Also see the Internet2 E2E Initiative

Caltech to SLAC on CALREN2: A Shared Production OC12 Network
- SLAC: 4-CPU Sun; Caltech: 1 GHz PIII; GigE interfaces
- Need large windows; multiple streams help
- Bottleneck bandwidth ~320 Mbps; RTT 25 msec; window > 1 MB needed for a single stream
- Results vary by a factor of up to 5 over time, due to sharing with campus traffic
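The ">1 MB window" figure is just the bandwidth-delay product of the quoted path; a minimal sketch of that arithmetic (my own illustration, using only the bottleneck bandwidth and RTT stated above):

```python
# Bandwidth-delay product: the TCP window needed to keep a path full.
# Figures below are the ones quoted on the slide for the Caltech-SLAC path.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bytes in flight needed to fill a pipe of the given bandwidth and RTT."""
    return bandwidth_bps * rtt_seconds / 8

caltech_slac = bdp_bytes(320e6, 0.025)   # ~320 Mbps bottleneck, 25 ms RTT
print(f"Caltech-SLAC window: {caltech_slac / 1e6:.1f} MB")   # ~1.0 MB

# With N parallel streams, each stream only needs ~1/N of this window,
# which is one reason multiple streams help when per-socket windows are capped.
for n in (1, 4, 8):
    print(f"{n} streams -> ~{caltech_slac / n / 1e3:.0f} kB per stream")
```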

Max. Packet Loss Rates for a Given Throughput [Mathis: BW < MSS / (RTT * sqrt(Loss))]
- 1 Gbps LA-CERN throughput means extremely low packet loss: ~1E-8 with standard packet size
- According to the equation, a single stream with 10 Gbps throughput requires a packet loss rate of 7E-11 with standard-size packets
  - 1 packet lost per 5 hours!
- LARGE windows: 2.5 Gbps Caltech-CERN -> 53 MBytes
- Effects of a packet drop (link error) on a 10 Gbps link, with multiplicative decrease / additive increase (MDAI):
  - Halve the rate: to 5 Gbps
  - It will take ~4 minutes for TCP to ramp back up to 10 Gbps
- Large segment sizes (Jumbo Frames) could help, where supported
- Motivation for exploring TCP variants and other protocols
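A short sketch of the Mathis et al. relation behind these numbers (my illustration; the 1500-byte MSS and the ~150 ms transatlantic RTT are assumptions, chosen to roughly reproduce the figures quoted above):

```python
# Mathis et al. approximation:  BW < MSS / (RTT * sqrt(loss))
# Rearranged: the maximum tolerable loss rate for a target throughput is
#   loss_max = (MSS / (RTT * BW))^2

MSS_BITS = 1460 * 8   # standard ~1500-byte packets (assumed)
RTT = 0.150           # assumed LA-CERN round-trip time, ~150 ms

def max_loss_rate(target_bps, mss_bits=MSS_BITS, rtt=RTT):
    return (mss_bits / (rtt * target_bps)) ** 2

for g in (1, 2.5, 10):
    print(f"{g:>4} Gbps -> loss rate below ~{max_loss_rate(g * 1e9):.1e}")
# Roughly 1e-8..1e-9 at 1 Gbps and a few times 1e-11 at 10 Gbps: the orders
# of magnitude quoted on the slide.

# At 10 Gbps with 1500-byte packets (~830,000 packets/s), a 7e-11 loss rate
# means roughly one lost packet every few hours:
pkts_per_sec = 10e9 / (1500 * 8)
print(f"~{1 / (7e-11 * pkts_per_sec) / 3600:.1f} hours between losses")

# The window needed to sustain 2.5 Gbps across the same RTT is the
# bandwidth-delay product, close to the ~53 MB quoted above (which
# corresponds to a slightly longer RTT):
print(f"2.5 Gbps x {RTT*1000:.0f} ms -> {2.5e9 * RTT / 8 / 1e6:.0f} MB window")
```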

Key Network Issues & Challenges
Net infrastructure requirements for high throughput:
- Careful router configuration; monitoring
- Enough router "horsepower" (CPUs, buffer space)
- Sufficient server and client CPU, I/O and NIC throughput
- Packet loss must be ~zero (well below 0.1%)
  - i.e. no "commodity" networks
- No local infrastructure bottlenecks
  - Gigabit Ethernet "clear path" between selected host pairs
  - To 10 Gbps Ethernet by ~2003
- TCP/IP stack configuration and tuning is absolutely required (see the sketch below)
  - Large windows
  - Multiple streams
- End-to-end monitoring and tracking of performance
- Close collaboration with local and "regional" network engineering staffs (e.g. router and switch configuration)
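As an illustration of the "large windows, multiple streams" item above, here is a minimal sketch only: the host name, port, buffer sizes and stream count are placeholders, and production transfers of that era relied on tools like iperf and kernel-level tuning rather than application code like this.

```python
# Hedged sketch: request large per-socket buffers and open several parallel
# TCP streams to one destination. Host, port and sizes are placeholders.
import socket
import threading

DEST = ("transfer.example.org", 5001)   # hypothetical far-end bulk-data sink
WINDOW_BYTES = 8 * 1024 * 1024          # ask for ~8 MB send/receive buffers
N_STREAMS = 8                           # parallel streams share the path's BDP
CHUNK = b"\0" * 65536

def one_stream(total_bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request big buffers; the kernel may clamp these to its configured limits.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW_BYTES)
    s.connect(DEST)
    sent = 0
    while sent < total_bytes:
        sent += s.send(CHUNK)
    s.close()

threads = [threading.Thread(target=one_stream, args=(100 * 1024 * 1024,))
           for _ in range(N_STREAMS)]
for t in threads: t.start()
for t in threads: t.join()
```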

Key Network Issues & Challenges (continued)
- None of this scales from 0.08 Gbps to 10 Gbps
  - New (expensive) hardware
  - The last-mile, and tenth-mile, problem
  - Firewall performance; security issues
- Concerns
  - The "Wizard Gap" (ref: Matt Mathis; Jason Lee)
  - RFC 2914 and the network police; "clever" firewalls
  - Net infrastructure providers (local, regional, national, international) who may or may not want (or feel able) to accommodate HENP "bleeding edge" users
  - New TCP/IP developments (or TCP alternatives) are required for multi-user Gbps links [UDP/RTP?]

Internet2 HENP WG [*]
- To help ensure that the required
  - national and international network infrastructures (end-to-end),
  - standardized tools and facilities for high performance and end-to-end monitoring and tracking, and
  - collaborative systems
  are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the general needs of our scientific community
- To carry out these developments in a way that is broadly applicable across many fields
- Forming an Internet2 WG as a suitable framework
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Secretary: J. Williams (Indiana); with thanks to Rob Gardner (Indiana)

Network-Related Hard Problems
- "Query estimation": reliable estimate of performance
  - Throughput monitoring, and also modeling
  - Source and destination host & TCP-stack behavior
- Policy versus technical capability intersection
  - Strategies (new algorithms)
  - Authentication, authorization, priorities and quotas across sites
  - Metrics of performance
  - Metrics of conformance to policy
- Key role of simulation (for Grids as a whole): "Now-casting"?

US CMS Remote Control Room for the LHC
US CMS will use the CDF/KEK remote control room concept for Fermilab Run II as a starting point. However, we will (1) expand the scope to encompass a US-based physics group and US LHC accelerator tasks, and (2) extend the concept to a Global Collaboratory for real-time data acquisition and analysis.

Networks, Grids and HENP
- Next-generation 10 Gbps network backbones are almost here: in the US, Europe and Japan
  - First stages arriving in 6-12 months
- Major international links at Gbps speeds in 0-12 months
- There are problems to be addressed in other world regions
- Regional, last-mile and network bottlenecks and quality are all on the critical path
- High (reliable) Grid performance across networks means
  - End-to-end monitoring (including source/destination host software)
  - Getting high-performance toolkits into users' hands
  - Working with Internet2 E2E, the HENP WG and DataTAG to get this done
- iVDGL as an inter-regional effort, with a GGOC
  - Among the first to face and address these issues

Agent-Based Distributed System: JINI Prototype (Caltech/NUST)
- Includes "Station Servers" (static) that host mobile "Dynamic Services"
- Servers are interconnected dynamically to form a fabric in which mobile agents can travel with a payload of physics analysis tasks
- The prototype is highly flexible and robust against network outages
- Amenable to deployment on leading-edge and future portable devices (WAP, iAppliances, etc.)
  - "The" system for the travelling physicist
- Studies with this prototype use the MONARC Simulator, and build on the SONN study
[Diagram: Station Servers connected to Lookup Services and a Lookup Discovery Service, with proxy exchange, registration, remote notification and service listeners]

6800 hosts; 36 reflectors (7 on Internet2); users in 56 countries; annual growth 250%