Achieving Dependable Bulk Throughput in a Hybrid Network
Guy Almes, Aaron Brown, Martin Swany
Joint Techs Meeting, Univ Wisconsin -- 17 July 2006


Outline
- Observations:
  - on user needs and technical opportunities
  - on TCP dynamics
- Notion of a Session Layer
  - the obvious application
  - a stronger application
- Phoebus as HOPI experiment
  - deployment
  - early performance results
- Phoebus as an exemplar hybrid network

On User Needs
In a variety of cyberinfrastructure-intensive applications, dependable high-speed wide-area bulk data flows are of critical value.
Examples:
- Terabyte data sets in HPC applications
- Data-intensive TeraGrid applications
- Access to the Sloan Digital Sky Survey and similar very large data collections
Also, we stress 'dependable' rather than 'guaranteed' performance.
As science becomes more data-intensive, these needs will be prevalent in many science disciplines.

On Technology Drivers
Network capacity increases, but user throughput increases more slowly (source: DOE).
The cause of this gap relates to TCP dynamics.

On TCP Dynamics
Consider the Mathis Equation for Reno.
Focus on bulk data flows over wide areas.
How can we attack it?
- Reduce non-congestive packet loss (a lot!)
- Raise the MTU (but it only helps if applied end-to-end!)
- Improve TCP algorithms (e.g., FAST, BIC) - RTT is still a factor
- Use end-to-end circuits
- Decrease RTT??
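For reference, the Mathis et al. approximation bounds steady-state Reno throughput as:

```latex
\mathrm{BW} \;\lesssim\; \frac{\mathrm{MSS}}{\mathrm{RTT}\,\sqrt{p}}
```

where MSS is the maximum segment size, RTT the round-trip time, and p the packet loss probability; each of the attacks listed above targets one of these three terms.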

Situation for running example

The Transport-Layer Gateway
A session is the end-to-end chain of segment-specific transport connections.
- In our early work, each of these transport connections is a conventional TCP connection
- Each transport-level gateway (depot) receives data from one connection and pipes it to the next connection in the chain
(Figure: the protocol stack at each hop - Physical, Data Link, Network, Transport - with the Session layer spanning the gateways in user space)
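As an illustration of this pipe-and-forward idea (a minimal sketch, not the actual Phoebus implementation; the function names and buffer sizes are our own), a depot can be written as a user-space splice between two TCP spans:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from one TCP span to the next until the sender finishes."""
    while True:
        buf = src.recv(65536)
        if not buf:
            break
        dst.sendall(buf)
    dst.shutdown(socket.SHUT_WR)  # propagate end-of-stream to the next span

def depot(listen_addr, next_hop):
    """Accept one incoming span and splice it to the next hop in the chain."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(1)
    upstream, _ = srv.accept()
    downstream = socket.create_connection(next_hop)
    # One thread per direction: each TCP span is terminated locally at the
    # depot, so loss and RTT on one span never stall the other span.
    fwd = threading.Thread(target=pipe, args=(upstream, downstream))
    rev = threading.Thread(target=pipe, args=(downstream, upstream))
    fwd.start()
    rev.start()
    fwd.join()
    rev.join()
    upstream.close()
    downstream.close()
    srv.close()
```

Chaining two such depots, one near each end host, yields the multi-span session described here: each span's congestion control runs independently over its own, much smaller, RTT.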

The Logistical Session Layer

Obvious Application
Place a depot half-way between hosts A and B, thus cutting the RTT roughly in half.
Bad news: only a small factor.
Good news: it actually does more.
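Plugging illustrative numbers (our assumptions, not measurements from the talk) into the Mathis approximation shows why the midpoint depot by itself only buys a modest factor:

```python
# Halving the RTT doubles Mathis-model throughput, but no more than that.
# All parameter values below are illustrative assumptions.
import math

def mathis_bw(mss_bytes, rtt_s, loss):
    """Approximate steady-state Reno throughput, in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss))

direct       = mathis_bw(1460, 0.070, 1e-5)  # one 70 ms coast-to-coast RTT
via_midpoint = mathis_bw(1460, 0.035, 1e-5)  # depot splits it into 35 ms spans
print(via_midpoint / direct)  # ratio is 2: a factor of two, nothing dramatic
```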

Obvious Application: With one depot to reduce RTT

Stronger Application
Place one depot at a HOPI node near the source, and another near the destination.
Observe:
- Abilene Measurement Infrastructure:
  - 2nd percentile: 950 Mb/s; median: 980 Mb/s
  - MTU = 9000 bytes; loss is very low
- Local infrastructure:
  - MTU and loss are good, but not always very good
  - but the RTT is very small
But with HOPI we can do even better.

The HOPI Project
The Hybrid Optical and Packet Infrastructure Project (hopi.internet2.edu)
- Leverage both the 10-Gb/s Abilene backbone and a 10-Gb/s lambda of NLR
- Explore combining packet infrastructure with dynamically-provisioned lambdas

Stronger Application: depots near each host
- Backbone: large RTT, 9000-byte MTU, very low non-congestive loss
- GigaPoP / Campus: very small RTT, some 1500-byte MTUs, some non-congestive loss

Two Conjectures
- A small RTT does effectively mask moderate imperfections in MTU and loss
- End-to-end session throughput is (only a little less than) the minimum of the component connection throughputs
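The two conjectures combine into a back-of-the-envelope model (a sketch with made-up span parameters, treating each span as an independent Mathis-limited TCP connection):

```python
# Model: each span runs its own TCP connection; per conjecture 2 the session
# achieves roughly the minimum of the per-span throughputs.
# The span parameters below are illustrative assumptions, not measurements.
import math

def span_bw_mbps(mtu_bytes, rtt_s, loss):
    """Mathis-approximate throughput of one span, in Mb/s."""
    return (mtu_bytes * 8) / (rtt_s * math.sqrt(loss)) / 1e6

spans = {
    "campus ingress": span_bw_mbps(1500, 0.002, 1e-4),  # imperfect, but tiny RTT
    "backbone":       span_bw_mbps(9000, 0.070, 1e-7),  # long RTT, jumbo MTU, low loss
    "campus egress":  span_bw_mbps(1500, 0.003, 1e-4),
}
session_mbps = min(spans.values())
# Per conjecture 1, the small RTT masks the 1500-byte MTU and residual loss
# on the edge spans: each edge still sustains hundreds of Mb/s.
```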

Phoebus
Phoebus aims to narrow the performance gap by bringing revolutionary networks like HOPI to users.
- Phoebus is another name for the mythical Apollo in his role as the "sun god"
Phoebus stresses the 'session' concept to enable multiple network/transport infrastructures to be catenated.
Phoebus builds on an earlier project called the Logistical Session Layer (LSL).

Experimental Phoebus Deployment
Place Phoebus depots at each HOPI node.
Ingress/egress spans run via the ordinary Internet2/Abilene IP infrastructure.
The backbone span can use either/both of:
- a 10-Gb/s path through Abilene
- a dynamic 10-Gb/s lambda
Initial test user sites:
- an SDSC host with gigE connectivity
- a Columbia Univ host with gigE connectivity

Initial Performance Results
In very early tests:
- SDSC to losa: about 900 Mb/s
- losa to nycm: about 5.1 Gb/s
- nycm to Columbia: about 900 Mb/s
- direct: 380 ± 88 Mb/s
- Phoebus: 762 ± 36 Mb/s
In later tests with a variety of file sizes, SDSC-to-losa performance became worse.

Initial Performance Results

Initial Test Results
What about the three components?
- SDSC to losa depot: Mb/s
- losa depot to nycm depot: Gb/s
- nycm depot to Columbia: Mb/s
Whatever caused that weakness in the SDSC-to-losa path did slow things down.

Plans for Summer 2006
- 'Experimental production' Phoebus, reaching out to interested users
- Improve access control and instrumentation:
  - Maintain a log of achieved performance
- Test use of dynamic HOPI lambdas
- Evaluate Phoebus as a service within newnet
- Test use of Phoebus internationally

Comments on the Backbone Span
- The backbone could ensure flow performance between pairs of backbone depots
- The backbone could provide a Phoebus Service in addition to its "IP" service
- It is relatively easy to use dynamic lambdas within the backbone portion of the Phoebus infrastructure
- Alternatively, the backbone portion could use IP, but with a non-TCP transport protocol!

Comments on the Local (Ingress and Egress) Spans
- Near the ends, we have good, but not perfect, local/metro-area infrastructure
- It is relatively hard to deploy dynamic lambdas there
- Small RTTs allow high-speed TCP flows to be extended to many local sites in a scalable way

Thus, Phoebus leverages both:
- innovative wide-area infrastructure, and
- conventional local-area infrastructure
Phoebus can thus extend the value of multi-lambda wide-area infrastructure to many science users on high-quality conventional campus networks.

Ongoing Work
Phoebus deployment on HOPI:
- We're seeking project participants!
- Please contact us for information
ESP-NP:
- ESP = Extensible Session Protocol
- Implementation on an IXP network processor from Intel
- The IXP2800 can forward at 10 Gb/s

Acknowledgements
UD students:
- Aaron Brown, Matt Rein
Internet2:
- Eric Boyd, Rick Summerhill, Matt Zekauskas, ...
HOPI Testbed Support Center (TSC) Team:
- MCNC, Indiana Univ NOC, Univ Maryland
San Diego Supercomputer Center:
- Patricia Kovatch, Tony Vu
Columbia University:
- Alan Crosswell, Megan Pengelly, the Unix group
Dept of Energy Office of Science:
- MICS Early Career Principal Investigator program

End
Thank you for your attention.
Questions?