Creating a Global Lambda GRID: International Advanced Networking and StarLight
Presented by Joe Mambretti, Director, International Center for Advanced Internet Research (iCAIR), and Director, Metropolitan Research and Education Network (MREN)
Based on StarLight presentation slides by Tom DeFanti, PI, STAR TAP; Director, EVL, University of Illinois at Chicago
APAN Conference, Phuket, Thailand, January 24, 2002
Introduction to iCAIR
Accelerating leading-edge innovation and enhanced global communications through advanced Internet technologies, in partnership with the global community
Creation and early implementation of advanced networking technologies for the Next Generation Internet:
–All-optical networks, terascale networks
–Advanced applications, middleware, and metasystems
–Large-scale infrastructure, NG optical networks and testbeds
–Public policy studies and forums related to NG networks
Tom DeFanti, Maxine Brown – Principal Investigators, STAR TAP
Linda Winkler, Bill Nickless, Alan Verlo, Andy Schmidt – STAR TAP Engineering
Joe Mambretti, Tim Ward – StarLight Facilities
et al.
Who is StarLight?
StarLight is jointly managed and engineered by:
–Electronic Visualization Laboratory (EVL), University of Illinois at Chicago: Tom DeFanti, Maxine Brown, Andy Schmidt, Jason Leigh, Cliff Nelson, and Alan Verlo
–International Center for Advanced Internet Research (iCAIR), Northwestern University: Joe Mambretti, David Carr, and Tim Ward
–Mathematics and Computer Science Division (MCS), Argonne National Laboratory: Linda Winkler and Bill Nickless; Rick Stevens and Charlie Catlett
In partnership with Bill St. Arnaud, Kees Neggers, Olivier Martin, and others.
What is StarLight?
(Photos: 710 N. Lake Shore Drive, Chicago; Abbott Hall, Northwestern University; Chicago view from 710)
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications.
StarLight leverages $32M (FY2002-3) in experimental networks (I-WIRE, TeraGrid, OMNInet, SURFnet, CA*net4, DataTAG).
Where is StarLight?
Located on Northwestern's downtown campus, 710 N. Lake Shore Drive, near carrier POPs and the Chicago NAP.
StarLight Infrastructure StarLight is a large research-friendly co-location facility with space, power and fiber that is being made available to university and national/international network collaborators as a point of presence in Chicago
StarLight Infrastructure StarLight is a production GigE and trial 10GigE switch/router facility for high-performance access to participating networks
StarLight is Operational
Equipment installed at StarLight:
–Cisco 6509 with GigE
–IPv6 router
–Juniper M10 (GigE and OC-12 interfaces)
–Cisco LS1010 with OC-12 interfaces
–Data mining cluster with GigE NICs
–Visualization/video server cluster (on order)
–SURFnet's GSR
Multiple vendors plan 10GigE, DWDM, and optical switching/routing in the future.
Carriers at StarLight: SBC-Ameritech, Qwest, AT&T, Global Crossing, Teleglobe…
StarLight Connections
–STAR TAP (NAP) connection via two OC-12c ATM circuits
–The Netherlands (SURFnet) has two OC-12c POS circuits from Amsterdam and a 2.5 Gbps OC-48 to StarLight this month
–Abilene will soon connect via two GigE circuits
–Canada (CA*net3/4) is connected via GigE, soon 10GigE
–I-WIRE, a State-of-Illinois-funded dark-fiber, multi-10GigE DWDM effort involving Illinois research institutions, is being built; 36 strands to the Qwest Chicago PoP are in
–NSF TeraGrid (DTF) 4x10GigE network is being engineered by PACI and Qwest
–NORDUnet is now sharing StarLight's OC-12 ATM connection
–TransPAC/APAN is bringing in an OC-12, later an OC-48
–CERN's OC-48 is in the advanced funding stages
Evolving StarLight Optical Network Connections
(Map: planned optical connections linking Chicago/StarLight with Vancouver, Seattle, Portland, San Francisco, Los Angeles, San Diego (SDSC), NCSA, NYC, PSC, Atlanta, IU, U Wisconsin, AMPATH, the Asia-Pacific, CA*net4, and SURFnet/CERN)
Source: Maxine Brown 12/2001
StarLight Services and Locations
Locations: AADS NAP, 225 West Randolph Street; StarLight, 710 North Lakeshore Drive; Qwest, 455 North Cityfront Plaza
Services (availability and install/monthly charges vary by location):
–IPv4 and IPv6 STAR TAP transit (AS 10764) for Int'l R&E networks and FedNets/NGIX
–ATM PVC mesh to other participants
–GigE 802.1q policy-free VLANs (a tagging sketch follows)
–Co-location space and power
–Fiber patches (T&M install)
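As an aside on the VLAN service: below is a minimal sketch, using the scapy library, of what an 802.1q-tagged GigE frame carries. The MAC addresses and VLAN ID 710 are invented for illustration; "policy-free" means the facility forwards on the tag and leaves the traffic policy to the participants.

```python
# A policy-free 802.1q VLAN service hands participants an Ethernet path
# that forwards frames based only on the 802.1Q tag they agree on.
from scapy.all import Ether, Dot1Q, IP

frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")  # participant NICs
    / Dot1Q(vlan=710)       # 802.1q tag; the switch forwards on this alone
    / IP(dst="192.0.2.1")   # payload is whatever the participants choose
)
frame.show()  # prints the layered frame, including the 802.1Q header
```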
TeraNodes in Action: Interactive Visual Tera Mining (Visual Data Mining of Terabyte Data Sets)
(Diagram: data mining servers (TND-DSTP) in Chicago and Amsterdam (NWU, NCSA, ANL, etc.) feeding parallel data mining correlation (TNC) and parallel visualization (TNV) – Tera Map, Tera Snap)
–The problem is to touch a terabyte of data interactively and to visualize it
–100 Mb/s: ~24 hours to access 1 terabyte of data
–500 Mb/s: ~4.8 hours using a single PC
–10 Gb/s: ~14.4 minutes using a 20-node PC cluster
–Need to parallelize data access and rendering (see the sketch below)
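The bullet arithmetic can be checked in a few lines of Python. This is an illustrative calculation only; the slide's round figures (24 h / 4.8 h / 14.4 min) are slightly higher than the raw numbers below, consistent with protocol and I/O overhead.

```python
# Time to stream one terabyte falls linearly with aggregate bandwidth,
# which is why data access and rendering must be parallelized across nodes.
TERABYTE_BITS = 8e12  # 1 TB = 8 x 10^12 bits (decimal terabyte)

def seconds_to_read(rate_bps: float) -> float:
    """Seconds to stream one terabyte at the given rate in bits/second."""
    return TERABYTE_BITS / rate_bps

for label, rate_bps in [("100 Mb/s single stream", 100e6),
                        ("500 Mb/s single PC", 500e6),
                        ("10 Gb/s 20-node cluster", 20 * 500e6)]:
    s = seconds_to_read(rate_bps)
    print(f"{label}: {s / 3600:.1f} hours ({s / 60:.0f} minutes)")
```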
Prototyping The Global Lambda Grid in Chicago: A Photonic-Switched Experimental Network of Light Paths
Multi-Leveled Architecture (NEW!)
(Diagram: a control plane spanning four levels: apps, clusters, dynamically allocated lightpaths, and switch fabrics, with physical monitoring underneath)
Metros as International Nexus Points: Prototype Global Lambda Grid
(Diagram: optical metro nexus points: StarLight in Chicago (with CSW/ASW optical switches, cluster, OFA, I-WIRE, TeraGrid, ANL, NCSA), Amsterdam NetherLight, CERN DataTAG, Europe, CA*net4, an Asia-Pacific metro (Tokyo?, APAN), Miami to South America?, CalTech, and SDSC)
Multiwavelength Optical Amplifier
(Diagram: computer clusters, each node 1 GE, with tens to thousands of nodes, feed GE links into CSW/ASW switches; N x LAN PHY interfaces, e.g., 15xx nm 10GE serial, drive multiwavelength fiber carrying multiple DWDM links per fiber; optical monitors maintain wavelength precision; a power spectral density processor compares source and measured PSD)
Multiple optical impairment issues arise, including accumulations along the path; a toy monitoring sketch follows.
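Below is a purely illustrative Python sketch of the PSD-comparison idea: compare measured per-channel power against launch power and flag channels whose accumulated penalty exceeds a threshold. All wavelengths, power values, and the threshold are invented.

```python
# Toy per-channel power monitor in the spirit of the PSD processor above:
# compare measured channel power against the launch (source) power and
# flag channels whose accumulated impairment exceeds a threshold.
THRESHOLD_DB = 3.0  # hypothetical allowable end-to-end power penalty

source_dbm = {1550.12: 0.0, 1550.92: 0.0, 1551.72: 0.0}      # launch power per lambda
measured_dbm = {1550.12: -1.2, 1550.92: -3.8, 1551.72: -0.9}  # received power

for wavelength_nm, launch in source_dbm.items():
    penalty = launch - measured_dbm[wavelength_nm]
    status = "OK" if penalty <= THRESHOLD_DB else "IMPAIRED"
    print(f"{wavelength_nm} nm: {penalty:.1f} dB penalty -> {status}")
```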
Optical Layer Control Plane
(Diagram: client devices and controllers interconnected through a client-layer control plane and an optical-layer control plane; the client-layer traffic plane rides over the optical-layer switched traffic plane; interfaces shown are the UNI, an internal UNI (I-UNI), and the CI)
A toy model of this layering appears below.
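To make the layering concrete, here is a toy Python model of a client-layer controller requesting a lightpath from the optical-layer control plane across the UNI. All class and method names are invented; no real signaling protocol is implied.

```python
# Minimal toy model of the layered control planes: the client layer asks
# the optical layer, across a UNI, for a switched lightpath between two
# endpoints; the optical layer allocates a wavelength from its free pool.
from dataclasses import dataclass, field

@dataclass
class Lightpath:
    src: str
    dst: str
    wavelength_nm: float

@dataclass
class OpticalControlPlane:
    free_wavelengths: list = field(default_factory=lambda: [1550.12, 1550.92])

    def uni_request(self, src: str, dst: str) -> Lightpath:
        """UNI: client layer requests a switched lightpath from the optical layer."""
        return Lightpath(src, dst, self.free_wavelengths.pop())

@dataclass
class ClientController:
    optical: OpticalControlPlane

    def connect(self, src: str, dst: str) -> Lightpath:
        # The client-layer control plane delegates path setup across the UNI.
        return self.optical.uni_request(src, dst)

path = ClientController(OpticalControlPlane()).connect("cluster-A", "cluster-B")
print(f"Lightpath {path.src} -> {path.dst} on {path.wavelength_nm} nm")
```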
OMNInet Technology Trial: January 2002
A four-site network in Chicago: the first 10GE service trial!
A testbed for all-optical switching and advanced high-speed services
Partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL
(Diagram: four sites (Northwestern U, StarLight, UIC, and CA*net3 Chicago), each with an optical switching platform, a Passport 8600, and an application cluster; 2x10GE links between sites and 8x1GE links to clusters; OPTera Metro 5200 at StarLight)
StarLight On Ramps: Proposed Development Phase I
Gigabit Ethernet NICs to 10 Gigabit Ethernet MAN
(Diagram: EVL, LAC, and UIC sites with TNC, TND, and TNV3 clusters, connected to StarLight over DWDM via 10GigE and 2x10GigE links; 10x10 GigE at StarLight)
–TNDs = data mining clusters
–TNVs = visualization clusters (gigapixel/sec)
–TNCs = TeraGrid on-ramps
StarLight On Ramps: Proposed Development Phase II
10 Gigabit Ethernet to 2x80 Gb MAN
–TND: upgrade NICs in TND clusters to (8)x10GigE
–TNV: upgrade NICs in TNV3 clusters to (8)x10GigE
–O-E-O or O-O-O optical switch at StarLight(?)
–DWDM: (2) 40Gb(?) and (8) 10Gb
(Diagram: LAC, UIC (6509), and EVL sites with TNC/TND/TNV3 clusters; 10GigE and 10x10GigE links; 2x40GigE over UIC fiber to StarLight; O-E-O or O-O-O switch and DWDM at StarLight)
A quick capacity check for the two phases appears below.
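An illustrative capacity check on the two phases, with link counts read off the slides above:

```python
# Rough aggregate-capacity arithmetic for the proposed on-ramp phases.
GBPS_PER_10GIGE = 10

phase1 = 10 * GBPS_PER_10GIGE            # Phase I: 10x10GigE into StarLight
phase2 = 2 * 40 + 8 * GBPS_PER_10GIGE    # Phase II: (2) 40 Gb + (8) 10 Gb waves

print(f"Phase I MAN capacity:  {phase1} Gb/s")
print(f"Phase II MAN capacity: {phase2} Gb/s, i.e., the 2x80 Gb MAN target")
# Note: an upgraded (8)x10GigE cluster can also source 80 Gb/s, matching
# one 80 Gb direction of the Phase II MAN.
```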
NSF’s Distributed Terascale Facility (DTF)
TeraGrid Interconnect Objectives
Traditional approach: interconnect sites/clusters using a WAN
–WAN bandwidth balances cost and utilization; the objective is to keep utilization high to justify the high cost of WAN bandwidth
TeraGrid approach: build a wide-area "machine room" network
–The TeraGrid WAN objective is to handle peak machine-to-machine (M2M) traffic
–Partnering with Qwest to begin with 40 Gb/s and grow to ≥80 Gb/s within 2 years
Long-term TeraGrid objective
–Build a petaflops-capable distributed system, requiring petabytes of storage and a terabit/second network
–The current objective is a step toward this goal
–A terabit/second network will require many lambdas operating at minimum OC-768, and its architecture is not yet clear (see the arithmetic below)
Source: Rick Stevens 12/2001
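A sketch of the arithmetic behind the "many lambdas at minimum OC-768" bullet, assuming OC-768 carries roughly 40 Gb/s:

```python
# How many lambdas does a terabit/second network need?
import math

TARGET_GBPS = 1000   # 1 Tb/s long-term objective
OC768_GBPS = 40      # approximate OC-768 line rate
OC192_GBPS = 10      # approximate OC-192 line rate, for comparison

print(f"OC-768 lambdas needed: {math.ceil(TARGET_GBPS / OC768_GBPS)}")   # 25
print(f"OC-192 lambdas needed: {math.ceil(TARGET_GBPS / OC192_GBPS)}")   # 100
```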
Trends: Cyberinfrastructure
Advent of regional dark-fiber infrastructure
–Community owned and managed (via 20-year IRUs)
–Typically supported by state or local resources
Lambda services (IRUs) are viable replacements for bandwidth service contracts
–Need to be structured with built-in capability escalation (BRI)
–Need strong operating capability to exploit this
Regional groups are moving faster (much faster!) than national network providers and agencies
–A viable path to putting bandwidth on a Moore's-law curve
–A source of new ideas for national infrastructure architecture
Source: Rick Stevens 12/2001
13.6 TF Linux TeraGrid
(Diagram: the four DTF sites and their interconnect. NCSA: 500 nodes, 8 TF, 4 TB memory, 240 TB disk; SDSC: 256 nodes, 4.1 TF, 2 TB memory, 225 TB disk; Caltech: 32 nodes, 0.5 TF, 0.4 TB memory, 86 TB disk; Argonne: 64 nodes, 1 TF, 0.25 TB memory, 25 TB disk. Sites are built from quad-processor McKinley servers (4 GF, 8-12 GB memory/server) plus IA-32 nodes, a Myrinet Clos spine, Fibre Channel switches, and HPSS/UniTree storage, with local resources such as Chiba City, Origin systems, Sun E10K/Starcat, IBM SP Blue Horizon, HP X-Class/V-Class, and HR display and VR facilities. Juniper M160/M40 routers and an Extreme Black Diamond connect via 10 GbE and OC-192/OC-48/OC-12/OC-3 links to Starlight, vBNS, Abilene, MREN, ESnet, HSCC, Calren, and NTON. Legend: 32x 1GbE, 64x Myrinet, 32x FibreChannel, 8x FibreChannel.)
Source: Rick Stevens 12/2001
TeraGrid Network Architecture
Cluster interconnect: a multi-stage switch/router tree with multiple 10 GbE external links
Separation of cluster-aggregation and site-border routers is necessary for operational reasons
Phase 1: four routers or switch/routers
–Each with three OC-192 or 10 GbE WAN PHY interfaces
–MPLS to allow >10 Gb/s between any two sites (see the max-flow sketch below)
Phase 2: add core routers or switch/routers
–Each with ten OC-192 or 10 GbE WAN PHY interfaces
–Expandable with additional 10 Gb/s interfaces
Source: Rick Stevens 12/2001
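Why can MPLS deliver more than 10 Gb/s between two sites when each inter-site link carries only 10 Gb/s? In a full mesh (Option 1 on the next slide), traffic engineering can split a site pair's traffic across the direct link plus two-hop detours. An idealized max-flow sketch using the networkx library, ignoring transit overhead:

```python
# Model the Phase 1 full mesh as a flow network: four sites, one 10 Gb/s
# lambda each way per site pair. MPLS traffic engineering is idealized
# here as a maximum flow between two sites.
import itertools
import networkx as nx

sites = ["Caltech", "SDSC", "NCSA", "ANL"]
G = nx.DiGraph()
for a, b in itertools.permutations(sites, 2):
    G.add_edge(a, b, capacity=10)  # 10 Gb/s per direction per site pair

# SDSC -> NCSA: 10 direct + 10 via Caltech + 10 via ANL = 30 Gb/s
print(nx.maximum_flow_value(G, "SDSC", "NCSA", capacity="capacity"))
```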
Option 1: Full Mesh with MPLS
(Diagram: Los Angeles One Wilshire (carrier fiber collocation facility), the Qwest San Diego POP, 455 N. Cityfront Plaza (Qwest fiber collocation facility), and 710 N. Lakeshore (StarLight), Chicago; Caltech, SDSC, NCSA, and ANL clusters attach through cluster-aggregation switch/routers and site-border routers or switch/routers, with other site resources on an IP router; OC-192/GbE waves ride Ciena CoreStream DWDM and DWDM TBD; distances: 2200 mi, 140 mi, 115 mi, 25 mi, 20 mi, 1 mi)
Source: Rick Stevens 12/2001
Expansion Capability: "StarLights"
(Diagram: the same topology as the previous slide (Los Angeles One Wilshire, Qwest San Diego POP, 455 N. Cityfront Plaza, and 710 N. Lakeshore (StarLight), Chicago), with regional fiber aggregation points admitting additional sites and networks; cluster-aggregation switch/routers, site-border routers, and other site resources as before; IP routers (packets) or lambda routers (circuits); Ciena CoreStream DWDM and DWDM TBD; distances: 2200 mi, 140 mi, 115 mi, 25 mi, 20 mi, 1 mi)
Source: Rick Stevens 12/2001
Leverage Regional/Community Fiber Experimental Interconnects
Illinois' I-WIRE Logical and Transport Topology
(Diagram: UIUC/NCSA, Starlight (NU-Chicago), Argonne, UChicago, IIT, UIC, the Illinois Century Network, the James R. Thompson Ctr / City Hall / State of IL Bldg, Level(3) at 111 N. Canal, McLeodUSA at 151/155 N. Michigan / Doral Plaza, Qwest at 455 N. Cityfront, and UC Gleacher at 450 N. Cityfront)
Next steps:
–Fiber to FermiLab and other sites
–Additional fiber to ANL and UIC
–DWDM terminals at the Level(3) and McLeodUSA locations
–Experiments with OC-768 and optical switching/routing
Source: Rick Stevens 12/2001
MREN: An Advanced Network for Advanced Applications
–Designed in 1993; initial production in 1994; managed at L2 and L3
–Created by a consortium of over 20 research organizations
–Partner to STAR TAP/StarLight, I-WIRE, NGI and R&E net initiatives, Grid and Globus initiatives, etc.
–A model for next-generation Internets
–Developed the world's first GigaPOP
–Next: the "Optical MREN"; soon, optical "TeraPOP" services
GigaPoPs and TeraPoPs (OIX)
(Map: GigaPoPs, TeraPoPs (OIX), Pacific Lightrail, and the TeraGrid interconnect; GigaPoP data from Internet2, map by Rick Stevens and Charlie Catlett)
Pacific Light Rail (draft 12/4/01)
(Map: critical-mass sites: the top 10 research universities, the next 15 research universities, centers and labs, and international 10gig and key hubs)
Source: Ron Johnson 12/2001
CA*net 4 Physical Architecture
(Map: a large-channel WDM system linking Vancouver, Calgary, Regina, Winnipeg, Toronto, Ottawa, Montreal, Fredericton, Charlottetown, Halifax, and St. John's, with international connections at Seattle, Chicago, New York, Los Angeles, Miami, and Europe; legend: dedicated wavelength or SONET channel, OBGP switches, optional Layer 3 aggregation service)
By Bill St. Arnaud, CANARIE (Provider of Excellence in Advanced Networking)
NSF ANIR
NSF will emphasize support for domestic and international collaborations involving resource-intensive applications and leading-edge optical wavelength telecommunication technologies.
But NSF will not abandon needed international collaboration services (e.g., STAR TAP).
StarLight Thanks
StarLight planning, research, collaborations, and outreach efforts at the University of Illinois at Chicago are made possible, in part, by funding from:
–National Science Foundation (NSF) awards ANI, ANI, EIA, EIA, and EIA
–NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI to the National Computational Science Alliance
–The State of Illinois I-WIRE Program, and UIC cost sharing
–Northwestern University, for providing space, engineering, and management
Thanks also to:
–Argonne National Laboratory for StarLight and I-WIRE network engineering and planning leadership
–NSF/ANIR, Bill St. Arnaud of CANARIE, Kees Neggers of SURFnet, and Olivier Martin and Harvey Newman of CERN for global networking leadership
–NSF/ACIR and NCSA/SDSC for DTF/TeraGrid opportunities
–UCAID/Abilene for Internet2 and their ITN
–CA*net3/4 and CENIC/Pacific Light Wave for planned North America and West Coast transit
Coming: iGrid 2002
September 2002, Amsterdam, The Netherlands
University of Illinois at Chicago and Indiana University, in collaboration with the GigaPort Project and SURFnet5 of The Netherlands
Grid-intensive application control of lambda-switched networks: a showcase of applications that are "early adopters" of very high-bandwidth national and international networks
Maxine Brown, STAR TAP/StarLight co-Principal Investigator and Associate Director, Electronic Visualization Laboratory
Further Information
The Grid: Blueprint for a New Computing Infrastructure, ed. by Foster & Kesselman
“Bring Us Your Lambdas!”