iGrid2005 Cyber-Infrastructure. Paola Grosso, GigaPort project, UvA. ON*VECTOR Photonics Workshop, February 27, 2006.

Presentation transcript:


Outline
The question: iGrid showed impressive science running on a custom-built network. What happened behind the scenes to make it possible?
With some background information: what iGrid is and how it has evolved, and what this optical networking is about.
The answer: where, who and how the iGrid2005 infrastructure took shape.

What is iGrid?
The official web site contains the mission statement: the 4th community-driven biennial International Grid event is a coordinated effort to accelerate the use of multi-10Gb international and national networks, to advance scientific research, and to educate decision makers, academicians and industry researchers on the benefits of these hybrid networks.
Three key points:
- community driven
- multi-10Gb networks
- hybrid networks

History of previous iGrids
The themes were already there from the beginning…
iGrid1998: Empowering Global Research Community Networking. Applications and technologies depend on end-to-end delivery of multi-tens-of-megabits bandwidth with quality-of-service control, and need the capabilities of emerging Internet protocols for resource control and reservation.
iGrid2000: An International Grid Application Research Demonstration at INET2000. Demonstrate how the power of today's research networks enables access to remote computing resources, distribution of digital media, and collaboration with distant colleagues.
iGrid2002: The International Virtual Laboratory. Demonstrate application demand for increased bandwidth.

Lambda networking
The iGrid2005 cyber-infrastructure provided a lambda networking facility to demonstrators. In the scientific arena, lambda networking indicates:
- use of different light wavelengths (i.e. light paths) to provide independent services over the same strand of optical fiber;
- creation of dedicated and application-specific paths.
Main lambda networking characteristics of the iGrid setup (see the sketch below):
- broad international connectivity;
- large available bandwidth;
- (user-driven) light path provisioning;
- reconfigurable and flexible setup.
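A minimal sketch of the idea, with invented names and ITU-grid-like wavelength values (nothing here comes from the actual iGrid configuration): one fiber, several lambdas, each one provisioned as an independent, dedicated path.

```python
from dataclasses import dataclass, field

@dataclass
class Fiber:
    """One strand of optical fiber carrying multiple wavelengths (lambdas)."""
    name: str
    wavelengths_nm: tuple = (1550.12, 1550.92, 1551.72, 1552.52)  # illustrative values
    allocations: dict = field(default_factory=dict)               # wavelength -> demo name

    def provision_lightpath(self, demo: str) -> float:
        """Dedicate a free wavelength to one application; each lambda is an independent service."""
        for wl in self.wavelengths_nm:
            if wl not in self.allocations:
                self.allocations[wl] = demo
                return wl
        raise RuntimeError(f"no free lambda on {self.name}")

fiber = Fiber("AMS-CHI")
print(fiber.provision_lightpath("visualization demo"))  # 1550.12: a dedicated path
print(fiber.provision_lightpath("bulk data transfer"))  # 1550.92: independent of the first
```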

Where and when?
The event took place in the CalIT2 building on the UCSD campus in San Diego, September 26-29, 2005.
Challenge: the building's inauguration had not yet taken place; the network was built while the building was still being finished.

What and who?
There were two main activities: demonstrations and symposium sessions. Over 300 participants, plus the committee members.
Demonstrations, a global effort:
- 49 demonstrations;
- 12 countries as main demo contacts;
- 20 participating countries;
- 4 continents.
Symposium, in the auditorium:
- 6 keynote speakers;
- 12 panel sessions;
- 3 master classes.

Demonstration types
A closer look at the demonstration types:
- Data Services: 7 demos
- E-Science: 4 demos
- Lambda Services: 10 demos
- Scientific Instrument Services: 3 demos
- Supercomputing Services: 3 demos
- Video Streaming Services: 5 demos
- Visualization Services: 17 demos

How?
… thanks to the effort of:
- 16 sponsors
- 38 organizing institutions
- 15 organizing committee members
- 10 subcommittees
On the cyber-infrastructure side:
- Cyber-infrastructure CalIT2 Co-Chairs and Committee members
- Cyber-infrastructure Int'l/National Co-Chairs and Committee members

Demo requirements
The guiding principle: ask what they want, and sometimes tell them what they need.
A questionnaire tried to capture each demo's needs for:
- on-site computers, data storage and visualization displays;
- remote computers and storage;
- software;
- special-purpose equipment;
- audio;
- networking topology.

Demo stations
The demos were distributed across 4 spaces:
- TeraScale Room, 3 demo stations: Rice (2-panel display), Goodhue (2-panel display), Quin (4-panel display).
- Cave Room, 3 demo stations: Couts (C-Wall), Spreckels (100 Mpixel), Bushyhead (3D auto-stereo).
- Multipurpose Room, 2 demo stations: Sessions (stereo projection), Bandini (side-by-side projection), plus Research Channel.
- Auditorium, 2 demo stations: Swing (Sony 4K projection), Harrison (side-by-side projection).

Onsite resources
Two of the jewels:
- Tiled Display: an 11x5 array of NEC 20" 1600x1200 LCD panels;
- Sony 4K projection.

Onsite resources (II)
Another way to look at it, by port count (tallied in the sketch below):
24 10GE ports:
- 5 interfaces for common infrastructure equipment: 3 x 10GE nodes, 2 x 10GE ports for the HP switch used for the Tiled Display in Spreckels;
- 19 interfaces for demonstrator equipment, network switches and nodes.
11 1GE fiber ports:
- 11 to demonstrator equipment, network switches and nodes.
53 1GE copper ports:
- 19 for common infrastructure equipment;
- 34 for demonstrator equipment.
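A quick bookkeeping sketch (counts from the slide, everything else illustrative) that reproduces the per-type totals:

```python
# Port inventory from the slide: (port type, use) -> count.
ports = {
    ("10GE",       "common infrastructure"): 5,
    ("10GE",       "demonstrators"):         19,
    ("1GE fiber",  "demonstrators"):         11,
    ("1GE copper", "common infrastructure"): 19,
    ("1GE copper", "demonstrators"):         34,
}

# Sum per port type: 10GE -> 24, 1GE fiber -> 11, 1GE copper -> 53.
totals = {}
for (ptype, _use), count in ports.items():
    totals[ptype] = totals.get(ptype, 0) + count
print(totals)  # {'10GE': 24, '1GE fiber': 11, '1GE copper': 53}
```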

SunLight
To satisfy the needs of the demos: SunLight, the optical exchange built for iGrid at CalIT2. Ingredients:
- lots (lots!) of planning;
- committee members met several times before the workshop;
- network equipment donated by vendors: Cisco, Force10 and Nortel primarily;
- setup in the weeks preceding the workshop;
- circuit delivery and installation.

SunLight (II)
[Diagram: SunLight topology. Optical switches: Cisco ONS, Nortel OME 6500, Nortel HDXc, with connections to the CAVEwave and to outside resources. Ethernet switches: Force10 E1200, Cisco 6509, Cisco 7609 and HP, with connections to the Tiled Display, the SDSC T320 and local hosts.]

External connectivity
From SunLight: 10 x 10 Gbps = 100 Gbps available to the demonstrators, tallied below. (Side note: during iGrid2002 it was 1 x 10GE.)
One path worth singling out: the CaveWave link to Chicago, used for many of the visualization demos.
- Layer1 circuits: a few.
- Layer2 circuits: the majority.
- Layer3 circuits: for the routed connectivity.
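For completeness, the capacity arithmetic as a two-liner (illustrative only):

```python
lambdas_gbps = [10] * 10                 # ten 10 Gb/s circuits into SunLight
print(f"{sum(lambdas_gbps)} Gb/s total")  # 100 Gb/s, vs. a single 10GE at iGrid2002
```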

Layer1/2 int'l connectivity
[Figure: map of the international Layer1/Layer2 circuits converging on iGrid2005; details on the next slide.]

Layer1/2 int'l connectivity (II)
An international effort to reach the demonstrators' countries:
- Asia: China, Korea, Japan, Taiwan
- North America: Canada, Mexico, US
- Europe: Czech Republic, Netherlands, Poland, Spain, UK
A central role was played by the various optical exchanges:
- PacificWave in Seattle
- KRLight in Seoul
- T-LEX in Tokyo
- StarLight/TransLight in Chicago
- MANLAN in New York
- NetherLight in Amsterdam
- UKLight in London
- CZLight in Prague
- NorthernLight in Stockholm
… all part of the GLIF. The GLIF meeting followed iGrid.

Layer3 infrastructure
[Figure: the Layer3 (routed) infrastructure serving iGrid2005.]

Routing
Did I hear that right? … Not surprising: routing is a component in hybrid networks. Routing was needed for:
- Internet connectivity to demonstrators, via commodity peering from UCSD and connections to the major NRENs;
- demos using Layer3 paths via NRENs;
- routing in SunLight to direct multiple demos to shared resources, for example the Tiled Display (see the sketch below).
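A sketch of that last point, using Python's standard ipaddress module; all prefixes and resource names are invented for the example. Longest-prefix match is what steers each demo's traffic to the right shared resource, or out to the commodity Internet:

```python
import ipaddress

# Hypothetical routing table inside SunLight: prefix -> next hop / resource.
routes = {
    ipaddress.ip_network("10.10.0.0/16"): "Tiled Display switch (HP)",
    ipaddress.ip_network("10.10.5.0/24"): "Sony 4K projection host",
    ipaddress.ip_network("0.0.0.0/0"):    "UCSD commodity peering",
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific route containing dst wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(lookup("10.10.5.7"))   # Sony 4K projection host (the /24 beats the /16)
print(lookup("192.0.2.9"))   # UCSD commodity peering (default route)
```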

The NOC
Committee members and vendor engineers provided the NOC support during the workshop. The NOC:
- set up the infrastructure: racking, pulling fibers;
- configured the equipment;
- provided continuing support to the demonstrators.
The biggest challenges:
- automatic versus manual configuration;
- scheduling of common links (see the sketch below).
Missing: the user/application _really_ configuring the light paths. Not all demos were "NOC-independent" after the kick-off.
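A minimal sketch of the shared-link scheduling problem, with invented time slots and demo names: grant a reservation only if it does not overlap an existing booking on the same link.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reservation:
    link: str       # e.g. the shared CaveWave link to Chicago
    start: int      # hour of day, kept as plain ints for brevity
    end: int
    demo: str

booked: list[Reservation] = []

def reserve(req: Reservation) -> bool:
    """Grant the slot only if it does not overlap an existing booking on the same link."""
    for r in booked:
        if r.link == req.link and req.start < r.end and r.start < req.end:
            return False  # conflict: the NOC has to arbitrate
    booked.append(req)
    return True

print(reserve(Reservation("CaveWave", 9, 12, "visualization demo")))  # True
print(reserve(Reservation("CaveWave", 11, 13, "4K streaming demo")))  # False, overlaps
print(reserve(Reservation("CaveWave", 12, 14, "4K streaming demo")))  # True
```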

Light paths
What is in a name? For every demonstrator, "light path" meant something different:
- an optical path without L2 or L3 services;
- an L2 path over completely dedicated circuits, with a possible need for scheduling;
- an L2 path over a shared link (coexisting demos);
- a mix of L3 and L2 features.
For each demo the NOC needed to do the "translation" among the various meanings (modeled in the sketch below).
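One way to picture that translation, as an assumed taxonomy rather than the NOC's actual tooling: tag each request with the service type it really means, and dispatch the configuration work accordingly.

```python
from enum import Enum, auto

class LightPathKind(Enum):
    OPTICAL = auto()       # pure optical path, no L2/L3 services
    L2_DEDICATED = auto()  # dedicated circuits, may need scheduling
    L2_SHARED = auto()     # shared link, VLAN separation between demos
    L2_L3_MIX = auto()     # routed connectivity over L2 segments

def configuration_work(kind: LightPathKind) -> str:
    """The 'translation' the NOC performed for each demo request."""
    return {
        LightPathKind.OPTICAL:      "patch/cross-connect at the optical switches",
        LightPathKind.L2_DEDICATED: "provision the circuit end-to-end, book time slots",
        LightPathKind.L2_SHARED:    "configure VLANs, police coexisting demos",
        LightPathKind.L2_L3_MIX:    "set up VLANs plus routing and peering",
    }[kind]

print(configuration_work(LightPathKind.L2_SHARED))
```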

"Dutch" lightpaths
An easy way to see this: 4 demos with a Dutch label, NL101, NL102, NL103, NL104…
- NL101/2: VM Turntable; token-based network element access control and path selection. Effort? High.
- NL103: IPv4 link-local addressing for optical networks. Effort? Medium: VLAN configuration, difficult when L2 is multi-domain.
- NL104: dead cat demo. Effort? Low: routing does it all, but performance needs to be tuned.
[Diagram: the three path types from AMS to SAN via CHI/NY/SEA: the routed Internet path, the VLAN NL103 path, and the CaveWave and IRNC links.]

Just after iGrid: SC05
Using the experience gained in September, many demonstrators tried again at SC05.

Lessons learned
1. It was a lot of work, but the achievements were rewarding.
2. Global lambdas are a reality and a need.
3. The community is focusing on the tools for automatic engineering and setup needed on hybrid networks.
An article on the topic has been submitted: The network infrastructure at iGrid2005: lambda networking in action, by Paola Grosso, Pieter de Boer and Linda Winkler.