Optical Control Plane
Gigi Karmous-Edwards
International ICFA Workshop, Daegu, Korea, May 25, 2005

Agenda
– E-science and its requirements on the network
– Today's research network infrastructure
– Optical control plane
– Grid computing optical control plane research
– Conclusions

Topic: E-science and Its Requirements on the Network

E-Science Community
The migration of the e-science community towards Grid computing emerged from three converging trends, which together also help reduce the digital divide:
i) Advances in optical networking technologies. Widespread deployment of fiber infrastructure has led to low-cost, high-capacity optical connections.
ii) Affordability of the required computational resources through sharing. Meeting the growing demand of new e-science applications for computational power and bandwidth is financially difficult, and nearly impossible, unless resources are shared across research institutions on a global basis.
iii) Need for interdisciplinary research. The growing complexity of scientific problems is driving increasing numbers of scientists from diverse disciplines and locations to work together to achieve breakthrough results.

What do we mean by an e-science application?
– Big e-science applications: a new generation of applications that combines scientific instruments, distributed data archives, sensors, and computing resources to solve complex scientific problems.
– Characteristics:
i) very large data sets (terabytes, petabytes, and beyond)
ii) high-end computing resources (teraflops, supercomputers, cluster computing)
iii) remote instrumentation and sensors for data collection
iv) powerful visualization tools for analysis
v) sometimes highly dynamic behavior

Advances in Optical Technologies
– Dark fiber everywhere: fiber is much cheaper. US headlines report companies giving away dark fiber!
– RONs buy their own fiber and operate it without the big Bell companies
– AT&T made 8,000 miles of dark fiber available to SURA at no cost
– All-optical switches are getting faster and smaller (nanosecond switch reconfiguration)
– Layer-one optical switches are relatively cheaper than other switching technologies
– Fiber, optical impairment control, and transceiver technology continue to advance while prices fall
– 10 Gig capacity

Global E-science Network Requirements
– High-bandwidth pipes over very long distances (terabyte and petabyte transfers)
– Network resources coordinated with other vital Grid resources: CPU and storage
– Advance reservation of networking resources
– End-to-end network resources held for short periods of time
– Deterministic end-to-end connections: low jitter, low latency

Global E-science Network Requirements (continued)
– Applications, end-users, sensors, and instruments requesting optical networking resources: host-to-host connections on demand
– Near-real-time feedback of network performance measurements to the applications and middleware
– Exchange of data with sensors, potentially via other physical resources
– The destination may not be known initially; only a service may be requested from the source
(A minimal sketch of an advance-reservation request follows this list.)
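To make the advance-reservation requirement concrete, here is a minimal Python sketch of what an application-side request might look like. LightpathRequest and reserve() are invented names for illustration; no real middleware API from this talk is implied.

    # Hypothetical sketch of an advance-reservation request. A real Grid
    # scheduler would negotiate the booking with the optical control plane.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class LightpathRequest:
        src_host: str        # host-to-host connection: source endpoint
        dst_host: str        # destination endpoint
        bandwidth_gbps: int  # e.g. a 10 GigE wave
        start: datetime      # advance reservation: booked ahead of time
        duration: timedelta  # end-to-end resources held only briefly

    def reserve(req: LightpathRequest) -> str:
        """Stand-in for the middleware call that books the lightpath
        and returns a reservation handle."""
        print(f"reserve {req.bandwidth_gbps} Gb/s {req.src_host} -> "
              f"{req.dst_host} at {req.start} for {req.duration}")
        return "reservation-0001"

    ticket = reserve(LightpathRequest(
        src_host="cluster-a.example.org",
        dst_host="cluster-b.example.org",
        bandwidth_gbps=10,
        start=datetime(2005, 6, 1, 14, 0),
        duration=timedelta(hours=2),
    ))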

Challenges of Next-Generation E-science: Highly Dynamic Astrophysics Simulations
– Coordination of all three resources (CPU, storage, network) based on near-real-time availability
– Near-real-time simulation results require spawning additional simultaneous simulations on other clusters
– Large amounts of data must be transferred to available Grid resources (namely, clusters with enough storage capacity as well)
– Resource usage is re-adjusted based on near-real-time monitoring information (see the sketch after the scenario diagram below)

Astrophysics Scenario
[Diagram: several Grid clusters; a simulation is initialized on one cluster, and a second simulation is spawned elsewhere when a cluster is initially unavailable or lacks enough memory, with real-time remote visualization. A highly dynamic scenario.]
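As a toy illustration of the cluster-selection step in this scenario, the Python sketch below picks the first Grid cluster that is up and has enough memory and storage for the spawned simulation. All cluster names and numbers are invented, not taken from the talk.

    clusters = [
        {"name": "cluster-1", "up": False, "free_mem_gb": 512, "free_storage_tb": 40},
        {"name": "cluster-2", "up": True,  "free_mem_gb": 128, "free_storage_tb": 60},
        {"name": "cluster-3", "up": True,  "free_mem_gb": 768, "free_storage_tb": 80},
    ]

    def pick_cluster(mem_gb, storage_tb):
        """First cluster that is up with enough memory and storage;
        None means the spawned simulation must wait or be re-planned."""
        for c in clusters:
            if c["up"] and c["free_mem_gb"] >= mem_gb and c["free_storage_tb"] >= storage_tb:
                return c
        return None

    target = pick_cluster(mem_gb=256, storage_tb=50)
    if target is not None:
        # With a cluster chosen, the middleware would next request a
        # lightpath to move the input data there.
        print(f"spawn 2nd simulation on {target['name']}")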

Feedback Loop
[Diagram: a closed loop linking the application, policy (users, network, Grid), Grid resources, and the network control plane.]
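A minimal sketch of the loop the diagram implies, assuming hypothetical measure_latency_ms() and request_alternate_path() hooks between the application and the control plane; neither is a real API from the talk.

    # The application consumes near-real-time measurements fed back by the
    # control plane and asks for re-adjustment when performance degrades.
    import random
    import time

    def measure_latency_ms() -> float:
        """Placeholder for a measurement fed back by the control plane."""
        return random.uniform(20.0, 120.0)

    def request_alternate_path() -> None:
        """Placeholder for a middleware call back into the control plane."""
        print("latency too high: asking control plane for an alternate lightpath")

    LATENCY_BUDGET_MS = 80.0   # the application's deterministic-connection target

    for _ in range(3):         # three iterations stand in for a continuous loop
        latency = measure_latency_ms()
        print(f"measured latency: {latency:.1f} ms")
        if latency > LATENCY_BUDGET_MS:
            request_alternate_path()
        time.sleep(0.1)        # a real loop would poll on the monitoring interval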

Topic: Today's Research Network Infrastructure

NLR Footprint & PoP Types, Phase 1 and 2
[Map of NLR PoPs: BAT, SAN, STA, CHI, SLC, HOU, DAL, SYR, TUL, PEN, ELP, KAN, PHO, LAX, ALB, PIT, WDC, OGD, BOI, CLE, ATL, POR, RAL, NYC, SAA, DEN, SVL, SEA, JAC.]
Legend:
– NLR WaveNet, FrameNet & PacketNet PoP
– NLR WaveNet & FrameNet PoP
– NLR WaveNet PoP
– NLR-owned fiber
– Managed waves
– PoP used as a primary connection point by a member ("MetaPoP")
– PoP needed for signal regeneration, but also usable as a secondary connection point by a member
– PoP established by NLR for members' regional needs
– PoP established at exchange points

How will we as a community use these networks? Two categories of users:
1) Black-box users: application and middleware researchers needing a high-speed network to transfer data to and from different parts of the nation
– At SC, all point-to-point connections were GigE
2) Gray-box users (a combination of black and white parts): network researchers
– Part of the box will be black (or absent)
– The rest will be white: experimenting with network protocols and the control plane
– At different layers in the stack

GLIF Control Plane and Grid Integration Working Group
Mission: to agree on the interfaces and protocols by which the control planes of the contributed lambda resources talk to each other. People working in this field already meet regularly in conjunction with other projects, notably the NSF-funded OptIPuter and the MCNC control-plane initiatives.
Several key areas we need to focus on:
– Define and understand real operational scenarios
– Define a set of basic services:
* precise definitions
* semantics the whole community agrees to
– Interdomain exchange of information:
* determine what information needs to be monitored
* how to abstract monitored information for sharing
– Determine which existing standards are useful vs. where Grid requirements are unique and call for new services and concepts:
* how do we standardize mechanisms and protocols that are unique to the Grid community?
– Define a Grid control plane architecture
– Work closely with e-science applications to provide vertical integration

Global Lambda Integrated Facility World Map, December 2004
[Map: predicted international Research & Education Network bandwidth, to be made available for scheduled application and middleware research experiments by December 2004. Visualization courtesy of Bob Patterson, NCSA.]

Topic: Optical Control Plane

One Definition of the Control Plane
"Infrastructure and distributed intelligence that controls the establishment and maintenance of connections in the network, including protocols and mechanisms to disseminate this information, and algorithms for engineering an optimal path between end points." (draft-ggf-ghpn-opticalnets-1)

Another Definition of the Optical Control Plane
Moving centralized network management functions (FCAPS) down to the network elements in a distributed manner:
– Speeds up reaction time for most functions
– Reduces operational time and costs
– Allows the optical network to be more agile
– Interacts with Grid middleware

Optical Control Plane
Migrating functionality from centralized control to distributed control at the optical layer:
– Distributed fault management: self-healing opportunities at the optical layer
– Distributed performance management: dynamically adjust the information collected to match context and near-real-time usability
– Distributed configuration management: autodiscovery; provisioning using signaling (GMPLS, OBS, OPS, etc.)
– Determine what functionality makes sense in a centralized management plane vs. a distributed control plane

Control Plane Functional Areas
– Routing (intra-domain and inter-domain): 1) automatic topology and resource discovery; 2) path computation (a minimal path-computation sketch follows this list)
– Signaling: standard communications protocols between network elements for the establishment and maintenance of connections
– Neighbor discovery: each network element shares details of its connectivity with all its neighbors
– Local resource management: accounting of locally available resources
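The path-computation function named above is, at its core, a shortest-path search over the discovered topology. Below is a plain Dijkstra sketch in Python over an invented four-node graph; a real optical control plane would also weight links by wavelength availability and physical impairments.

    import heapq

    topology = {                  # adjacency list: node -> {neighbor: cost}
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1},
    }

    def shortest_path(src: str, dst: str):
        """Dijkstra's algorithm: returns (cost, path) from src to dst."""
        queue = [(0, src, [src])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, weight in topology[node].items():
                if neighbor not in seen:
                    heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
        return float("inf"), []

    print(shortest_path("A", "D"))    # -> (4, ['A', 'B', 'C', 'D'])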

Industry vs. the Grid Community
Industry standards: a unified IP control plane (IETF GMPLS; OIF UNI and NNI)
– Motivation: reduce the operational cost of manual provisioning
Grid networking community: borrow from standards when we can, rethink concepts when we must!
– Motivation for Grid computing: vertical integration, from the application down to the optical resources

Topic: Grid Computing Optical Control Plane Research

Where does the Optical Control Plane fit in?
– The application accesses the control plane to initiate and delete connections
– Network resources coordinated with other vital Grid resources (CPU and storage): control plane monitoring exchanges information with Grid middleware
– Advance reservation of networking resources: the Grid scheduler (middleware) interacts with the control plane
– Applications requesting optical networking resources: host-to-host connections, with applications interacting with the control plane (this is not done today)
– Very dynamic use of end-to-end networking resources: a feedback loop between the control plane and the application
– Near-real-time feedback of network performance measurements to the applications and middleware, also to be used by the control plane
– Interoperation across global Grid networks: network interdomain protocols for Grid infrastructure rather than between operators
(A sketch of a minimal application-facing interface follows this list.)
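As a sketch of the application-facing interface these points imply (and which, as the slide notes, does not exist today), the invented ControlPlaneClient below lets an application initiate and delete host-to-host connections. It is a Python illustration under stated assumptions, not a real API.

    class ControlPlaneClient:
        """Invented client: applications signal connections directly."""
        def __init__(self) -> None:
            self._next_id = 0
            self._connections: dict[int, tuple[str, str, int]] = {}

        def initiate(self, src: str, dst: str, gbps: int) -> int:
            """Signal setup of an end-to-end connection; returns its id."""
            self._next_id += 1
            self._connections[self._next_id] = (src, dst, gbps)
            print(f"signaled {gbps} Gb/s connection {src} -> {dst}")
            return self._next_id

        def delete(self, conn_id: int) -> None:
            """Tear the connection down, freeing wavelengths for reuse."""
            src, dst, gbps = self._connections.pop(conn_id)
            print(f"released {gbps} Gb/s connection {src} -> {dst}")

    cp = ControlPlaneClient()
    conn = cp.initiate("sensor.example.org", "viz.example.org", gbps=10)
    cp.delete(conn)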

Why is the Control Plane important to GLIF?
– Today, setting up an end-to-end connection between two laboratories across national borders:
1) takes "lots of phone calls"
2) takes "lots of e-mails"
3) involves tens of people
4) yields a connection that becomes relatively static
5) takes over three weeks!!!!
– A failed link results in days of out-of-service time
– What do we want?
– We want applications, sensors, end-users, and instruments to initiate an end-to-end connection
– Resources held for short periods of time
– Automatic recovery: restoration and protection
– How do we as a community go from where we are today to what we really want?
– We need to use the MorphNet concept in the GLIF community: part of the infrastructure for vertical integration research, and the rest as production

Optical Control Plane Initiatives
Many global initiatives have been discussed at the "International Optical Control Plane for the Grid Community" workshops.

CALL FOR PAPERS: GridNets
Co-located with BroadNets, Boston, October 6th and 7th, 2005 (the Thursday and Friday before GGF in Boston!)

CALL FOR PAPERS: IEEE Communications Magazine feature topic
"Optical Control Plane for Grid Networks: Opportunities, Challenges and the Vision"
Guest editors: Gigi Karmous-Edwards and Admela Jukan
Manuscripts due: June 20, 2005

Topic: Conclusions

Conclusions
1. Control plane research is vital to meeting the needs of future-generation Grid computing, with a strong focus on "vertical integration".
2. Reconfigurability is essential to bring down cost and meet application requirements.
3. We need to understand what infrastructure exists for GLIF, and for how long.
4. Accounting and billing need to be understood and developed for this community.
5. Currently, we get a view of the behavior of potential future enterprise applications by focusing on the needs of big e-science applications, but it is also important to understand the requirements of industry.
6. Next-generation networks could be vastly different from today's mode of operation; research should not be constrained to today's model.
7. The research networks, not the carriers, are the ones that will take these bold steps; lessons learned should be applied to production quickly.
8. International collaboration is a key ingredient for the future of scientific discovery, and the optical network plays the most critical role in achieving it! International collaborative funding is necessary.