Network Development and Grid Requirements
Gigi Karmous-Edwards, E-IRG, May 13th, 2005



Agenda
- E-science and their requirements on the Network
- Today's Research Network Infrastructure
- Optical Control Plane
- Grid Computing Optical Control Plane Research
- Conclusions

Topic E-science and their requirements on the Network

What do we mean when we say E-science application?
- Big E-science applications: a new generation of applications that combines scientific instruments, distributed data archives, sensors, and computing resources to solve complex scientific problems.
- Characteristics:
  i) very large data sets (terabytes, petabytes, etc.)
  ii) high-end computing resources (teraflops, supercomputers, cluster computing, etc.)
  iii) remote instrumentation and sensors for data collection
  iv) powerful visualization tools for analysis
  v) sometimes highly dynamic

Advances in Optical Technologies
- Dark fiber everywhere; fiber is much cheaper. US headlines: companies giving away dark fiber!
  - RONs buy their own fiber and operate it without the big Bell companies
  - AT&T made 8,000 miles of dark fiber available at no cost to SURA
- All-optical switches are getting faster and smaller (ns switch reconfiguration)
- Layer-one optical switches are relatively cheaper than other technologies
- Fiber, optical impairment control, and transceiver technology continue to advance while prices fall
- 10 Gig capacity

Global E-science Grid Network Requirements
- High-bandwidth pipes over very long distances: terabyte transfers, petabyte, etc.
- Network resources coordinated with other vital Grid resources: CPU and storage
- Advance reservation of networking resources
- Deterministic end-to-end connections: low jitter, low latency
- Applications requesting optical networking resources: host-to-host connections on demand
- Near-real-time feedback of network performance measurements to the applications and middleware
- Exchange of data with sensors, potentially via other physical resources
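To put the data volumes above in perspective, a small back-of-the-envelope sketch of transfer times over a dedicated 10 Gb/s wave (an idealized lightpath with no protocol overhead; the figures are illustrative, not from the slides):

```python
# Rough transfer-time arithmetic for the data volumes quoted above.
# Assumes an idealized, fully dedicated lightpath with no protocol overhead.

def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Seconds to move data_bytes over a link of link_gbps (10^9 bits/s)."""
    return (data_bytes * 8) / (link_gbps * 1e9)

TB = 1e12  # terabyte (decimal)
PB = 1e15  # petabyte (decimal)

# 1 TB over a 10 Gb/s wave: 800 seconds (~13 minutes) at best.
print(f"1 TB @ 10G: {transfer_time_seconds(TB, 10):.0f} s")

# 1 PB over the same wave: over 9 days -- which is why network capacity
# must be coordinated and scheduled like CPU and storage.
print(f"1 PB @ 10G: {transfer_time_seconds(PB, 10) / 86400:.1f} days")
```

Even under ideal conditions a petabyte monopolizes a 10G wave for more than a week, motivating the advance-reservation requirement above.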

Current Problem Space
- Grid applications requiring network determinism, interactivity, computational steering
  - Routed networks cannot provide determinism: queues, fairness, load changes
  - Solution: dedicated connection
- Large file transfer over long distances
  - Transport protocol behavior results in very slow transfer for large data sets
  - Routed networks are expensive
  - Solution: dedicated connection and modified transport protocols
    - Lots of work in this area (FAST, GTP, etc.)
- Grid applications requiring near-real-time reaction to events and changing environments
  - GridLAB apps; creation of a virtual compute system
  - Solution: fast connection setup
- Is connection-oriented technology the only solution?
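The "very slow transfer" point can be illustrated with the well-known Mathis et al. approximation for loss-limited TCP throughput (the specific MSS, RTT, and loss numbers below are illustrative assumptions, not from the slides):

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis approximation: throughput ~ 1.22 * MSS / (RTT * sqrt(p))."""
    bits_per_s = mss_bytes * 8 * 1.22 / (rtt_s * math.sqrt(loss_rate))
    return bits_per_s / 1e6

# Transcontinental routed path: 1460-byte MSS, 100 ms RTT,
# one loss per 10^5 packets.
print(f"{mathis_throughput_mbps(1460, 0.100, 1e-5):.0f} Mb/s")
```

Under these assumptions a single standard TCP flow reaches only about 45 Mb/s regardless of link capacity, which is why the slide points to dedicated connections plus modified transport protocols (FAST, GTP, etc.).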

Challenges of Next-gen E-science
Highly dynamic astrophysics simulations:
- Coordination of all three resources (CPU, storage, network) based on near-real-time availability
- Near-real-time simulation results require spawning more simultaneous simulations running on other clusters
- Large amounts of data need to be transferred to available Grid resources (namely clusters with enough storage capacity as well)
- Re-adjusting resource usage based on near-real-time monitoring information

Astrophysics Scenario (diagram)
A highly dynamic scenario across many Grid clusters: a simulation is initialized on one cluster; some clusters are not available initially or lack sufficient memory; a second simulation is spawned on another cluster, with real-time remote visualization.

Feedback Loop (diagram)
User policy governs a feedback loop between the application, Grid resources, the network, and the network control plane.

Topic Today’s Research Network Infrastructure

NLR Footprint & PoP Types – Phase 1 and 2 (map)
Legend:
- NLR WaveNet, FrameNet & PacketNet PoP
- NLR WaveNet & FrameNet PoP
- NLR WaveNet PoP
- NLR-owned fiber; managed waves
- PoP serving as a member's primary connection point ("MetaPoP")
- PoP needed for signal regeneration, also usable as a member's secondary connection
- PoP established by NLR for members' regional needs
- PoP established at exchange points
PoP sites: BAT, SAN, STA, CHI, SLC, HOU, DAL, SYR, TUL, PEN, ELP, KAN, PHO, LAX, ALB, PIT, WDC, OGD, BOI, CLE, ATL, POR, RAL, NYC, SAA, DEN, SVL, SEA, JAC

GLIF Control Plane and Grid Integration Working Group
Mission: to agree on the interfaces and protocols that talk to each other on the control planes of the contributed lambda resources. People working in this field already meet regularly in conjunction with other projects, notably the NSF-funded OptIPuter and MCNC control-plane initiatives.
Several key areas we need to focus on:
- Define and understand real operational scenarios
- Define a set of basic services:
  - precise definitions
  - developing semantics the whole community agrees to
- Interdomain exchange of information:
  - determine what information needs to be monitored
  - how to abstract monitored information for sharing
- Determine which existing standards are useful vs. where Grid requirements are unique and call for new services and concepts:
  - how do we standardize mechanisms and protocols that are unique to the Grid community
- Define a Grid control plane architecture
- Work closely with E-science applications to provide vertical integration

Global Lambda Integrated Facility World Map – December 2004
Predicted international Research & Education Network bandwidth, to be made available for scheduled application and middleware research experiments by December 2004. Visualization courtesy of Bob Patterson, NCSA.

Topic Optical Control Plane

One Definition of Control Plane
"Infrastructure and distributed intelligence that controls the establishment and maintenance of connections in the network, including protocols and mechanisms to disseminate this information; and algorithms for engineering an optimal path between end points." (draft-ggf-ghpn-opticalnets-1)

Another Definition of Optical Control Plane
Moving centralized network management functions (FCAPS) down to the network elements in a distributed manner:
- Speeds up reaction time for most functions
- Reduces operational time and costs
- Allows the optical network to be more agile
- Interacts with Grid middleware

Topic Grid Computing Optical Control Plane Research

Where Does the Network Fit In?
- Application accesses the control plane to initiate/delete connections
- Network resources coordinated with other vital Grid resources (CPU and storage): control plane monitoring exchanges information with Grid middleware
- Advance reservation of networking resources: the Grid scheduler (middleware) interacts with the control plane
- Applications requesting optical networking resources, host-to-host connections: applications interacting with the control plane (this is not done today)
- Very dynamic use of end-to-end networking resources: feedback loop between control plane and application
- Near-real-time feedback of network performance measurements to the applications and middleware, to be used by the control plane
- Interoperation across global Grid networks: network interdomain protocols for Grid infrastructure rather than between operators
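The application/control-plane interaction described above can be sketched as pseudocode-style Python. All names here (ControlPlaneClient, request_lightpath, etc.) are invented for illustration; the slides explicitly note that no such standard application-facing interface existed at the time:

```python
# Hypothetical sketch of an application driving the optical control plane:
# connection setup/teardown plus performance feedback to middleware.
# Class and method names are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Lightpath:
    src: str
    dst: str
    gbps: int
    active: bool = True

class ControlPlaneClient:
    def __init__(self) -> None:
        self._paths: list[Lightpath] = []

    def request_lightpath(self, src: str, dst: str, gbps: int) -> Lightpath:
        """Application-initiated, host-to-host connection setup."""
        path = Lightpath(src, dst, gbps)
        self._paths.append(path)
        return path

    def release(self, path: Lightpath) -> None:
        """Application-initiated teardown when the transfer completes."""
        path.active = False

    def performance_feedback(self) -> dict:
        """Near-real-time measurements fed back to applications/middleware."""
        return {"active_paths": sum(p.active for p in self._paths)}

cp = ControlPlaneClient()
path = cp.request_lightpath("cluster-a", "cluster-b", gbps=10)
print(cp.performance_feedback())
cp.release(path)
```

The point of the sketch is the feedback loop: the same client object that sets up connections also exposes monitoring data, mirroring the slide's pairing of provisioning with near-real-time measurement.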

Optical Control Plane Research Areas
- Advanced optical technology architectures: OPS, OBS
- Optical connection signaling and provisioning
- Optical-layer recovery (protection and restoration)
- Layer interactions: optical layer interacting with the transport protocol layer
- Optical network performance monitoring, metrics, and analysis
- Resource availability monitoring (network, CPU, storage)
- Security: AAA, resource discovery
- Topology state information dissemination
- Intra-domain and inter-domain routing
- Centralized vs. distributed control functionality
- OGSA integration and Web services
- Interaction and coordination with other Grid resources: CPU, storage
- Advance resource reservation
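Advance resource reservation, the last research area above, reduces to calendar-style bookkeeping on each resource. A minimal sketch, assuming a single wavelength booked in time windows (the data model is invented for illustration):

```python
# Minimal advance-reservation sketch for one network resource (a wavelength):
# a Grid scheduler books time windows and rejects overlapping requests.

class WavelengthCalendar:
    def __init__(self) -> None:
        self._bookings: list[tuple[float, float]] = []  # (start, end) in seconds

    def reserve(self, start: float, end: float) -> bool:
        """Book the half-open window [start, end) unless it overlaps a booking."""
        for s, e in self._bookings:
            if start < e and s < end:  # intervals overlap
                return False
        self._bookings.append((start, end))
        return True

cal = WavelengthCalendar()
print(cal.reserve(0, 3600))        # first transfer window granted
print(cal.reserve(1800, 5400))     # overlapping request refused
print(cal.reserve(3600, 7200))     # back-to-back window granted
```

A real scheduler would run this check per wavelength and co-reserve CPU and storage windows at the endpoints, per the coordination requirement above.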

Optical Control Plane Initiatives
Integration of advanced optical technologies:
- Introduction of all-photonic architectures: OBS, OPS
- Experimentation with advanced photonics: high-speed all-optical switches, optical 3R, optical header recognition, new fiber technologies, etc.
A well-designed optical control plane architecture should accommodate advancing optical technologies.

Topic Conclusions

Conclusions
1. Network research is vital to meeting the needs of future-generation Grid computing, with a strong focus on "vertical integration".
2. Reconfigurability is essential to bring down cost.
3. Funding policy for joint infrastructure is critical.
4. Accounting and billing need to be developed for this community.
5. Interdisciplinary research is vital, from the application down to the network: vertical integration.
6. Currently, we have a view of the behavior of potential future enterprise applications by focusing on the needs of big E-science applications, but it is also important to understand the requirements of industry.
7. Next-generation networks could be vastly different from today's mode of operation; research should not be constrained to today's model.
8. Cost should be part of the equation but NOT the highest priority for federally funded research; industry will do that anyway.
9. International collaboration is a key ingredient for the future of scientific discovery, and the optical network plays the most critical role in achieving this! International collaborative funding is necessary.