
Future Internet Architectures (FIA) And GENI


1 Future Internet Architectures (FIA) And GENI
Darleen Fisher, Program Director, Division of Computer & Network Systems, Directorate for Computer and Information Science and Engineering, National Science Foundation

2 Outline
FIA Vision
Future Internet Architecture (FIA) Projects
FIA Projects' Current Ideas about Using GENI
Going Forward: What might be next for FIA & GENI?

3 Future Internet – The Vision
Society's needs for an IT infrastructure may no longer be met by the present trajectory of incremental changes to the current Internet. Society needs the research community to create trustworthy Future Internets that meet the needs and challenges of the 21st Century. Research should include intellectually distinctive ideas, driven by the requirement for long-range concepts unfettered by the current Internet's limitations or by the requirement for immediate applicability to it. Architecture includes all needed functionalities (overarching architecture). Research on Future Internets creates a community better informed and educated about network architecture design.

4 Future Internet Architectures (FIA)
NSF issued a call for proposals to support innovative and creative projects that conceive, design, and evaluate trustworthy Future Internet architectures. Four projects were awarded, forming a diverse portfolio (smaller projects are under consideration and are expected to submit to NeTS Large).

5 FIA Projects and their View of the Future
MobilityFirst: The future is mobile (cellular, wireless sensors, machine-to-machine, smart grid & vehicular nets integrate physical-world awareness and control into Internet applications)
NEBULA: A 24x7 utility for secure communication, computation, and storage
Named Data Networking (NDN): Content is the future driver
eXpressive Internet Architecture (XIA): Design for evolution of usage (host-host, content retrieval, services) and technology (link, storage, computing)

6 MobilityFirst
Principal Investigator: Dipankar Raychaudhuri, Rutgers
Collaborating Institutions: Duke Univ., Massachusetts Institute of Technology, Univ. of Massachusetts/Amherst, Univ. of Massachusetts/Lowell, Univ. of Michigan, Univ. of Nebraska/Lincoln, Univ. of North Carolina/Chapel Hill, Univ. of Wisconsin/Madison
Underlying architectural principles:
Mobility is the norm, without gateways or overlay accommodations.
The architecture uses generalized delay-tolerant networking (GDTN) to provide robustness even in the presence of link/network disconnections.
GDTN is integrated with self-certifying public key addresses to provide an inherently trustworthy network.
Wired networks are a special case.
Key themes: mobility, path disruption tolerance (path redundancy), delay-tolerant networking
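The self-certifying addresses mentioned above can be illustrated with a minimal sketch: an endpoint's identifier is derived by hashing its public key, so any node can check that a claimed identifier matches the key presented, without a trusted third party. The key material and naming below are illustrative assumptions, not MobilityFirst's actual identifier format.

```python
import hashlib

def self_certifying_guid(public_key_bytes: bytes) -> str:
    """Derive a flat identifier (GUID) as the SHA-256 hash of a public key.

    Anyone holding the public key can recompute the GUID and confirm
    the binding; no certificate authority is needed.
    """
    return hashlib.sha256(public_key_bytes).hexdigest()

def verify_binding(claimed_guid: str, public_key_bytes: bytes) -> bool:
    """Check that a claimed GUID really is the hash of the presented key."""
    return claimed_guid == self_certifying_guid(public_key_bytes)

# Illustrative use with stand-in "public key" bytes (a real system would
# use an actual cryptographic public key, e.g. RSA or Ed25519).
key = b"example-public-key-material"
guid = self_certifying_guid(key)
assert verify_binding(guid, key)
assert not verify_binding(guid, b"some-other-key")
```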

7 M-F Overview - Component Architecture
(Diagram: layered component architecture)
Application Services: Location Service, Other Application Services; Context Addressing, Content Addressing, Host/Entity Addressing
Encoding/Certifying Layer
Network-Support Services: Global Name Resolution Service (GNRS); Storage Aware Routing (STAR); Context-Aware / Late-bind Routing; Management (link state and path measurements)
Core Network Services: Locator-X Routing (e.g., GUID-based); IP Routing (DNS, BGP, IGP); Monitor, Diagnosis and Control

8 Named Data Networking (NDN)
Principal Investigator: Lixia Zhang, UCLA
Collaborating Institutions: Colorado State University, PARC, Univ. of Arizona, Univ. of Illinois/Urbana-Champaign, UC Irvine, Univ. of Memphis, UC San Diego, Washington Univ., and Yale Univ.
Underlying architectural principles:
Content is what users and applications care about; by naming data rather than locations, data becomes a first-class entity.
Packets indicate what (content), not who/where (IP address); a packet is a <name, data, signature> triple.
Securing named data potentially allows trust to be more user-centric.
Retain the hourglass in the architecture; separate routing and forwarding.
Key themes: content distribution, policy enforcement, attribution

9 Named Data Networking (NDN)
The architecture retains the hourglass shape, changing the thin waist from IP addresses to data names. Always retrieve data from the closest copy on a path to the source; use router memory for intrinsic multicast distribution. IP addresses name locations; retrieving data by names eliminates a fundamental hurdle in mobility support and facilitates new application development in sensor networking. Per-packet signatures provide robust security. The new strategy layer enables intelligent data delivery via broadcast, multicast, and multiple paths.
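The forwarding model described above can be sketched as a toy: an Interest for a name is answered from a router's Content Store if a copy is cached, and otherwise is recorded in the Pending Interest Table (PIT) and forwarded along the FIB entry with the longest matching name prefix. This is a simplified illustration, not the CCNx/NDN implementation; the class and names below are invented for the sketch.

```python
class NdnNode:
    """Toy NDN-style forwarder with Content Store, PIT, and FIB."""

    def __init__(self, fib):
        self.content_store = {}   # name -> data (cached copies)
        self.pit = {}             # name -> set of requesting faces
        self.fib = fib            # name prefix -> outgoing face

    def longest_prefix_face(self, name):
        """Pick the FIB face whose prefix is the longest match for name."""
        best = None
        for prefix, face in self.fib.items():
            if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
                best = (prefix, face)
        return best[1] if best else None

    def receive_interest(self, name, face):
        if name in self.content_store:            # satisfied from cache
            return ("data", self.content_store[name])
        self.pit.setdefault(name, set()).add(face)
        return ("forward", self.longest_prefix_face(name))

    def receive_data(self, name, data):
        self.content_store[name] = data           # cache for later Interests
        return self.pit.pop(name, set())          # faces waiting for this name

node = NdnNode(fib={"/ucla/video": "face1", "/parc": "face2"})
action, face = node.receive_interest("/ucla/video/frame1", face="clientA")
# The Interest goes toward the longest-prefix FIB match; once the Data
# arrives it is cached, so a later Interest hits the Content Store.
```

Because the returning Data is cached at every node it traverses, a second requester on the same path is served locally, which is the "intrinsic multicast distribution" the slide refers to.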

10 NEBULA
Principal Investigator: Jonathan Smith, Univ. of Penn.
Collaborating Institutions: Cornell Univ., Massachusetts Institute of Technology, Princeton Univ., Purdue Univ., Stanford Univ., Stevens Institute of Technology, Univ. of California/Berkeley, Univ. of Delaware, Univ. of Illinois/Urbana-Champaign, Univ. of Texas, Univ. of Washington
Underlying architectural principles:
An always-on utility in which cloud computing data centers are the primary repositories of data and the primary locus of computation; storage, computation, and applications move into the "cloud".
Data centers are connected by a high-speed, extremely reliable and secure backbone network, with parallel paths between data centers and the core.
Secure access and transit; policy-based path selection and authentication during connection establishment.
Key themes: data plane / control plane separation and redundancy, virtualization, a fast token-based data plane with a feature-rich control plane

11 NEBULA Architecture
NDP (NEBULA Data Plane): distributed multiple-path establishment and policy enforcement
NVENT (NEBULA Virtual and Extensible Networking Technologies): extensible control plane
Ncore (NEBULA Core): redundantly connected, high-availability routers
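The "fast, token-based data plane" idea can be sketched as follows: during connection establishment the control plane approves a path and issues a token bound to it (here, a keyed HMAC over the path), and each data-plane router forwards a packet only if its token verifies. The token format and key handling below are illustrative assumptions, not NDP's actual protocol.

```python
import hashlib
import hmac

CONTROL_PLANE_KEY = b"shared-secret"  # illustrative; real key management differs

def issue_path_token(path):
    """Control plane approves a path and binds a token to it."""
    msg = "->".join(path).encode()
    return hmac.new(CONTROL_PLANE_KEY, msg, hashlib.sha256).hexdigest()

def data_plane_forward(path, token):
    """A router forwards only packets carrying a token that matches the path."""
    expected = issue_path_token(path)
    return hmac.compare_digest(expected, token)

approved = ["dc-east", "core-1", "dc-west"]
token = issue_path_token(approved)
assert data_plane_forward(approved, token)          # policy-compliant path
assert not data_plane_forward(["dc-east", "rogue", "dc-west"], token)
```

The appeal of this split is that the per-packet check is a cheap hash verification, while all policy reasoning stays in the feature-rich control plane.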

12 eXpressive Internet Architecture (XIA)
Principal Investigator: Peter Steenkiste, Carnegie Mellon Univ.
Collaborating Institutions: Boston Univ., Univ. of Wisconsin/Madison
Underlying architectural principles:
XIA supports communication between today's communicating principals, including hosts, content, and services, while accommodating unknown future principals.
For each type of principal, XIA defines a narrow waist that dictates the application programming interface (API) for communication and the network communication mechanisms.
XIA enables flexible context-dependent mechanisms for establishing trust between the communicating principals.
Key themes: API-centric approach (narrow waist); path decisions based on principal type (host, content, service); trustworthy computing; store-and-forward
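XIA's per-principal narrow waist can be illustrated with a toy dispatcher: a destination names a list of principal-type alternatives (content, service, host), and a router that understands a type handles it natively, while one that does not falls back to a later alternative such as a plain host address. This is a simplified sketch of the fallback idea, not XIA's actual DAG-based address encoding.

```python
# Toy XIA-style destination: an ordered list of (principal_type, identifier)
# alternatives. A router uses the first type it natively supports, so new
# principal types can be deployed while legacy routers fall back to hosts.

def route(destination, supported_types):
    """Return the first (type, id) alternative the router can handle."""
    for ptype, ident in destination:
        if ptype in supported_types:
            return ptype, ident
    raise ValueError("no supported principal type in destination")

dest = [("content", "hash-of-chunk"), ("host", "server-42")]

# A content-aware router serves the content intent directly...
assert route(dest, {"content", "host"}) == ("content", "hash-of-chunk")
# ...while a legacy router falls back to the host address.
assert route(dest, {"host"}) == ("host", "server-42")
```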

13 XIA Components and Interactions

14 FIA Projects' Current Ideas about Using GENI
The projects just began on September 1, 2010. They are at different levels of maturity, as are their plans for experimentation and for how they might use GENI.

15 Potential use of GENI in NEBULA*
GENI Technology:
Enables experiments involving multiple sites
Isolates NEBULA experiments to a single VLAN
Eliminates need for special HW & address translation
Potential Uses:
Multisite student collaboration on Ncore (NEBULA Core)
Testbed for NDP (NEBULA Data Plane) experiments
Platform for NVENT (NEBULA extensible control plane)
* No GENI-enabled switches on NEBULA campuses, so these are preliminary thoughts

16 XIA Testbed Requirements
Run fairly large, geographically diverse experiments (several tens or more nodes)
High-speed packet processing platform; evaluating OpenFlow, since XIA is very different from IP
Diverse access network technologies: evaluate XIA over diverse networks using applications
Short learning curve for students: avoid time sinks that take time away from research; essential for undergraduate and MS student participation

17 NDN Experimental Infrastructure
Pervasive/mobile computing: "infrastructure-less" testbeds with embedded hardware; real-world settings for Internet-of-Things scenarios
Open Network Lab (ONL): controlled small-scale experiments, especially forwarding
NDN Overlay Testbed on the public Internet: live application testing/use under realistic conditions; routing and incremental deployment
PlanetLab: large-scale experiments
Supercharged PlanetLab Platform (SPP) nodes: high-performance CCNx/NDN forwarding
Speaker notes: The infrastructure-less testbeds use lightweight wireless devices. Driving applications include UCLA's lighting work (Estrin and Jeff Burke) and Tarek Ab's applications; these previously ran on IP testbeds, each with its own ugly middleware. The hypothesis is that NDN is better, so they are being reimplemented in NDN. ONL provides racks of PCs and switches; a GUI is used to design a topology and establish VLANs, supporting experiments of up to 40 machines to verify the performance of the NDN architecture. The key operation is name lookup for forwarding Interest requests; a novel hash table and algorithm are expected to improve performance and can be implemented and evaluated in real software/hardware. In the overlay testbed, each institution puts up 1-2 machines running the reference implementation with all routes populated, which is useful for evaluating routing protocols. Once routing ideas are vetted in the overlay, scope can be increased using PlanetLab. The SPPs are programmable routers whose internals match commercial routers, with software implementations of layer 3 and Gigabit Ethernet interfaces; experimenters can run their own layer 3 at up to 5 Gb/s, though that bandwidth is not available between nodes (it could go faster).

18 NDN and GENI
Using SPP nodes: initial software running on 5 nodes now (lead: Patrick Crowley)
No other clear needs identified yet
Possibilities:
Large numbers of nodes with significant topology control, including local broadcast
Running natively over something other than IP
An NDN "PlanetLab"
(Map: NDN participating institutions and deployed SPP nodes; labels include Salt Lake City, Yale, PARC, Washington DC, Kansas City, ColoState, UIUC, UCLA, UCI, WashU, CAIDA/UCSD, Arizona, Memphis, Atlanta, Houston)
Speaker notes: 5 SPP nodes are part of and connected by GENI. By taking advantage of Ethernet broadcast in a LAN, NDN can provide data to all who want it, whereas IP has a problem with broadcast. On the NDN "PlanetLab": Larry has released the PlanetLab node and management software, so one can create one's own PlanetLab (Europe and Asia do this with the PlanetLab Central management software for accounts, statistics, reporting, and error data); the SPPs could be controlled by PlanetLab Central.

19 MobilityFirst Phased Approach
(Diagram: prototyping status of components)
Components: content addressing stack, host/device addressing stack, context addressing stack, encoding/certifying layer, Global Name Resolution Service (GNRS), storage-aware routing, Locator-X routing (e.g., GUID-based), context-aware / late-bind routing
Evaluation platforms by phase: simulation/emulation, then emulation/limited testbed, then testbed/'live' deployment
Prototyping status: standalone components, then cross-layer integration, then deployment-ready

20 Phase1: Global Name Resolution Service (GNRS) Evaluation - ProtoGENI Mapping
Phase 1 evaluation of distributed network services, e.g. GNRS
Backbone bandwidth and delay representative of the Internet core
Edge substrates interconnected via the backbone
Required testbed infrastructure: ProtoGENI nodes, OpenFlow switches, GENI Racks, ORBIT node clients
(Diagram: clients in three domains attached through PoP routers to the backbone)
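The GNRS under evaluation maps a flat GUID to its current network attachment points. A plausible sketch of the idea, assuming a hashing scheme chosen here purely for illustration rather than the actual GNRS design, is to hash the GUID to pick which resolver server stores the mapping, so any node can compute where a binding lives without a directory lookup:

```python
import hashlib

class Gnrs:
    """Toy global name resolution service: GUID -> network addresses."""

    def __init__(self, num_servers):
        self.servers = [dict() for _ in range(num_servers)]

    def _home(self, guid):
        """Hash the GUID to the index of the server that stores its mapping."""
        digest = hashlib.sha256(guid.encode()).hexdigest()
        return int(digest, 16) % len(self.servers)

    def update(self, guid, addresses):
        """A mobile device re-registers its addresses after moving."""
        self.servers[self._home(guid)][guid] = list(addresses)

    def lookup(self, guid):
        return self.servers[self._home(guid)].get(guid, [])

gnrs = Gnrs(num_servers=8)
gnrs.update("guid-1234", ["net-A:host-7"])
gnrs.update("guid-1234", ["net-B:host-3"])   # device moved to a new network
assert gnrs.lookup("guid-1234") == ["net-B:host-3"]
```

Separating the identifier (GUID) from its current locators this way is what lets routing keep up with mobility: only the GNRS mapping changes when a device moves.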

21 Phase 1: Wireless/Mobile Edge Substrate
Phase 1 evaluation of storage-aware routing in the edge network
Network: ad hoc, multiple wireless technologies (WiFi, WiMAX)
Evaluate routing with mobility, handoff, multi-homing
(Diagram: a single wireless domain with WiMAX and WiFi access points, a cell tower, an ad hoc network, and a multi-homed device moving and handing off between them)
Required testbed infrastructure: GENI WiMAX, ORBIT grid & campus net, DOME/DieselNET WiNGS

22 Phase 1: GENI WiMAX & ORBIT Testbeds
Multi-radio indoor and outdoor nodes (WiMAX, WiFi); Linux-based Click implementation of routing protocols

23 Phase 2: Core + Edge Evaluations
Multi-site experiments with both (wired) core and (wired + wireless) edge networks
Evaluate: core-to-edge routing; cross-layer interaction between GNRS and routing services; in-core router storage resources in STAR routing
Required testbed infrastructure: GENI WiMAX/OpenFlow campus nets, ORBIT, ProtoGENI (1 Gbps links)

24 Phase 3: Live Edge-Core-Edge Deployment
Deployment target: large scale, multi-site; mobility-centric; realistic, live
Mapping onto GENI infrastructure: ProtoGENI nodes, OpenFlow switches, GENI Racks, WiMAX/outdoor ORBIT nodes, DieselNet bus, etc.; full MF stack at routers, base stations, etc.
(Diagram: OpenFlow backbones over Internet2 and National Lambda Rail with ShadowNet, connecting wireless edge domains (4G & WiFi) with intra-domain and inter-domain mobility and opt-in users)

25 Going Forward
FIA team members continue to participate in GENI
A GENI-FIA workshop? Possible participants: FIA testbed/experimentalist representatives, GENI GPO representatives, working group representatives, and other researchers working on architecture projects
Other ideas?

