Lecture 8: Testbeds Anish Arora CIS788.11J Introduction to Wireless Sensor Networks Material uses slides from Larry Peterson, Jay Lepreau, and GENI.net.


2 References
Emulab: artifact-free, auto-configured, fully controlled
- A configurable Internet emulator
- 2001: 200 nodes, 500 wires, 2x BFS (switch)
- 2006: 350 PCs, 7 IXPs, 40 WANodes
PlanetLab: real environment
- 670 machines spanning 325 sites and 35 countries
- nodes within a LAN-hop of > 3M users
- supports distributed virtualization: each of 600+ network services runs in its own slice
GENI

3 Emulab philosophy
Live-network experimentation
- Achieves realism
- Surrenders repeatability
- e.g., MIT "RON" testbed, PlanetLab
Pure emulation
- Introduces controlled packet loss and delay
- Requires tedious manual configuration
Emulab approach
- Brings simulation's efficiency and automation to emulation
- Artifact-free environment
- Arbitrary workload: any OS, any "router" code, any program, for any user
- So the default resource allocation policy is conservative: allocate a full real node and link; no multiplexing; assume maximum possible traffic

4 Emulab
Allow the experimenter complete control, i.e., bare hardware with lots of tools for common cases
- OSes, disk loading, state management tools, IP, traffic generation, batch, ...
Virtualization of all experimenter-visible resources
- topology, links, software, node names, network interface names, network addresses
- allows swapin/swapout
Remotely accessible
Persistent state maintenance (in a database)
Separate control network
Configuration language: ns

5 Experiment Life Cycle
$ns duplex-link $A $B 1.5Mbps 20ms
Stages: Specification → Parsing → Global Resource Allocation → Node Self-Configuration → Experiment Control → Swap Out / Swap In
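A one-line link specification like the one above normally lives inside a complete ns file. A minimal sketch, assuming Emulab's `tb_compat.tcl` conventions (node and OS names here are illustrative, not from the slides):

```tcl
# Minimal Emulab-style experiment specification (illustrative).
set ns [new Simulator]
source tb_compat.tcl

# Two nodes; pin one to a particular OS image.
set A [$ns node]
set B [$ns node]
tb-set-node-os $A FBSD-STD

# The duplex link from the slide: bandwidth, delay, queue type.
$ns duplex-link $A $B 1.5Mb 20ms DropTail

$ns run
```

Submitting such a script kicks off the life cycle above: it is parsed, mapped onto physical resources, and the nodes self-configure.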

6 assign: Mapping Local Cluster Resources
Maps virtual resources to local nodes and VLANs
General combinatorial optimization approach to an NP-complete problem
- based on simulated annealing
- minimizes inter-switch links, number of switches, and other constraints
All experiments mapped in less than 3 seconds [100 nodes]
WANassign for mapping global resources (uses a genetic algorithm)
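The flavor of assign's simulated-annealing search can be shown with a toy sketch. This is not assign's actual cost function or move set; it only penalizes virtual links that cross between switches, under an assumed `switch_of` map:

```python
import math
import random

def map_topology(vnodes, vlinks, pnodes, switch_of, iters=20000, seed=0):
    """Toy simulated annealing: map each virtual node to a distinct
    physical node, minimizing the number of inter-switch virtual links."""
    rng = random.Random(seed)
    placement = dict(zip(vnodes, rng.sample(pnodes, len(vnodes))))
    free = [p for p in pnodes if p not in placement.values()]

    def cost(pl):
        # One penalty point per virtual link spanning two switches.
        return sum(1 for a, b in vlinks
                   if switch_of[pl[a]] != switch_of[pl[b]])

    cur, temp = cost(placement), 1.0
    for _ in range(iters):
        v, old = (x := rng.choice(vnodes)), None
        old = placement[v]
        if free and rng.random() < 0.5:
            new = rng.choice(free)          # move v to a free physical node
            placement[v] = new
            c = cost(placement)
            if c <= cur or rng.random() < math.exp((cur - c) / temp):
                free.remove(new); free.append(old); cur = c
            else:
                placement[v] = old          # reject: undo the move
        else:
            w = rng.choice(vnodes)          # swap two virtual nodes
            placement[v], placement[w] = placement[w], placement[v]
            c = cost(placement)
            if c <= cur or rng.random() < math.exp((cur - c) / temp):
                cur = c
            else:
                placement[v], placement[w] = placement[w], placement[v]
        temp *= 0.9995                      # cool the annealing schedule
    return placement, cur
```

On small inputs this converges almost instantly, which is consistent with the "under 3 seconds for 100 nodes" claim above, though the real tool handles far richer constraints (VLANs, node types, bandwidth).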

7 Frisbee: Disk Loading
Loads full disk images (bulk download)
Performance techniques:
- overlaps block decompression and device I/O
- uses a domain-specific algorithm to skip unused blocks
- delivers images via a custom reliable multicast protocol
Result (13 GB generic IDE 7200 rpm drives): was 20 minutes for a 6 GB image; now 88 seconds
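The first two techniques can be sketched together: a decompressor thread feeds a bounded queue while the writer seeks past unused regions. This is a toy pipeline, not Frisbee's actual image format or multicast protocol:

```python
import io
import queue
import threading
import zlib

def pipelined_image_write(chunks, out):
    """Overlap chunk decompression with device writes via a bounded queue.
    `chunks` is a list of (disk_offset, compressed_bytes); gaps between
    chunks are simply skipped, like Frisbee skipping free blocks."""
    q = queue.Queue(maxsize=4)

    def decompressor():
        for offset, blob in chunks:
            q.put((offset, zlib.decompress(blob)))  # CPU-bound stage
        q.put(None)                                  # sentinel: done

    t = threading.Thread(target=decompressor)
    t.start()
    while (item := q.get()) is not None:             # I/O-bound stage runs
        offset, data = item                          #  concurrently
        out.seek(offset)                             # skip unused blocks
        out.write(data)
    t.join()
```

Because the queue is bounded, neither stage races far ahead of the other, so decompression and disk I/O proceed in parallel rather than strictly in sequence.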

8 IDE planned for Emulab
Evolve Emulab to be the network-device-independent control and integration center for experimentation, research, development, debugging, measurement, data management, and archiving
- Collaboratory: Emulab's project abstraction
- Workbench: Emulab's experiment abstraction
- Device-independent: Emulab's built-in abstractions for all things network-related

9 Collaboratory
Subsystems
- Source repository: SourceForge, CVS, Subversion
- Datapository
- "My Wikis"
- Mailing list(s)
- Bug database
- Chat/IM, chatroom management
- Moodle?
Approach
- Transparently do authentication, authorization, and membership management: "single sign-on"
- Use a separate server for information and resource security and management
- Support flexible access policies: the default is project-private, but the project leader can change it per subsystem (private, public read-only, public read/write)

10 Experimentation Workbench
Four types:
- Workflow management (processes), including measurement and feedback steps and mandatory pipelines
- Experiment management
- Data management
- Analyses

11 Workbench: "Time Travel" and Stateful Swapout
Time travel of distributed systems for debugging
- Generalize disk image format and handling
- Periodic disk checkpointing
- Full state save on swapout
- Xen-based virtual machines
- Challenge: network state (packets in flight)
- Pragmatic approach: quiesce senders, flush buffers
Stateful swapout/swapin [easier]
- Allows transparent pre-emption of an experiment
Related to workbench: history, tree traversal
- Can share some mechanisms, some UI

12 PlanetLab: Requirements
1) It must provide a global platform that supports both short-term experiments and long-running services.
- services must be isolated from each other
- multiple services must run concurrently
- must support real client workloads
Key ideas
- Slices
- Virtualization: multiple architectures on a shared infrastructure
- Programmable: virtually no limit on new designs
- Opt-in on a per-user / per-application basis: attract real users; demand drives deployment/adoption

13 PlanetLab: Slices

14 Slices

15 Slices

16 User Opt-in
[figure: a client behind a NAT opting in to a server]

17 Virtualization: Per-Node View
Each node runs a Virtual Machine Monitor (VMM) hosting the Node Manager, an Owner VM, and slice VMs (VM 1, VM 2, ..., VM n)
The VMM is a Linux kernel (Fedora Core)
- + Vservers (namespace isolation)
- + Schedulers (performance isolation)
- + VNET (network virtualization)
Infrastructure services: auditing service, monitoring services, brokerage services, provisioning services

18 Global View
[figure: PLC coordinating the VMMs across many nodes]

19 Requirements
2) It must be available now, even though no one knows for sure what "it" is
- deploy what we have today, and evolve over time
- make the system as familiar as possible (e.g., Linux)
- accommodate third-party management services

20 Brokerage Service
A user calls BuyResources() at the broker; the broker contacts the relevant nodes and calls Bind(slice, pool) on each Node Manager (NM), binding the user's VM to resources from the pool; PLC acts as slice authority (SA) over the VMMs.

21 Requirements
3) Convince sites to host nodes running code written by unknown researchers from other organizations.
- protect the Internet from PlanetLab traffic
- must get the trust relationships right
- trusted intermediary: PLC
Trust relationships among Node Owner, PLC, and Service Developer (User):
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure

22 Requirements
4) Sustaining growth depends on support for site autonomy and decentralized control
- sites have final say over the nodes they host
- must minimize (eliminate) centralized control
Owner autonomy
- owners allocate resources to favored slices
- owners selectively disallow unfavored slices
Delegation
- PLC grants tickets that are redeemed at nodes
- enables third-party management services
Federation
- create "private" PlanetLabs using MyPLC
- establish peering agreements

23 Requirements
5) It must scale to support many users with minimal resources available
- expect the under-provisioned state to be the norm
- shortage of logical resources too (e.g., IP addresses)
Decouple slice creation and resource allocation
- given a "fair share" by default when created
- acquire additional resources, including guarantees
Fair share with protection against thrashing
- 1/Nth of CPU
- 1/Nth of link bandwidth: owner limits the peak rate, with an upper bound on the average rate (protects campus bandwidth)
- disk quota
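A peak-rate limit combined with an average-rate bound is exactly what a token bucket provides. A minimal sketch (parameters are illustrative; this is not claimed to be PlanetLab's actual enforcement mechanism):

```python
class TokenBucket:
    """Cap average throughput at `rate` bytes/s while permitting
    bursts of up to `burst` bytes (the peak allowance)."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def allow(self, nbytes, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes             # spend tokens; admit traffic
            return True
        return False                          # over budget; drop or delay
```

For example, a bucket with `rate=100` and `burst=200` lets a slice send a 200-byte burst immediately, but sustained traffic is held to 100 bytes/s, protecting the host site's uplink.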

24 Slice Creation
The PI calls SliceCreate() and SliceUsersAdd() at PLC (the slice authority, SA); a user or agent then calls GetTicket() and redeems the ticket with the slice creation service (plc.scs) on each node, whose Node Manager (NM) calls CreateVM(slice) on the VMM.
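The call sequence above can be traced with a toy model. All class names, method names, and the slice name below are hypothetical stand-ins for the slide's SliceCreate / SliceUsersAdd / GetTicket / CreateVM steps; real PlanetLab tickets are cryptographically signed, which this sketch omits:

```python
class PLC:
    """Toy slice authority: tracks slices, members, and issues tickets."""

    def __init__(self):
        self.slices = {}                        # slice name -> member set

    def slice_create(self, pi, name):
        self.slices[name] = set()               # SliceCreate(), by the PI

    def slice_users_add(self, name, user):
        self.slices[name].add(user)             # SliceUsersAdd()

    def get_ticket(self, user, name):
        if user not in self.slices.get(name, ()):
            raise PermissionError("user is not a member of this slice")
        return {"slice": name, "user": user}    # GetTicket(): unsigned toy


class NodeManager:
    """Toy per-node manager: redeems tickets and instantiates slice VMs."""

    def __init__(self, plc):
        self.plc, self.vms = plc, {}

    def redeem(self, ticket):
        name = ticket["slice"]                  # in real PlanetLab, plc.scs
        if name not in self.plc.slices:         #  verifies the signature
            raise PermissionError("unknown slice")
        self.vms[name] = f"VM for {name}"       # CreateVM(slice) on the VMM
```

Running the flow end to end: create the slice at PLC, add a user, fetch a ticket as that user, and redeem it at a node; a VM for the slice then exists on that node, while non-members are refused tickets.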

25 Combining PlanetLab and Emulab: "Pelab"
Motivation:
- PlanetLab (sort of) sees the "real Internet"
- But its hosts are hugely overloaded and unpredictable
- Internet and host variability: it takes many, many runs to get statistical significance, and experiments are hard to debug
- Emulab provides predictable, dedicated host resources and a controlled, repeatable environment
- But its network model is completely fake

26 Approach
The application runs on Emulab nodes
- Chosen PlanetLab nodes are peered with Emulab nodes
- Application-traffic generation and measurement stubs start up on PlanetLab
- These send real-time network conditions to Emulab
Develop and continuously run an adaptive PlanetLab path-condition monitor
- Pour results into the Datapository
- Use them for initial conditions or when the application goes idle on certain pairs

27 GENI Design
Key idea
- Slices embedded in a substrate of networking resources
Two central pieces
- Physical network substrate: an expandable collection of building-block components (nodes / links / subnets)
- Software management framework: knits building blocks together into a coherent facility and embeds slices in the physical substrate

28 National Fiber Facility

29 + Programmable Routers

30 + Clusters at Edge Sites

31 + Wireless Subnets

32 + ISP Peers MAE-West MAE-East

33 Closer Look
[figure: Internet backbone with wavelengths and backbone switches, dynamically configurable switches, customizable routers, an edge site, a wireless subnet, and a sensor network]

34 Summary of Substrate
Node components
- edge devices
- customizable routers
- optical switches
Bandwidth
- national fiber facility
- tail circuits (including tunnels)
Wireless subnets
- urban
- wide-area 3G/WiMax
- cognitive radio
- sensor net
- emulation

35 Management Framework
The GMC sits between management services and substrate components, providing:
- a name space for users, slices, & components
- a set of interfaces ("plug in" new components)
- support for federation ("plug in" new partners)

36 GENI Management Core (GMC)
Services built on the GMC: Slice Manager, Resource Controller, Auditing Archive
Each substrate component runs a component manager (CM) and virtualization software over the substrate hardware; the GMC exchanges node control and sensor data with the CMs