
1 Lecture 8: Testbeds
Anish Arora
CIS788.11J Introduction to Wireless Sensor Networks
Material uses slides from Larry Peterson, Jay Lepreau, and GENI.net

2 References
Emulab: artifact-free, auto-configured, fully controlled
  A configurable Internet emulator
  2001: 200 nodes, 500 wires, 2x BFS (switch)
  2006: 350 PCs, 7 IXPs, 40 wide-area (WAN) nodes, 27+ 802.11 nodes
PlanetLab: real environment
  670 machines spanning 325 sites and 35 countries
  nodes within a LAN hop of > 3M users
  supports distributed virtualization: each of 600+ network services runs in its own slice
GENI

3 Emulab philosophy
Live-network experimentation
  Achieves realism
  Surrenders repeatability
  e.g., MIT "RON" testbed, PlanetLab
Pure emulation
  Introduces controlled packet loss and delay
  Requires tedious manual configuration
Emulab approach
  Brings simulation's efficiency and automation to emulation
  Artifact-free environment
  Arbitrary workload: any OS, any "router" code, any program, for any user
  So the default resource allocation policy is conservative: allocate a full real node and link, no multiplexing, assume maximum possible traffic

4 Emulab
Allows the experimenter complete control, i.e., bare hardware with lots of tools for common cases
  OSes, disk loading, state management tools, IP, traffic generation, batch, ...
Virtualization of all experimenter-visible resources
  topology, links, software, node names, network interface names, network addresses
  Allows swap-in/swap-out
Remotely accessible
Persistent state maintenance (in a database)
Separate control network
Configuration language: ns

5 Experiment Life Cycle
Specification in ns, e.g.: $ns duplex-link $A $B 1.5Mbps 20ms
Stages: Specification → Parsing → Swap In (Global Resource Allocation, Node Self-Configuration) → Experiment Control → Swap Out

6 assign: Mapping Local Cluster Resources
Maps virtual resources to local nodes and VLANs
General combinatorial optimization approach to an NP-complete problem
Based on simulated annealing
Minimizes inter-switch links, number of switches, and other constraints
All experiments mapped in less than 3 seconds [100 nodes]
WANassign for mapping global resources (uses a genetic algorithm)
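Below is a minimal Python sketch of the simulated-annealing idea behind assign, under assumed inputs: a tiny virtual topology, physical nodes labelled by the switch they hang off, and a made-up cost that penalizes inter-switch links and the number of switches used. It illustrates the optimization loop only; it is not Emulab's actual assign implementation.

```python
import math
import random

# Hypothetical inputs: virtual links to embed, and physical nodes labelled by switch.
virtual_links = [("v0", "v1"), ("v1", "v2"), ("v0", "v2")]
physical_switch = {"pc1": "sw-A", "pc2": "sw-A", "pc3": "sw-B", "pc4": "sw-B"}

def cost(mapping):
    """Penalize virtual links crossing switches, plus the number of switches used."""
    inter_switch = sum(1 for a, b in virtual_links
                       if physical_switch[mapping[a]] != physical_switch[mapping[b]])
    switches_used = len({physical_switch[p] for p in mapping.values()})
    return 10 * inter_switch + switches_used

def anneal(steps=10000, temp=5.0, cooling=0.999):
    vnodes = sorted({v for link in virtual_links for v in link})
    pnodes = list(physical_switch)
    mapping = dict(zip(vnodes, random.sample(pnodes, len(vnodes))))
    best, best_cost = dict(mapping), cost(mapping)
    for _ in range(steps):
        free = [p for p in pnodes if p not in mapping.values()]
        if not free:
            break
        # Propose a move: remap one virtual node onto a currently unused physical node.
        candidate = dict(mapping)
        candidate[random.choice(vnodes)] = random.choice(free)
        delta = cost(candidate) - cost(mapping)
        # Accept improvements always; accept worse mappings with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            mapping = candidate
            if cost(mapping) < best_cost:
                best, best_cost = dict(mapping), cost(mapping)
        temp *= cooling
    return best, best_cost

print(anneal())  # e.g. a mapping placing linked virtual nodes on the same switch, plus its cost
```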

7 Frisbee: Disk Loading
Loads full disk images (bulk download)
Performance techniques:
  Overlaps block decompression and device I/O
  Uses a domain-specific algorithm to skip unused blocks
  Delivers images via a custom reliable multicast protocol
Result on 13 GB generic IDE 7200 rpm drives: loading a 6 GB image took 20 minutes, now 88 seconds
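The following Python sketch illustrates the first technique, overlapping block decompression with device I/O, using a producer thread and a bounded queue, and skipping unused blocks. The chunk format, file paths, and single-receiver setup are invented for illustration; the real Frisbee client, image format, and multicast protocol differ.

```python
import queue
import threading
import zlib

def decompress_chunks(compressed_chunks, out_q):
    """Producer: decompress image chunks and hand (offset, data) to the disk writer."""
    for offset, blob in compressed_chunks:
        if blob is None:              # unused/free block region: skip it entirely
            continue
        out_q.put((offset, zlib.decompress(blob)))
    out_q.put(None)                   # sentinel: no more chunks

def write_to_disk(device_path, out_q):
    """Consumer: seek and write while the producer keeps decompressing in parallel."""
    with open(device_path, "r+b") as dev:
        while True:
            item = out_q.get()
            if item is None:
                break
            offset, data = item
            dev.seek(offset)
            dev.write(data)

# Example with a regular file standing in for a raw disk device.
chunks = [(0, zlib.compress(b"A" * 4096)), (8192, None), (16384, zlib.compress(b"B" * 4096))]
with open("/tmp/fake-disk.img", "wb") as f:
    f.truncate(32768)
q = queue.Queue(maxsize=8)            # bounded, so decompression cannot run far ahead
producer = threading.Thread(target=decompress_chunks, args=(chunks, q))
producer.start()
write_to_disk("/tmp/fake-disk.img", q)
producer.join()
```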

8 IDE planned for Emulab
Evolve Emulab to be the network-device-independent control and integration center for experimentation, research, development, debugging, measurement, data management, and archiving
  Collaboratory: Emulab's project abstraction
  Workbench: Emulab's experiment abstraction
  Device-independent: Emulab's built-in abstractions for all things network-related

9 Collaboratory Subsystems
Source repository: SourceForge, CVS, Subversion
Datapository
"My Wikis"
Mailing list(s)
Bug database
Chat/IM, chatroom management
Moodle?
Approach
  Transparently handle authentication, authorization, and membership management: "single sign-on"
  Use a separate server for information and resource security and management
  Support flexible access policies: the default is project-private, but the project leader can change it per subsystem
  Private, public read-only, public read/write

10 Experimentation Workbench
Four types:
  Workflow management (processes), including
    measurement and feedback steps
    mandatory pipelines
  Experiment management
  Data management
  Analyses

11 Workbench: "Time Travel" and Stateful Swapout
Time travel of distributed systems for debugging
  Generalize disk image format and handling
  Periodic disk checkpointing
  Full state save on swapout
  Xen-based virtual machines
  Challenge: network state (packets in flight)
  Pragmatic approach: quiesce senders, flush buffers
Stateful swapout/swapin [easier]
  Allows transparent pre-emption of experiments
Related to workbench: history, tree traversal
  Can share some mechanisms, some UI
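A rough sketch of the pragmatic checkpoint sequence described above: quiesce senders, let in-flight packets drain, flush buffers, snapshot each node, then resume. The Node class and its method names are hypothetical stand-ins, not Emulab's control interfaces.

```python
import time

class Node:
    """Hypothetical stand-in for a testbed node controller; real Emulab plumbing differs."""
    def __init__(self, name):
        self.name = name
    def quiesce(self):
        print(f"{self.name}: traffic sources quiesced")
    def flush_buffers(self):
        print(f"{self.name}: buffers flushed to disk")
    def snapshot(self):
        return f"{self.name}-checkpoint"   # stand-in for a Xen/disk checkpoint handle
    def resume(self):
        print(f"{self.name}: resumed")

def checkpoint_experiment(nodes, drain_seconds=1.0):
    # 1. Stop new traffic so nothing is generated while state is captured.
    for n in nodes:
        n.quiesce()
    # 2. Let packets already in flight drain out of the emulated links, then flush buffers.
    time.sleep(drain_seconds)
    for n in nodes:
        n.flush_buffers()
    # 3. Take a per-node snapshot (disk image / VM checkpoint).
    snapshots = {n.name: n.snapshot() for n in nodes}
    # 4. Resume the experiment.
    for n in nodes:
        n.resume()
    return snapshots

print(checkpoint_experiment([Node("nodeA"), Node("nodeB")]))
```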

12 PlanetLab: Requirements
1) It must provide a global platform that supports both short-term experiments and long-running services.
  services must be isolated from each other
  multiple services must run concurrently
  must support real client workloads
Key ideas
  Slices
  Virtualization
    multiple architectures on a shared infrastructure
  Programmable
    virtually no limit on new designs
  Opt-in on a per-user / per-application basis
    attract real users
    demand drives deployment / adoption

13 PlanetLab: Slices [figure]

14 Slices [figure]

15 Slices [figure]

16 User Opt-in [figure: Client → NAT → Server]

17 Virtualization: Per-Node View
[figure: a Virtual Machine Monitor (VMM) hosts the Node Manager, an Owner VM, and VM 1 ... VM n]
VMM = Linux kernel (Fedora Core)
  + Vservers (namespace isolation)
  + schedulers (performance isolation)
  + VNET (network virtualization)
Supporting services: auditing service, monitoring services, brokerage services, provisioning services

18 Global View [figure: PLC and the global set of nodes]

19 Requirements
2) It must be available now, even though no one knows for sure what "it" is
  deploy what we have today, and evolve over time
  make the system as familiar as possible (e.g., Linux)
  accommodate third-party management services

20 Brokerage Service
[figure: a user calls BuyResources( ) on a broker; the broker contacts the relevant nodes and calls Bind(slice, pool) at each node's node manager/VMM; PLC acts as the slice authority (SA)]

21 Requirements
3) Convince sites to host nodes running code written by unknown researchers from other organizations.
  protect the Internet from PlanetLab traffic
  must get the trust relationships right
  trusted intermediary: PLC
Trust relationships among Node Owner, PLC, and Service Developer (User):
  1) PLC expresses trust in a user by issuing it credentials to access a slice
  2) Users trust PLC to create slices on their behalf and inspect credentials
  3) Owner trusts PLC to vet users and map network activity to the right user
  4) PLC trusts owner to keep nodes physically secure

22 Requirements
4) Sustaining growth depends on support for site autonomy and decentralized control
  sites have final say over the nodes they host
  must minimize (eliminate) centralized control
Owner autonomy
  owners allocate resources to favored slices
  owners selectively disallow unfavored slices
Delegation
  PLC grants tickets that are redeemed at nodes
  enables third-party management services
Federation
  create "private" PlanetLabs using MyPLC
  establish peering agreements

23 Requirements
5) It must scale to support many users with minimal resources available
  expect the under-provisioned state to be the norm
  shortage of logical resources too (e.g., IP addresses)
Decouple slice creation and resource allocation
  a slice gets a "fair share" by default when created
  it can acquire additional resources, including guarantees
Fair share with protection against thrashing
  1/Nth of CPU
  1/Nth of link bandwidth
  owner limits peak rate
  upper bound on average rate (protects campus bandwidth)
  disk quota
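As one way to picture capping a slice's send rate, here is a small token-bucket sketch in Python. The token-bucket mechanism and all numbers are illustrative assumptions, not PlanetLab's code; the actual enforcement of peak and average rates lives in the node's VMM/VNET schedulers.

```python
import time

class RateCap:
    """Token bucket capping a slice's long-term average rate; a second, smaller bucket
    refilled at the owner's peak rate could bound bursts in the same way."""
    def __init__(self, avg_rate_bps, bucket_bytes):
        self.rate = avg_rate_bps / 8.0          # refill rate in bytes/second
        self.capacity = bucket_bytes            # burst allowance
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill at the average rate, never beyond the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                            # over the cap: drop or delay the packet

# Illustrative numbers: a 1.5 Mbps average cap with a 64 KB burst allowance.
cap = RateCap(avg_rate_bps=1_500_000, bucket_bytes=64_000)
print(cap.allow(1500))   # one 1500-byte packet fits comfortably -> True
```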

24 Slice Creation
[figure: a PI calls SliceCreate( ) and SliceUsersAdd( ) at PLC (the slice authority); a user/agent calls GetTicket( ); the ticket is redeemed with plc.scs on each node, whose node manager calls CreateVM(slice) on the VMM]
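The sketch below walks through the slice-creation flow using the call names shown on the slide (SliceCreate, SliceUsersAdd, GetTicket, CreateVM). The Python classes, signatures, and example names are invented to show the sequence of steps; the real PLC and node-manager interfaces are remote services and differ in detail.

```python
class PLC:
    """Slice authority: records slices and issues redeemable tickets (illustrative only)."""
    def __init__(self):
        self.slices = {}

    def SliceCreate(self, pi, slice_name):
        self.slices[slice_name] = {"pi": pi, "users": []}

    def SliceUsersAdd(self, slice_name, users):
        self.slices[slice_name]["users"].extend(users)

    def GetTicket(self, slice_name):
        # A ticket names the slice and the users allowed to use it.
        return {"slice": slice_name, "users": tuple(self.slices[slice_name]["users"])}

class NodeManager:
    """Per-node manager: plc.scs would redeem the ticket here and request a VM."""
    def __init__(self, hostname):
        self.hostname = hostname
        self.vms = []

    def CreateVM(self, ticket):
        self.vms.append(ticket["slice"])
        return f"{ticket['slice']}@{self.hostname}"

plc = PLC()
plc.SliceCreate(pi="some_pi", slice_name="example_slice")
plc.SliceUsersAdd("example_slice", ["user1", "user2"])
ticket = plc.GetTicket("example_slice")
nodes = [NodeManager("planetlab1.example.org"), NodeManager("planetlab2.example.org")]
print([nm.CreateVM(ticket) for nm in nodes])   # one VM (sliver) per node for the slice
```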

25 Combining PlanetLab and Emulab: "Pelab"
Motivation:
  PlanetLab (sort of) sees the "real Internet"
  But its hosts are hugely overloaded and unpredictable
  Internet and host variability: takes many, many runs to get statistical significance
  Hard to debug
  Emulab provides predictable, dedicated host resources and a controlled, repeatable environment
  But its network model is completely fake

26 Approach
The application runs on Emulab nodes
  Chosen PlanetLab nodes are peered with Emulab nodes
  Application-traffic generation and measurement stubs start up on PlanetLab
  They send real-time network conditions to Emulab
Develop and continuously run an adaptive PlanetLab path-condition monitor
  Pour results into the Datapository
  Use them for initial conditions or when the application goes idle on certain pairs
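To make the measurement-stub idea concrete, here is a minimal Python sketch of a path monitor that periodically samples a path and reports each observation. The peer address, the use of TCP connect time as a latency proxy, and the print-based reporting are all illustrative assumptions; Pelab's monitors and their feed into Emulab and the Datapository are more elaborate.

```python
import json
import socket
import time

def sample_latency(host, port=80, timeout=3.0):
    """Return one TCP connect time in milliseconds, or None if the probe fails."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def monitor(peer, report, interval=1.0, rounds=3):
    """Periodically sample the path to `peer` and hand each observation to `report`."""
    for _ in range(rounds):
        rtt_ms = sample_latency(peer)
        report({"peer": peer, "rtt_ms": rtt_ms, "ts": time.time()})
        time.sleep(interval)

# Here `report` just prints; Pelab instead streams conditions to the Emulab emulation
# in real time and archives them in the Datapository for initial conditions.
monitor("www.example.org", report=lambda obs: print(json.dumps(obs)))
```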

27 GENI Design
Key idea
  Slices embedded in a substrate of networking resources
Two central pieces
  Physical network substrate
    an expandable collection of building-block components
    nodes / links / subnets
  Software management framework
    knits building blocks together into a coherent facility
    embeds slices in the physical substrate

28 National Fiber Facility

29 + Programmable Routers

30 + Clusters at Edge Sites

31 + Wireless Subnets

32 + ISP Peers (MAE-West, MAE-East)

33 Closer Look
[figure: a backbone switch on the Internet/wavelength backbone, connected to a customizable router, a dynamically configurable switch, an edge site, a wireless subnet, and a sensor network]

34 Summary of Substrate
Node components
  edge devices
  customizable routers
  optical switches
Bandwidth
  national fiber facility
  tail circuits (including tunnels)
Wireless subnets
  urban 802.11
  wide-area 3G/WiMAX
  cognitive radio
  sensor net
  emulation

35 Management Framework
[figure: GMC sits between the management services and the substrate components]
GMC provides:
  a name space for users, slices, and components
  a set of interfaces ("plug in" new components)
  support for federation ("plug in" new partners)

36 GENI Management Core (GMC)
GMC services: Slice Manager, Resource Controller, Auditing Archive
[figure: GMC exchanges node control and sensor data with each substrate node, which runs a component manager (CM) and virtualization software over the substrate hardware]

