Programming Sensor Networks: A Tale of Two Perspectives
Ramesh Govindan, Embedded Networks Laboratory


1 Programming Sensor Networks: A Tale of Two Perspectives. Ramesh Govindan, ramesh@usc.edu, Embedded Networks Laboratory, http://enl.usc.edu

2 Wireless Sensing: motes (8- or 16-bit sensor devices) and 32-bit embedded single-board computers.

3 Platform Challenges. Wireless communication is noisy, with loss rates far higher than in wired environments. Battery life is a constraint, and determines network lifetime. There are other constraints on the mote platforms: processors (16-bit, low MIPS) and memory (a few KB to a few MB).

4 State of Software. Motes are programmed using TinyOS and nesC. TinyOS is an event-based OS with non-preemptive multi-tasking; nesC is a dialect of C with language support for TinyOS abstractions.

5 But, lots of applications! Volcanoes, cities, soil, habitats, bridges, birds' nests.

6 Wireless Sensing Research: lots of research! Processor platforms, radios, sensors, operating systems; localization, time synchronization, medium access, calibration; collaborative signal processing, data-centric routing, data-centric storage; querying, triggering, aggregation and compression, collaborative event processing; monitoring, security, programming systems.

7 … some of it from our lab: architecture, macro-programming, structural health monitoring, routing and data dissemination, data-centric storage, measurements and testbeds.

8 But, there is a problem! (Shown: six of the 158 pages of code from Wisden, a wireless structural data acquisition system.) Programming these networks is hard!

9 Three Responses (OS/Middleware). Event-based programming on an OS that supports no isolation, preemption, memory management, or a network stack is hard. Therefore, we need OSes that support preemption and memory management, we need virtual machines, and we need higher-level communication abstractions.

10 Three Responses (Networking). Tiny sensor nodes (motes) are resource-constrained, and we cannot possibly be re-programming them for every application. Therefore, we need a network architecture that constrains what you can and cannot do on the motes.

11 Three Responses (Programming Languages). Today, we're programming sensor networks in the equivalent of assembly language. What we need is a macroprogramming system, where you program the network as a whole, and hide all the complexity in the compiler and the runtime.

12 Three Responses: Programming Languages, OS/Middleware, Networking. This talk: the Tenet Architecture and the Pleiades Macroprogramming System.

13 The Tenet Architecture. Omprakash Gnawali, Ben Greenstein, Ki-Young Jang, August Joki, Jeongyeup Paek, Marcos Vieira, Deborah Estrin, Ramesh Govindan, Eddie Kohler. The Tenet Architecture for Tiered Sensor Networks. In Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2006.

14 The Problem. Sensor data fusion within the network can result in energy-efficient implementations. But implementing collaborative fusion on the motes for each application separately can result in fragile systems that are hard to program, debug, re-configure, and manage. We learnt this the hard way, through many trial deployments.

15 An Aggressive Position. Why not design systems without sensor data fusion on the motes? More aggressively: why not design an architecture that prohibits collaborative data fusion on the motes? Questions: how do we design this architecture, and will such an architecture perform well? No more on-mote collaborative fusion.

16 Tiered Sensor Networks.
Motes: low-power, short-range radios; contain sensing and actuation; allow multi-hop communication.
Masters: 32-bit CPUs (e.g., PC, Stargate); higher-bandwidth radios; larger batteries, or mains-powered; provide greater network capacity and larger spatial reach; enable flexible deployment of dense instrumentation.
Many real-world sensor network deployments are tiered: Great Duck Island (UCB [Szewczyk '04]), James Reserve (UCLA [Guy '06]), the ExScal project (OSU [Arora '05]), and others. Future large-scale sensor network deployments will be tiered.

17 Tenet Principle. Multi-node data fusion functionality and multi-node application logic should be implemented only in the master tier: the cost and complexity of implementing this functionality in a fully distributed fashion on motes outweighs the performance benefits of doing so. Aggressively use tiering to simplify the system!

18 Tenet Architecture. Applications run on masters; masters control and task the motes; motes process data, and may return responses. No multi-node fusion at the mote tier.

19 What do we gain? Simplified application development: application writers do not need to write or debug embedded code for the motes, and applications run on less-constrained masters.

20 What do we gain? Significant code re-use across applications: a simple, generic, re-usable mote tier (multiple applications can run concurrently with simplified mote functionality) and a robust, scalable network subsystem (networking functionality generic enough to support various types of applications).

21 Challenges.
More bits communicated than necessary without fusion? In most deployments there is significant temporal correlation, so mote-local processing can achieve significant compression, but little spatial correlation, so mote-tier fusion would add little. Mote-local processing provides most of the aggregation benefits.
Communication over longer hops? Not an issue: the diameter of the mote tier is typically small, and more aggressive processing at the motes can compensate. The costs will be small, as we shall see.

22 System Overview. The Tenet system has a tasking subsystem (tasking language, task parser, tasklets and runtime) and a networking subsystem (reliable transport, routing, task dissemination). Two questions: how to express tasks, and how to disseminate tasks and deliver responses?

23 Tasking Subsystem. Goals: flexibility (allow many applications to be implemented), efficiency (respect mote constraints), generality (avoid embedding application-specific functionality), and simplicity (non-expert users should be able to develop applications).

24 Tasking Language. A linear data-flow language allowing flexible composition of tasklets. A tasklet specifies an elementary sensing, actuation, or data-processing action; tasklets take several parameters, hence flexibility; tasklets can be composed to form a task: Sample(500ms, REPEAT, ADC0, LIGHT) -> Send(). No loops or branches, which eases construction and analysis; not Turing-complete: aggressively simple, but supports a wide range of applications. A data-flow style language is natural for sensor data processing.

25 Classes of Tasklets. System: Reboot, Get, Send. Task manipulation: Issue (Wait, Alarm), DeleteTaskIf, DeleteActiveTaskIf. Sensor/actuator: Sample, Actuate. Data manipulation: arithmetic, comparison, logical, bitwise, statistics, DeleteAttributeIf. Miscellaneous: storage, Count/Constant.

26 Task Examples.
Sample and send: Sample() -> Send().
With time-stamp and sequence number: Sample() -> Count() -> StampTime() -> Send().
Get memory status for node 10: MemStats() -> Address() -> NEQ(10) -> DeleteTaskIf() -> Send().
If the sample value is above 50, send the sample data, node-id, and time-stamp: Sample() -> LT(50) -> DeleteActiveTaskIf() -> Address() -> StampTime() -> Send().

27 How many Tenet tasks can a mote run at the same time? Memory is the constraint, and task complexity matters: the heap is about 5 KB (on Tmote Sky motes). Sample(1) -> Send costs 110 bytes; Sample(1) -> StampTime -> ClassifyAmplitude -> GatherStatistics -> DeleteAttribute -> Send costs 180 bytes; Sample(40) -> Send costs 190 bytes. A mote can run roughly 26 tasks with ClassifyAmplitude, 31 Sample(1) -> StampTime -> Send tasks, or 38 Sample(1) -> Send tasks; a typical mix is n sensing-application tasks plus one network-monitoring task (Wait -> StampTime -> MemoryStats -> NextHop -> Send).

28 Overview of Networking Subsystem. Goals: scalability and robustness; generality (should support diverse application needs). Components: reliable transport, routing, task dissemination.

29 Task Dissemination. Reliably flood a task from any master to all motes; uses the master tier for dissemination; a per-active-master cache with LRU replacement and aging.

30 Reliable Transport. Delivers task responses from motes to masters, and works transparently across the two tiers. Three transport mechanisms for different application requirements: best-effort delivery (least overhead), end-to-end reliable packet transport, and end-to-end reliable stream transport.

31 31 Putting it all together …

32 Application Case Study: PEG. Pursuit-Evasion Game: pursuers (robots) collectively determine the location of evaders, and try to corral them. Goal: compare performance with an implementation that performs in-mote multi-node fusion.

33 Mote-PEG vs. Tenet-PEG (diagram). Mote-PEG: motes detect the evader, run in-network leader election, and the leader reports to the pursuer; changing behavior requires re-tasking the motes. Tenet-PEG: the master tasks the motes, motes report evader detections, and the master computes the evader's position for the pursuer.

34 PEG Results: comparable error in the position estimate, and comparable reporting message overhead.

35 PEG Results: latency is nearly identical. A Tenet implementation of an application can perform as well as an implementation with in-mote collaborative fusion.

36 Why is this surprising? More bits communicated than necessary? In most deployments there is significant temporal correlation, so mote-local processing achieves significant compression, but little spatial correlation, so mote-tier fusion would add little; mote-local processing provides most of the aggregation benefits. Communication over longer hops? Not an issue: the diameter of the mote tier is typically small, and more aggressive processing at the motes can compensate. Tenet can achieve performance comparable to a mote-native implementation.

37 Real-world Tenet deployment on the Vincent Thomas Bridge (figure: motes and masters placed along spans of 570 ft, 120 ft, and 30 ft). Ran successfully for 24 hours; 100% reliable data delivery; deployment time: 2.5 hours; total sensor data received: 860 MB.

38 Interesting Observations. The fundamental mode agrees with a previously published measurement; modes are consistent across sensors; and one faulty sensor was identified.

39 39 Related Work Architecture  SNA [Culler,`05], [Polastre,`05], Internet [Saltzer, `84], … Virtual Machines  Mate [Levis,`02], … Tasking  VanGo [Greenstein,`06], SNACK [Greenstein,`04] Dissemination  Deluge [Hui,`04], Trickle [Levis,`04], … Reliable Transport  RMST [Stann,`03], Wisden [Xu,`04], … Routing  ESS [Guy,`06], Centroute [Stathopoulos,`05], …

40 Summary: a simple, generic, and re-usable system. Applications: simplified application development. Networking subsystem: robust and scalable network. Tasking subsystem: re-usable, generic mote tier.

41 Software Available at http://tenet.usc.edu. Master tier: Cygwin, Linux Fedora Core 3, Stargate, MacOS X Tiger. Mote tier: Tmote Sky, MicaZ, Maxfor.

42 The Pleiades Macroprogramming System. Nupur Kothari, Ramakrishna Gummadi, Todd Millstein, Ramesh Govindan. Reliable and Efficient Programming Abstractions for Wireless Sensor Networks. In Proceedings of the SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2007.

43 What is Macroprogramming? Conventional sensornet programming: a node-local program written in nesC, compiled to a mote binary.

44 What is Macroprogramming? A central program specifies application behavior; a compiler and runtime must turn it into node-local nesC programs compiled to mote binaries. But how?

45 Change of Perspective.
int val LOCAL;
void main() {
    node_list all = get_available_nodes();
    int max = 0;
    for (int i = 0, node n = get_node(all, i); n != -1; n = get_node(all, ++i)) {
        if (val@n > max) max = val@n;
    }
}
An easily recognizable maximum-computation loop.

46 Pleiades Language (the same program, annotated): val is a node-local variable; max is a central variable; node_list all is the list of nodes in the network; node n is a network node; val@n accesses the node-local variable at node n.
int val LOCAL;
void main() {
    node_list all = get_available_nodes();
    int max = 0;
    for (int i = 0, node n = get_node(all, i); n != -1; n = get_node(all, ++i)) {
        if (val@n > max) max = val@n;
    }
}

47 Other Language Features: sensors, timers, and waiting for asynchronous events.
unsigned int temp SENSOR(TEMP) LOCAL;
unsigned int timer TIMER(100) LOCAL;
event_wait(n, &temp, &timer);

48 Advantages: Readability. nesC programs are harder to read: the programmer has to manage communication, and the TinyOS event model complicates control flow.

49 Advantages: Reliability. Getting predictable semantics from nesC programs, while still ensuring efficiency, can result in complex programs.

50 50 Car Parking Find nearest empty parking spot such that no two cars are assigned the same spot

51 Advantages: Extensibility. Slightly modifying application semantics is easier in Pleiades than it would be in nesC: for example, changing "find the nearest empty parking spot such that no two cars are assigned the same spot" to "find a parking spot such that no two cars are assigned the same spot", or to "find the nearest parking spot such that at most two cars are assigned the same spot".

52 But… this is not a new idea! Shared-memory models were explored in parallel computing, and we all know how that turned out! Are shared-memory models workable for sensor networks?

53 Implementing Pleiades. The Pleiades compiler addresses two problems: partitioning (how to efficiently distribute code across nodes) and concurrency (how to support parallel code execution reliably).

54 The Need for Partitioning. Naïve approach: code executes at a single node, and node-local variables are read from and written to over the network. This can incur high communication cost.

55 The Need for Partitioning. Instead, take the computation to the data: migrate control flow so that node-local variables are accessed from nearby nodes. How does the compiler do this automatically?

56 Nodecuts. The Pleiades compiler partitions the control-flow graph (shown for the max example) into nodecuts. Property: the locations of the variables accessed within a nodecut are known before it executes.

57 Control-Flow Migration. The runtime attempts to find the node with the lowest communication cost to execute each nodecut. Even sophisticated sensor network programs have a small number (5-7) of nodecuts.

58 Implementing Pleiades (recap). Partitioning: how to efficiently distribute code across nodes. Concurrency: how to support parallel code execution reliably.

59 Concurrency. Sensor network programs can leverage concurrent execution. Challenges: how to specify concurrent execution, and what consistency semantics to offer?

60 Programmer-Directed Concurrency: the concurrent-for (cfor) loop.
int val LOCAL;
void main() {
    node_list all = get_available_nodes();
    int max = 0;
    cfor (int i = 0, node n = get_node(all, i); n != -1; n = get_node(all, ++i)) {
        if (val@n > max) max = val@n;
    }
}

61 Serializability.
cfor (int i = 0; i < 5; i++) { j = i; j++; }
cfor execution cannot be purely concurrent, since program output would be unpredictable. Serializability: a cfor execution must correspond to some sequential execution of the loop's iterations, for example the order 0 1 2 3 4 or the order 4 0 1 3 2.

62 Ensuring Serializability. Challenge: ensure serializability while allowing performance gains through concurrent execution. Approach: distributed locking, with multiple-reader/single-writer locks.

63 Distributed Locking.
cfor (node n = 1; n < 5; n++) { if (val@n > max) max = val@n; }
(Figure: node-local values val@n such as 20, 10, 5, and 6 are locked and read in turn; the central max ends at 20.) The locking code is generated entirely by the compiler, and the technique generalizes to nested cfor loops using hierarchical locking.

64 Distributed Locking (continued). The compiler generates the locking code, and the technique generalizes to nested cfor loops using hierarchical locking. Deadlocks are possible, but are dealt with using a simple deadlock detection and recovery scheme.

65 Deadlocks. Detection: the lock coordinator at the top of the hierarchy can detect a deadlock easily. Recovery: the blocked instances are restarted sequentially.

66 Looser Forms of Consistency. Applications can trade consistency for performance: accesses to variables annotated LOOSE within a cfor are not serialized.
unsigned int beacon LOOSE;

67 Implementation. Built as an extension to the CIL infrastructure for C analysis and transformation. The Pleiades compiler generates nesC code; each nodecut corresponds to a nesC task. Experience with some applications: pursuit-evasion, decentralized street parking.

68 Pursuit-Evasion in Pleiades. Pleiades-PEG has comparable error to mote-PEG, and about a factor of two higher latency.

69 Performance. Locking overhead is small; deadlock overhead is also small; the higher latency is an implementation artifact.

70 Performance. Pleiades-PEG error is comparable to mote-PEG, and its latency is twice that of mote-PEG.

71 71 Related Work Macroprogramming  Regiment [Newton `04], Task Graphs [Bakshi 04], TinyDB [Madden `02], Spatial Views [Ni `05] Node-local programming  Abstract Regions [Welsh,`04], Role Assignment [Frank `05], Virtual Machines [Levis `03] Reliable Concurrent Programming  STM [Harris `05], Atomos [Carlstrom `06] Parallel Programming Languages  Linda [Gelernter,`92], Split-C [Culler,`93], … Automatic Generation of Distributed Programs  Coign [Hunt,`99], MagnetOS [Liu `05], …

72 Summary. The Pleiades compiler: partitioning, via automated nodecut generation and dynamic control-flow migration; concurrency, via programmer-directed concurrency and compiler-generated locking.

73 Which is Better? Networking: the Tenet Architecture. Programming languages: the Pleiades Macroprogramming System.

74 Head-to-Head.
Expressivity: Tenet low, by design; Pleiades high.
Cuteness: Tenet low (some interesting protocol design questions, but the focus is on simplicity); Pleiades high (lots of interesting compiler optimization questions, consistency models).
Time-to-develop: about 3 student years each.
Papers: 2 for Tenet, 3 for Pleiades, with potential for more.

75 Head-to-Head (continued).
Missing components: Tenet lacks sleep scheduling; Pleiades lacks any-to-any routing, energy management, and robustness.
Maturity: Tenet has seen one deployment and has external users; Pleiades code still needs much handholding.
What I believe in: Tenet. What I like: Pleiades.

76 http://enl.usc.edu
