Network Reprogramming & Programming Abstractions.

Slides:



Advertisements
Similar presentations
Trickle: Code Propagation and Maintenance Neil Patel UC Berkeley David Culler UC Berkeley Scott Shenker UC Berkeley ICSI Philip Levis UC Berkeley.
Advertisements

Sensor Network Platforms and Tools
Impala: A Middleware System for Managing Autonomic, Parallel Sensor Systems Ting Liu and Margaret Martonosi Princeton University.
Run-Time Dynamic Linking for Reprogramming Wireless Sensor Networks
Overview: Chapter 7  Sensor node platforms must contend with many issues  Energy consumption  Sensing environment  Networking  Real-time constraints.
1 Routing Techniques in Wireless Sensor networks: A Survey.
Towards a Sensor Network Architecture: Lowering the Waistline Culler et.al. UCB.
A Dynamic Operating System for Sensor Nodes (SOS) Source:The 3 rd International Conference on Mobile Systems, Applications, and Service (MobiSys 2005)
Leveraging IP for Sensor Network Deployment Simon Duquennoy, Niklas Wirstrom, Nicolas Tsiftes, Adam Dunkels Swedish Institute of Computer Science Presenter.
Reference: Message Passing Fundamentals.
Contiki A Lightweight and Flexible Operating System for Tiny Networked Sensors Presented by: Jeremy Schiff.
Incremental Network Programming for Wireless Sensors NEST Retreat June 3 rd, 2004 Jaein Jeong UC Berkeley, EECS Introduction Background – Mechanisms of.
Mica: A Wireless Platform for Deeply Embedded Networks Jason Hill and David Culler Presented by Arsalan Tavakoli.
Node-level Representation and System Support for Network Programming Jaein Jeong.
Generic Sensor Platform for Networked Sensors Haywood Ho.
Java for High Performance Computing Jordi Garcia Almiñana 14 de Octubre de 1998 de la era post-internet.
Incremental Network Programming for Wireless Sensors IEEE SECON 2004 Jaein Jeong and David Culler UC Berkeley, EECS.
Scripting Languages For Virtual Worlds. Outline Necessary Features Classes, Prototypes, and Mixins Static vs. Dynamic Typing Concurrency Versioning Distribution.
A Survey of Wireless Sensor Network Data Collection Schemes by Brett Wilson.
Generic Sensor Platform for Networked Sensors Haywood Ho.
TinyOS Software Engineering Sensor Networks for the Masses.
Chapter 2: Impact of Machine Architectures What is the Relationship Between Programs, Programming Languages, and Computers.
SNAL Sensor Networks Application Language Alvise Bonivento Mentor: Prof. Sangiovanni-Vincentelli 290N project, Fall 04.
Chess Review November 21, 2005 Berkeley, CA Edited and presented by Sensor Network Design Akos Ledeczi ISIS, Vanderbilt University.
November 18, 2004 Embedded System Design Flow Arkadeb Ghosal Alessandro Pinto Daniele Gasperini Alberto Sangiovanni-Vincentelli
Maté: A Tiny Virtual Machine for Sensor Networks Philip Levis and David Culler Presented by: Michele Romano.
Philip Levis UC Berkeley 6/17/20021 Maté: A Tiny Virtual Machine Viral Programs with a Certain Cosmopolitan Charm.
Introduction to Symmetric Multiprocessors Süha TUNA Bilişim Enstitüsü UHeM Yaz Çalıştayı
Lecture 4: Parallel Programming Models. Parallel Programming Models Parallel Programming Models: Data parallelism / Task parallelism Explicit parallelism.
Macroprogramming Sensor Networks for DDDAS Applications Asad Awan Department of Computer Science.
A System Architecture for Networked Sensors Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, Kris Pister
Programming Abstractions in Wireless Sensor Networks Professor Jack Stankovic Department of Computer Science University of Virginia.
Mihai GALOS - ICECS Dynamic reconfiguration in Wireless Sensor Networks Mihai GALOS, Fabien Mieyeville, David Navarro Lyon Institute of Nanotechnology.
ICOM 5995: Performance Instrumentation and Visualization for High Performance Computer Systems Lecture 7 October 16, 2002 Nayda G. Santiago.
Parallel Programming Models Jihad El-Sana These slides are based on the book: Introduction to Parallel Computing, Blaise Barney, Lawrence Livermore National.
TinyOS By Morgan Leider CS 411 with Mike Rowe with Mike Rowe.
Rapid Development and Flexible Deployment of Adaptive Wireless Sensor Network Applications Chien-Liang Fok, Gruia-Catalin Roman, Chenyang Lu
Mate: A Tiny Virtual Machine for Sensor Networks Presented by: Mohammad Kazem Ghaforian Mazandaran University of Scince & Technology.
Rapid Development and Flexible Deployment of Adaptive Wireless Sensor Network Applications Chien-Liang Fok, Gruia-Catalin Roman, Chenyang Lu
CS 390- Unix Programming Environment CS 390 Unix Programming Environment Topics to be covered: Distributed Computing Fundamentals.
Operating Systems Lecture 2 Processes and Threads Adapted from Operating Systems Lecture Notes, Copyright 1997 Martin C. Rinard. Zhiqing Liu School of.
Mate: A Tiny Virtual Machine for Sensor Networks Philip Levis and David Culler Presented by: Damon Jo.
Korea Advanced Institute of Science and Technology Active Sensor Networks(Mate) (Published by Philip Levis, David Gay, and David Culler in NSDI 2005) 11/11/09.
CS542 Seminar – Sensor OS A Virtual Machine For Sensor Networks Oct. 28, 2009 Seok Kim Eugene Seo R. Muller, G. Alonso, and D. Kossmann.
Mate: A Tiny Virtual Machine for Sensor Networks Phil Levis and David Culler Presented by Andrew Chien CSE 291 Chien April 22, 2003 (slides courtesy, Phil.
한국기술교육대학교 컴퓨터 공학 김홍연 Habitat Monitoring with Sensor Networks DKE.
Data Collection and Dissemination. Learning Objectives Understand Trickle – an data dissemination protocol for WSNs Understand data collection protocols.
CS533 - Concepts of Operating Systems 1 The Mach System Presented by Catherine Vilhauer.
Xiong Junjie Node-level debugging based on finite state machine in wireless sensor networks.
1 Reprogramming/Re-tasking in Wireless Sensor Networks Part of slides are from Jonathon Hui, David A. Olsen and Jaein Jeong.
M. Accetta, R. Baron, W. Bolosky, D. Golub, R. Rashid, A. Tevanian, and M. Young MACH: A New Kernel Foundation for UNIX Development Presenter: Wei-Lwun.
Concurrency Properties. Correctness In sequential programs, rerunning a program with the same input will always give the same result, so it makes sense.
Computer Simulation of Networks ECE/CSC 777: Telecommunications Network Design Fall, 2013, Rudra Dutta.
Centroute, Tenet and EmStar: Development and Integration Karen Chandler Centre for Embedded Network Systems University of California, Los Angeles.
Link Layer Support for Unified Radio Power Management in Wireless Sensor Networks IPSN 2007 Kevin Klues, Guoliang Xing and Chenyang Lu Database Lab.
3/12/2013Computer Engg, IIT(BHU)1 PARALLEL COMPUTERS- 2.
Lesson 1 1 LESSON 1 l Background information l Introduction to Java Introduction and a Taste of Java.
Region Streams Functional Macroprogramming for Sensor Networks Ryan Newton MIT CSAIL Matt Welsh Harvard University
Parallel Computing Presented by Justin Reschke
Lecture 13 Parallel Processing. 2 What is Parallel Computing? Traditionally software has been written for serial computation. Parallel computing is the.
Software Architecture of Sensors. Hardware - Sensor Nodes Sensing: sensor --a transducer that converts a physical, chemical, or biological parameter into.
Introduction to Operating Systems Concepts
INTRODUCTION TO WIRELESS SENSOR NETWORKS
Advanced Computer Systems
Self Healing and Dynamic Construction Framework:
Trickle: Code Propagation and Maintenance
Distributing Queries Over Low Power Sensor Networks
Lecture Topics: 11/1 General Operating System Concepts Processes
Chapter 2: Operating-System Structures
Chapter 2: Operating-System Structures
Presentation transcript:

Network Reprogramming & Programming Abstractions

2  Network Reprogramming
- XNP: wireless reprogramming tool
- Maté: a virtual machine for WSNs

3  Programming Wireless Sensors over the Network
In-system programming
- The sensor node is plugged into a serial/parallel port
- Can program only one sensor node at a time
Network programming
- Delivers the program code to multiple nodes over the air with a single transmission
- Saves the effort of programming each node individually

4  Network Programming for TinyOS (XNP)
- Available since TinyOS release 1.1
- Originally written by Crossbow and modified by UC Berkeley
- Provides basic network programming capability
- Limitations:
  - No support for multi-hop delivery
  - No support for incremental updates

5  Background: Mechanisms of XNP
(1) Host: sends the program code as download messages
(2) Sensor node: stores the messages in external flash
(3) Sensor node: calls the boot loader, which copies the program code into program memory (see the node-side sketch below)
[Figure: the host machine (user app SREC file, network programming host program) sends radio packets to the sensor node (network programming module, external flash, boot loader and user application sections of program memory)]
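
A minimal C sketch of the node-side flow just described. The message layout, fragment size, and the hook names xflash_write and bootloader_jump are assumptions for illustration, not the actual XNP implementation.

    /* Hypothetical sketch of the node-side XNP flow: store code fragments
     * in external flash, then hand control to the boot loader.            */
    #include <stdint.h>

    #define FRAG_SIZE 16               /* payload bytes per download message (assumed) */

    struct xnp_download_msg {          /* assumed packet layout */
        uint16_t prog_id;              /* identifies the program image */
        uint16_t frag_seq;             /* fragment sequence number     */
        uint8_t  code[FRAG_SIZE];      /* SREC-derived code bytes      */
    };

    /* Platform hooks, assumed names, provided elsewhere. */
    void xflash_write(uint32_t addr, const uint8_t *buf, uint16_t len);
    void bootloader_jump(uint16_t prog_id);   /* copies image into program memory */

    /* Step (2): on each radio packet, append the fragment to external flash. */
    void xnp_receive(const struct xnp_download_msg *m)
    {
        uint32_t addr = (uint32_t)m->frag_seq * FRAG_SIZE;
        xflash_write(addr, m->code, FRAG_SIZE);
    }

    /* Step (3): once all fragments have arrived, invoke the boot loader,
     * which copies the stored image from external flash into program memory. */
    void xnp_reprogram(uint16_t prog_id)
    {
        bootloader_jump(prog_id);
    }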

6  Network Reprogramming
- XNP: wireless reprogramming tool
- Maté: a virtual machine for WSNs

7  Maté: A Virtual Machine for WSNs
Why a VM?
- Large numbers (hundreds to thousands) of nodes in a coverage area
- Some nodes will fail during operation
- The network's function changes during the mission
Related work
- PicoJava: assumes Java bytecode execution hardware
- K Virtual Machine: requires 160-512 KB of memory
- XML: too complex, and not enough RAM
- Scylla: a VM for mobile embedded systems

8  Maté Features
- Small: 16 KB instruction memory, 1 KB RAM
- Concise: fits limited memory and bandwidth
- Resilient: memory protection
- Efficient: conserves bandwidth
- Tailorable: user-defined instructions

9  Maté in a Nutshell (a capsule?)
- Stack architecture
- Three concurrent execution contexts: clock, send, receive
- Execution triggered by predefined events
- Tiny code capsules that self-propagate through the network
- Built-in communication and sensing instructions

10  When Is Maté Preferable?
- For a small number of executions
  - The bytecode version is preferable for a program running < 5 days
  - The energy saved by communicating the new program as Maté capsules compensates for the energy spent interpreting bytecode
- In energy-constrained domains
- When Maté capsules are used as a general RPC engine, or for memory protection and virtualization

11  Maté Architecture
- Stack-based architecture (see the sketch below)
- A single shared variable, accessed with gets/sets
- Three events: clock timer, message reception, message send
- Hides asynchrony: simplifies programming, less prone to bugs
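
A minimal C sketch of the execution model described above: three contexts, each a small stack machine started when its event fires, plus the shared variable accessed with gets/sets. All struct layouts, sizes, and function names here are illustrative assumptions, not Maté's actual implementation.

    /* Illustrative sketch of Maté's execution model: clock, send, and
     * receive contexts, each run as a stack machine when its event fires. */
    #include <stdint.h>

    #define STACK_DEPTH 16
    #define CAPSULE_LEN 24            /* instructions per capsule */

    enum context_id { CTX_CLOCK, CTX_SEND, CTX_RECEIVE, NUM_CONTEXTS };

    struct context {
        uint8_t  code[CAPSULE_LEN];   /* the installed capsule for this context */
        uint8_t  pc;                  /* program counter                        */
        int16_t  stack[STACK_DEPTH];  /* operand stack                          */
        uint8_t  sp;                  /* stack pointer                          */
    };

    static struct context contexts[NUM_CONTEXTS];
    static int16_t shared_var;        /* the single shared variable */

    /* Assumed scheduler hook: queue the context's interpreter as a task. */
    void post_interpreter(struct context *c);

    /* gets/sets access the single shared variable from any context. */
    int16_t vm_gets(void)      { return shared_var; }
    void    vm_sets(int16_t v) { shared_var = v; }

    /* Each predefined event simply launches its context from the start. */
    void on_clock_timer(void)     { contexts[CTX_CLOCK].pc = 0;   post_interpreter(&contexts[CTX_CLOCK]); }
    void on_message_receive(void) { contexts[CTX_RECEIVE].pc = 0; post_interpreter(&contexts[CTX_RECEIVE]); }
    void on_message_send(void)    { contexts[CTX_SEND].pc = 0;    post_interpreter(&contexts[CTX_SEND]); }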

12  Instruction Set
- One byte per instruction
- Three classes: basic, s-type, x-type
  - basic: arithmetic, halting, LED operations
  - s-type: messaging system
  - x-type: pushc, blez
- 8 instructions reserved for users to define
- Instruction polymorphism, e.g. add works on data, message, or sensing operands (see the decode sketch below)
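
A rough sketch of how a one-byte instruction might be classified and how a polymorphic add could be handled. The bit layout, operand tags, and names are invented for illustration and do not match Maté's real encoding.

    /* Illustrative decode of one-byte instructions and a polymorphic add.
     * Thresholds, tags, and names are assumptions, not Maté's encoding.   */
    #include <stdint.h>

    enum iclass { CLASS_BASIC, CLASS_STYPE, CLASS_XTYPE };

    /* Classify a one-byte instruction into the three classes. */
    enum iclass classify(uint8_t instr)
    {
        if (instr & 0xC0) return CLASS_XTYPE;   /* pushc, blez: embed an operand */
        if (instr & 0x20) return CLASS_STYPE;   /* messaging instructions        */
        return CLASS_BASIC;                     /* arithmetic, halt, LED ops     */
    }

    enum tag { TAG_VALUE, TAG_MESSAGE, TAG_READING };   /* operand types */

    struct operand { enum tag tag; int16_t value; };

    /* Polymorphic add: the same opcode combines plain values, message
     * contents, or sensor readings depending on the operand tags.       */
    struct operand poly_add(struct operand a, struct operand b)
    {
        struct operand r = { TAG_VALUE, (int16_t)(a.value + b.value) };
        if (a.tag == TAG_MESSAGE || b.tag == TAG_MESSAGE)
            r.tag = TAG_MESSAGE;      /* e.g., appending a value to a message */
        return r;
    }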

13  Code Example: Displaying a Counter on the LEDs (capsule code from the original slide not preserved; see the stand-in below)
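
The capsule code shown on this slide did not survive in the transcript. In the Maté paper the same behavior is written as a short clock capsule built from gets, pushc, add, copy, sets, and, and putled instructions; as a stand-in, here is the equivalent logic in C, with set_leds() an assumed platform hook.

    /* Equivalent C logic for the "display counter on LEDs" clock capsule:
     * on each clock event, increment the shared counter and show its low
     * three bits on the LEDs.                                             */
    #include <stdint.h>

    static uint16_t counter;          /* plays the role of Maté's shared variable */

    void set_leds(uint8_t bits);      /* assumed hardware hook */

    void clock_capsule(void)
    {
        counter = counter + 1;              /* gets, pushc 1, add, copy, sets */
        set_leds((uint8_t)(counter & 0x7)); /* pushc 7, and, putled           */
    }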

14  Code Capsules
- One capsule = 24 instructions
- Fits into a single TinyOS packet
- Atomic reception
- Each capsule carries type and version information
  - Type: send, receive, timer, or subroutine

15  Viral Code
- Capsule transmission: forw
- Forwarding another installed capsule: forwo (used within the clock capsule)
- On receiving a capsule, Maté checks its version number; if it is newer, the capsule is installed (see the sketch below)
- Versioning: 32-bit counter
- Disseminates new code over the network
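
A sketch of the version check described above, tying together the capsule fields from the previous slide. The struct layout and function names are assumptions for illustration.

    /* Sketch of viral capsule installation: a received capsule replaces the
     * installed one of the same type only if its 32-bit version is newer.  */
    #include <stdint.h>
    #include <string.h>

    #define CAPSULE_LEN 24            /* instructions per capsule (one TinyOS packet) */

    enum capsule_type { CAP_SEND, CAP_RECEIVE, CAP_TIMER, CAP_SUBROUTINE, NUM_TYPES };

    struct capsule {
        uint8_t  type;                /* send, receive, timer, or subroutine */
        uint32_t version;             /* 32-bit version counter              */
        uint8_t  code[CAPSULE_LEN];
    };

    static struct capsule installed[NUM_TYPES];

    /* Called on atomic reception of a capsule over the radio. */
    void capsule_receive(const struct capsule *c)
    {
        if (c->version > installed[c->type].version)
            memcpy(&installed[c->type], c, sizeof(*c));   /* newer: install it */
        /* Older or equal versions are ignored; forw/forwo later rebroadcast
           installed capsules, so newer code spreads virally through the network. */
    }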

16  Component Breakdown
- Maté runs on the Mica platform in 7,286 bytes of code and 603 bytes of RAM

17  Network Infection Rate
- 42-node network in a 3-by-14 grid
- Radio range forms a 3-hop network
- Cell size: 15 to 30 motes
- Every mote runs its clock capsule every 20 seconds
- Self-forwarding clock capsule

18  Bytecode vs. Native Code
- Maté executes roughly 10,000 instructions per second
- Overhead: every instruction is executed as a separate TinyOS task

19  Customizing Maté
- Maté is a general architecture; users can build customized VMs
  - Bombilla, in TinyOS, for querying
  - Agilla (built over Bombilla) for mobile agents in WSNs
- Users can select the bytecodes and execution events
- Issue: flexibility vs. efficiency
  - Customizing increases efficiency, at a cost when requirements change
  - Java's solution: a general computational VM plus class libraries
  - Maté's approach: a more customizable solution, letting the user decide

20  Programming Abstractions
- Macroprogramming approaches
- The Hood abstraction
- Region streams
- Kairos

21  Macroprogramming
- Program the sensor network as a whole
  - Easier than programming at the level of individual nodes
  - e.g., matrix multiplication: matrix notation vs. a parallel MPI program
  - Compiled into node-level programs
- Non-CS researchers should be able to program without worrying about distributed execution details
  - Abstract away the details of concurrency and communication

22  Taxonomy of Macroprogramming
- Abstractions
  - Global behavior
    - Node-independent: TAG, Cougar, DFuse
    - Node-dependent: Kairos, Regiment, Split-C
  - Local behavior
    - Data-centric: EIP, state-space
    - Geometric: Regions, Hood
- Support
  - Composition: Sensorware, SNACK
  - Distribution & safe execution: Maté, Tofu, Trickle, Deluge
  - Automatic optimization: Impala

23  Hood (UC Berkeley)
Neighborhood
- A neighborhood in Hood is defined by a set of criteria for choosing neighbors and a set of variables to be shared (see the sketch below)
- A node can define multiple neighborhoods, with different variables shared over each of them
- Captures the essence of the neighborhood concepts needed by many existing applications
Defines the relationships among several concepts fundamental to neighborhoods
- Membership, data sharing, data caching, and messaging
- Decouples data sharing from caching
- Integrates neighbor lists and caching with messaging
- Mirrors and filters
Explicitly proposes neighborhood-oriented programming
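
Hood itself is a nesC library; the following C sketch only illustrates the concepts named above (a membership filter, a shared attribute, and locally cached mirrors of neighbor state), with all names and the API hypothetical.

    /* Conceptual sketch of a Hood-style neighborhood: each neighbor's
     * broadcast "light" attribute is filtered and mirrored locally.     */
    #include <stdint.h>

    #define MAX_NEIGHBORS 8

    struct light_mirror {              /* locally cached copy of a neighbor's state */
        uint16_t node_id;
        uint16_t light;                /* the shared "light" attribute              */
    };

    struct light_hood {                /* one neighborhood: criteria + shared vars  */
        struct light_mirror mirrors[MAX_NEIGHBORS];
        uint8_t count;
    };

    /* Filter: the membership criterion for this neighborhood (assumed). */
    static int accept_neighbor(uint16_t light) { return light > 200; }

    /* Called when a neighbor's broadcast of its shared attribute is heard;
     * caching happens here, decoupled from how the data was shared.        */
    void hood_update(struct light_hood *h, uint16_t node_id, uint16_t light)
    {
        if (!accept_neighbor(light) || h->count == MAX_NEIGHBORS)
            return;
        h->mirrors[h->count].node_id = node_id;
        h->mirrors[h->count].light   = light;
        h->count++;
    }

    /* Application code reads neighbor state from the local mirrors only. */
    uint16_t hood_max_light(const struct light_hood *h)
    {
        uint16_t best = 0;
        for (uint8_t i = 0; i < h->count; i++)
            if (h->mirrors[i].light > best)
                best = h->mirrors[i].light;
        return best;
    }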

24  Region Streams (Harvard)
- A purely functional macroprogramming language for sensor networks
- Basic data abstraction: region streams
  - A time-varying collection of node state
  - e.g., all sensor nodes within an area R form a region; the set of their periodic data samples forms a region stream
- Example: tracking a moving vehicle (see the sketch below)
  - A region stream is created that represents the value of the proximity sensor on every node in the network
  - Each value is also annotated with the location of the corresponding sensor
  - Data items that fall below a threshold are filtered out
  - The spatial centroid of the remaining sensor values is computed to determine the approximate location of the object that generated the readings
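
Regiment expresses this pipeline with functional combinators over region streams; purely as an illustration of the data flow (filter below threshold, then centroid of the rest), here is the same logic in C, with all names and types assumed.

    /* Illustration of the tracking pipeline's data flow (not Regiment code):
     * keep readings at or above a threshold, then compute their centroid.   */
    #include <stddef.h>

    struct reading {                   /* one node's sample, annotated with location */
        float x, y;                    /* sensor location                            */
        float proximity;               /* proximity sensor value                     */
    };

    struct point { float x, y; };

    struct point centroid_above(const struct reading *r, size_t n, float threshold)
    {
        struct point c = {0.0f, 0.0f};
        size_t kept = 0;
        for (size_t i = 0; i < n; i++) {
            if (r[i].proximity < threshold)     /* filter: drop weak readings   */
                continue;
            c.x += r[i].x;                      /* fold: accumulate positions   */
            c.y += r[i].y;
            kept++;
        }
        if (kept > 0) {
            c.x /= (float)kept;
            c.y /= (float)kept;
        }
        return c;
    }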

25  Region Streams (Harvard)
Regiment: a functional macroprogramming language
- Based on functional reactive programming concepts
- Functional languages are "pure": no side-effecting input/output, so programs cannot manipulate program state directly
- This allows the compiler to decide how and where program state is kept in the volatile mesh of sensor nodes

26  Market-Based Macroprogramming (Harvard)
Basic model
- Nodes act as agents that sell goods (such as sensor readings or routed messages)
- Each good is produced by an associated action
- Nodes attempt to maximize their profit, subject to energy constraints
- Each good has an associated price
  - The network is "programmed" by setting prices for each good
- Each action has an associated energy cost
  - e.g., the cost to sample a sensor << the cost to transmit a radio message
(material from Matt Welsh)

27  How to Program in MBM?
- First step: set the prices
  - Use one of many efficient dissemination protocols
  - Update prices as needed by the overall application goal
- Nodes select actions based on a utility function (see the sketch below)
- Utility depends on:
  - Price, advertised by the base station
  - Energy availability: taking an action must stay within the energy budget
  - Other dependencies: data cannot be aggregated until multiple samples have been received; nothing can be transmitted if the local buffer is empty
(material from Matt Welsh)
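
A toy sketch of the action-selection idea: among actions that are feasible (enough energy, dependencies met), pick the one with the highest utility. The utility form (price minus energy cost) and all names are assumptions for illustration.

    /* Toy sketch of market-based action selection on a node. */
    #include <stddef.h>

    struct action {
        const char *name;              /* e.g., "sample", "aggregate", "transmit"   */
        float price;                   /* advertised by the base station (assumed)  */
        float energy_cost;             /* cost of performing the action             */
        int   ready;                   /* dependencies met (samples buffered, etc.) */
    };

    /* Returns the index of the best action, or -1 if none is feasible. */
    int select_action(const struct action *a, size_t n, float energy_budget)
    {
        int best = -1;
        float best_utility = 0.0f;
        for (size_t i = 0; i < n; i++) {
            if (!a[i].ready || a[i].energy_cost > energy_budget)
                continue;                          /* infeasible action */
            float utility = a[i].price - a[i].energy_cost;
            if (best < 0 || utility > best_utility) {
                best = (int)i;
                best_utility = utility;
            }
        }
        return best;
    }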

28  Kairos (USC)
- In Kairos, a programmer writes a single sequential program using a simple centralized memory model
[Figure: a single thread of control runs the sequential program, reading and writing a centralized view of sensor state mapped from the sensors]

29  Advantage
- Centralized sequential programs are easier to specify, code, understand, and debug than hand-coded distributed versions
  - Reuse "textbook" algorithms for sophisticated tasks
  - Ignoring latency and energy considerations, a dumb but trivial "distributed" implementation is always possible, by shipping sensor nodes' state to and from a central location

30  Kairos Features
- Three constructs with which to write programs (see the sketch below):
  - node (a first-class datatype) and node_list (an iterator over nodes), which facilitate topology-independent programming
  - get_neighbors() to obtain the current one-hop neighbors of a node
  - remote data access (variable@node) to synchronously read the data and program state of named nodes
- These constructs are language-agnostic
- They can be implemented in the preprocessor stage of compilation
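
Kairos embeds these constructs in a host language; the following C sketch only mimics the flavor of a centralized-looking program, with remote_read()/remote_write() standing in for variable@node and the other runtime hooks likewise hypothetical.

    /* C sketch in the Kairos style: compute each node's hop count from a
     * root by reading neighbor state as if all state were centralized.   */
    #include <stdint.h>
    #include <stddef.h>

    typedef uint16_t node_t;

    /* Runtime hooks assumed to be provided by a Kairos-like preprocessor/runtime. */
    size_t   get_nodes(node_t *out, size_t max);                /* all nodes          */
    size_t   get_neighbors(node_t n, node_t *out, size_t max);  /* one-hop neighbors  */
    uint16_t remote_read(node_t n);                             /* read hopcount@n    */
    void     remote_write(node_t n, uint16_t hopcount);         /* write hopcount@n   */

    #define MAX_NODES 64
    #define INF       0xFFFF

    void build_hop_tree(node_t root)
    {
        node_t nodes[MAX_NODES], nbrs[MAX_NODES];
        size_t n = get_nodes(nodes, MAX_NODES);

        for (size_t i = 0; i < n; i++)                  /* initialize hop counts */
            remote_write(nodes[i], nodes[i] == root ? 0 : INF);

        for (size_t round = 0; round < n; round++) {    /* relax repeatedly */
            for (size_t i = 0; i < n; i++) {
                uint16_t best = remote_read(nodes[i]);
                size_t k = get_neighbors(nodes[i], nbrs, MAX_NODES);
                for (size_t j = 0; j < k; j++) {
                    uint16_t h = remote_read(nbrs[j]);  /* hopcount@neighbor */
                    if (h != INF && h + 1 < best)
                        best = h + 1;
                }
                remote_write(nodes[i], best);
            }
        }
    }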

31  Eventual Consistency
- Synchronization model called loose synchrony
  - Useful when node state is relatively static
  - Did not work well for a dynamic vehicle-tracking scenario
- A tighter semantics, loop-level synchrony, was implemented
- Long term, the authors are exploring temporal abstractions as a fourth construct that can capture this requirement completely