
Macroprogramming Sensor Networks for DDDAS Applications
Asad Awan, Department of Computer Science

Wireless Sensor Networks
Integrating computing with the "physical world": sense → process data → consume
– A dynamic data-driven system
Large-scale, self-organized network of tiny, low-cost nodes with sensors
– Resource-constrained nodes: CPU: 7 MHz; memory: 4 KB data, 128 KB program; bandwidth: 32 kbps; power: 2 AA batteries
Challenge: programming the "network" to efficiently collect and process data

WSN: DDDAS Challenges
Low-level details:
– Resource constraints
– Conserving battery life for long-term unattended operation
– Developing distributed algorithms for self-organization:
  Communication and data routing between nodes
  Maintaining scalability as the number of nodes in the network grows
  Resilience to dynamic changes (e.g., failures)
Data processing challenges:
– Spatial and temporal correlation of data from several independent sources
– Processing of disparate measurement information to estimate/analyze the "actual" physical phenomenon
Goal: providing a simple and high-level interface for end-users to program data processing algorithms and global system behavior without the need to understand low-level issues

Macroprogramming WSNs
The traditional approach to distributed-systems programming involves writing "network-enabled" programs for each node:
– The program specifies interactions between modules rather than the expected system behavior
– This paradigm raises several issues:
  Program development is difficult due to the complexity of indirectly encoding the system behavior and catering to low-level details
  Program debugging is difficult due to hidden side effects and the complexity of interactions
  The lack of a formal distributed behavior specification precludes verification of compliance with "expected" behavioral properties
Macroprogramming entails programming the system-wide behavior of the WSN:
– Hides low-level system details, e.g., hardware interactions, network messaging protocols, etc.

Reprogramming?
Over-the-air reprogramming is a highly desirable feature for WSN systems:
– Deployment costs are high and nodes are often inaccessible or remotely located
Reasons to reprogram:
– Iterative development cycles: change the fidelity or type of measurements; update data processing features
– Removal of bugs
Challenges: (1) preserving system behavioral properties, (2) allowing code reuse and versioning, (3) minimizing update costs

Heterogeneous Sensor Networks
The resource constraints of nodes necessitate the use of heterogeneous devices in the network:
– High data rate sensors, e.g., laser displacement sensors
– CPU/memory-intensive processing, e.g., FFT
– Bandwidth bottlenecks and radio range
– Persistent storage
Heterogeneity can be supported by deploying a hierarchical network
The macroprogramming architecture should uniformly encompass heterogeneous devices
– Supporting platform-agnostic application development is trivial
Challenge: designing an architectural model that scales performance as resources increase

Objective
To develop a second-generation operating system suite that facilitates rapid macroprogramming of efficient, self-organized, distributed, data-driven applications for WSNs

Outline
– Challenges
– Related work
– Our approach
– Current status
– Future directions

Related Work
TinyOS
– Low footprint: applications and the OS are tightly coupled
– Costly reprogramming: the complete node image must be updated
– Aimed at resource-constrained nodes
SOS
– Interacting modules compose an application
– OS and modules are loosely coupled
– Modules can be individually updated: low cost
– Lacks sufficient safety properties
– Aimed at resource-constrained nodes
Maté: an application-specific virtual machine
– Event-driven bytecode modules run over an interpreter
– Domain-specific interpreter
– Very low-cost updates of modules
– Major revisions require costly interpreter updates
– Easy to program using a simple scripting language
– Implemented for constrained nodes
Impala
– Rich routing protocols
– Rich software adaptation subsystem
– Aimed at resource-rich nodes

Related Work
TinyDB
– An application on top of TinyOS
– Specification of data processing behavior using SQL queries
– Limitations in behavioral specifications (due to the implementation)
– Difficult to add new features or functionality
– High footprint
High-level macroprogramming languages
– Functional and intermediate programming languages
– The programming interface is restrictive and system mechanisms cannot be tuned
– No mature implementations exist
– No performance evaluation is available

Outline
– Challenges
– Related work
– Our approach
– Current status
– Future directions

Application Model
Macroprogramming- (application-) centric OS design: a top-down approach
Application model:
– An application is composed of data processing components called processing elements (PEs)
– An application is a specification of data-driven macro system behavior:
  An annotated connection graph of PEs
  Capability-based naming of devices in the heterogeneous network
  A PE deployment map: assignment of tasks to named device sets in the heterogeneous network

Processing Elements
Define "typed" input/output interfaces
– Implemented as data queues
Perform a data processing operation on input data
– Programmed in C
– Transactional behavior: read input → process data → write output → commit (output enqueue and input dequeue)
– Concurrency safety: independent of the underlying system's concurrency model
Conceptually a single unit of execution
– Isolation properties enable independent architecture and scaling
– Asynchronous execution
– Code reusability
[Figure: an "Average" PE with a raw_t input interface and an avg_t output interface]
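As an illustration of this model, below is a minimal C sketch of an "Average" PE. The raw_t/avg_t structures and the plain array interface are assumptions standing in for the system's typed data queues, whose actual API is not shown in these slides; in the real model the whole step would run as a transaction, with the output committed only after processing completes.

```c
/* Minimal, illustrative sketch of an "Average" processing element.
 * raw_t/avg_t and the array-based interface stand in for the typed
 * input/output data queues of the actual system. */
#include <stdint.h>
#include <stddef.h>

typedef struct { uint16_t sample; } raw_t;  /* input interface type  */
typedef struct { uint16_t mean;   } avg_t;  /* output interface type */

/* Consume a window of raw_t items and produce one avg_t item.
 * Conceptually: read input -> process data -> write output -> commit. */
size_t average_pe(const raw_t *in, size_t n_in, avg_t *out)
{
    if (n_in == 0)
        return 0;                 /* nothing to process or commit */

    uint32_t sum = 0;
    for (size_t i = 0; i < n_in; i++)
        sum += in[i].sample;

    out->mean = (uint16_t)(sum / n_in);
    return 1;                     /* one output item produced */
}
```

Because a PE touches only its own input and output queues and commits atomically, it can be scheduled asynchronously and reused unchanged across node classes.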

Connection Graph
A data-driven macro specification of system behavior
Connects instances of data sources (ports), PEs, and services using an annotated graph
Type safety: connection interfaces are statically type checked
Deterministic system behavior
[Figure: a simple example graph — a clock C(r) triggers an accelerometer A(); raw_t samples pass through Threshold(0.5) and, in windows of 50, into Average(), which produces avg_t; K Filter() consumes avg_t, produces fil_t, and writes to file storage (FS)]
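The slides state only that connection interfaces are statically type checked; the sketch below illustrates one simple way such a check could work, using hypothetical type tags for the data types in the example graph.

```c
/* Illustrative sketch of checking one edge of the connection graph.
 * The type tags and port structures are hypothetical, not the actual
 * specification language or tool chain. */
#include <stdbool.h>

typedef enum { TYPE_RAW, TYPE_AVG, TYPE_FIL } dtype_t;

typedef struct { dtype_t out_type; } out_port_t;  /* producer interface */
typedef struct { dtype_t in_type;  } in_port_t;   /* consumer interface */

/* An edge producer -> consumer is accepted only if the types match,
 * so mismatched connections are rejected before deployment rather
 * than failing at run time. */
static bool connection_ok(out_port_t producer, in_port_t consumer)
{
    return producer.out_type == consumer.in_type;
}
```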

The Application
Device naming (addressing) is the last piece in the puzzle:
– Devices are identified based on their capability sets (for example, devices with photo sensors, or devices with fast CPUs)
– Capability sets are implemented as masks
– Individual node naming does not scale
Deployment map and connection specification for the example:
ACCELEROMETER_SENSOR_NODES:
FAST_CPU_NODES:
SERVER_NODE: k_filter, FS
TRIGER(CLOCK(1,rate)[0]) → ACC_SENSOR(2,)[0]
ACC_SENSOR(2,)[0] → threshold(3,0.5)
threshold(3,0.5)[0] –(50)→ average(4,)[0]
average(4,)[0] → k_filter(5,) | –(5)→ average(4,)[1]
k_filter(5,) → FS(1,)
[Figure: the application's PEs mapped onto nodes of the hierarchical network]
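A hypothetical sketch of capability-set naming as bitmasks follows; the specific capability bits and the matching rule are illustrative assumptions, since the slides state only that capability sets are implemented as masks.

```c
/* Illustrative capability masks; the bit assignments are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

enum {
    CAP_ACCEL_SENSOR = 1u << 0,  /* node has an accelerometer           */
    CAP_PHOTO_SENSOR = 1u << 1,  /* node has a photo sensor             */
    CAP_FAST_CPU     = 1u << 2,  /* resource-rich node (e.g., Stargate) */
    CAP_STORAGE      = 1u << 3,  /* persistent storage available        */
};

/* A deployment entry names a required capability set rather than
 * individual node IDs; a node hosts the PE if its own capabilities
 * cover the required set. */
static bool node_matches(uint32_t node_caps, uint32_t required_caps)
{
    return (node_caps & required_caps) == required_caps;
}
```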

OS Design
Each node has a static OS kernel
– Consists of platform-dependent and platform-independent layers
Each node runs service modules
Each node runs a subset of the components that compose a macro-application
[Figure: layered node architecture — an updateable user space (application PEs and services) above a static OS kernel (platform-independent kernel over a hardware abstraction layer and hardware drivers)]
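To illustrate the split between the platform-independent kernel and the platform-dependent layer, here is a hypothetical sketch of a hardware abstraction interface; the function names and structure are assumptions, not the actual kernel API.

```c
/* Hypothetical hardware abstraction layer (HAL) interface; one
 * implementation would exist per target platform (e.g., AVR/Mica2,
 * POSIX), selected at build time. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    void     (*radio_send)(const uint8_t *buf, size_t len);
    size_t   (*radio_recv)(uint8_t *buf, size_t maxlen);
    uint16_t (*sensor_read)(uint8_t channel);
    void     (*sleep_ms)(uint32_t ms);
} hal_ops_t;

/* The platform-independent kernel is written once against hal_ops_t,
 * so porting to a new platform only requires a new HAL implementation. */
void kernel_poll(const hal_ops_t *hal)
{
    uint8_t pkt[64];
    size_t n = hal->radio_recv(pkt, sizeof pkt);
    if (n > 0) {
        /* dispatch the packet to the addressed service or PE queue */
    }
    hal->sleep_ms(10);  /* simple duty cycling to conserve power */
}
```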

Outline
– Challenges
– Related work
– Our approach
– Current status
– Future directions

Pilot deployment at BOWEN labs
– MICA2 motes with ADXL202 accelerometers
– Laser displacement sensors attached via serial port to Stargate computers
– Peer-to-peer FM 433 MHz radio network, connected through the BOWEN ECN network to the Internet
– Laser readings can currently be viewed from anywhere over the Internet (conditioned on firewall settings)

Current Status: OS
We have completed an initial prototype of our operating system for AVR microcontrollers (Mica2)
Introductory paper in ICCS 2006
Current activities:
– Exhaustive testing and debugging
– Performance evaluation
– Enhancing generic routing modules
– Enhancing the application loading service
– Porting to different platforms (POSIX)

Outline
– Challenges
– Related work
– Our approach
– Current status
– Future directions

Future Directions
Implement common data processing modules that can be reused, e.g., aggregation, filtering, FFT
Release the OS code
Complete deployment on a real-world, large-scale heterogeneous test bed at the BOWEN labs
– Iteratively develop a DDDAS system for structural health monitoring
WYSIWYG application design utility; high-level functional programming abstractions
Explore other application domains
Explore distributed algorithms, e.g., PE allocation, routing, aggregation, etc.

Questions? Thank you!