1 EnviroStore – Niosha Behnam, CMPE 259 – Fall 2011

2  Real-time data availability is not required for all sensor networks.  Robust disconnected operation is needed for some applications, such as environment and wildlife monitoring.  The growing size of low-power flash memory points to greater storage capacity in sensor nodes.  In-network storage with opportunistic upload is a model for such networks, minimizing data loss through redistribution and data mules.

3  Redistribution between sensor nodes within network islands.  Redistribution between islands via data mules.  Delivery to the sink via data mules.

4  A service running on a PC/server.  Identified by an IP address and TCP port.  Alternatively/additionally, an 802.15.4 interface can redirect data to the Sink File Service.  Generally receives data uplinked from data mules. A minimal sketch of such a listener is below.
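A minimal sketch of a sink-side TCP listener, assuming a POSIX host. The port number and the append-to-file handling are hypothetical; the slide only states that the service runs on a PC/server and is identified by an IP address and TCP port. Error handling is omitted for brevity.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);              /* hypothetical service port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);

    for (;;) {
        int mule = accept(srv, NULL, NULL);   /* one connection per mule upload */
        char buf[512];
        ssize_t n;
        FILE *log = fopen("uplinked.log", "ab");
        while ((n = read(mule, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, log);   /* append uplinked records */
        fclose(log);
        close(mule);
    }
}
```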

5  Mule types: Intentional: data mules that visit network islands specifically to recover or redistribute data (e.g., maintenance operators). Unintentional: data mules whose mobility patterns are independent of data-upload needs.

6  Energy efficient: does not attempt to perfectly balance storage among nodes; lazy offload delays redistribution until necessary.  Neighborhood-based redistribution algorithm: local remaining-storage information is used to decide in-network redistribution (detailed on the next slide, with a sketch after it).

7  Periodic storage-information advertisements: low frequency (~1 min.) for energy efficiency, with additional updates whenever storage changes exceed an advertisement threshold.  Redistribution to under-loaded nodes: every under-loaded node has a non-zero probability of being chosen as a target (see the sketch below).  Data transfers are bounded to prevent thrashing.
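A minimal sketch of the probabilistic target selection in C. The neighbor-table layout, the "more free space than the local node" test for under-loading, and the weighted draw are assumptions for illustration; the paper's exact policy may differ.

```c
#include <stdlib.h>

struct neighbor {
    unsigned id;
    unsigned free_bytes;   /* from the neighbor's last advertisement */
};

/* Returns the index of the chosen neighbor, or -1 if none qualifies.
 * Each under-loaded neighbor is picked with probability proportional
 * to its advertised free space, so all have a non-zero chance. */
int pick_redistribution_target(const struct neighbor *nbr, int n,
                               unsigned my_free_bytes) {
    unsigned total = 0;
    for (int i = 0; i < n; i++)
        if (nbr[i].free_bytes > my_free_bytes)  /* under-loaded relative to us */
            total += nbr[i].free_bytes;
    if (total == 0)
        return -1;

    unsigned r = (unsigned)rand() % total;      /* weighted draw */
    for (int i = 0; i < n; i++) {
        if (nbr[i].free_bytes <= my_free_bytes)
            continue;
        if (r < nbr[i].free_bytes)
            return i;
        r -= nbr[i].free_bytes;
    }
    return -1;  /* not reached */
}
```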

8  Energy aware: balances storage use against energy depletion.

9  Accomplished via data mules, both intentional and unintentional. Mules advertise storage based on the global average storage usage. Mule advertisements occur more often than those of sensor nodes. Under-loaded nodes re-advertise in the presence of mules to enable redistribution (see the sketch below).
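A sketch of how a mule might derive its advertisement from the global average storage usage it has observed across islands. The node_report record and the averaging formula are hypothetical; the slide only states that mule advertisements are based on global average storage usage.

```c
struct node_report { unsigned used_bytes, capacity_bytes; };

unsigned mule_advertised_free(const struct node_report *seen, int n,
                              unsigned mule_capacity) {
    unsigned long long used = 0, cap = 0;
    for (int i = 0; i < n; i++) {       /* usage reports gathered en route */
        used += seen[i].used_bytes;
        cap  += seen[i].capacity_bytes;
    }
    double avg_usage = cap ? (double)used / (double)cap : 0.0;
    /* Advertise as if loaded at the global average, so exchanges with
     * the mule pull every island toward that average. */
    return (unsigned)((1.0 - avg_usage) * mule_capacity);
}
```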

10  Local Storage Structure – continuous log-based storage (sketched below).  Local Log Access – writes and reads log items.  Neighborhood Monitor – sends advertisements and tracks neighbor status via a table.  Data Transfer – initiates transfers to neighbors.  Reliable One-hop Unicast – verifies successful log reception.  User Interface – handles writes from the application layer, written as log-arrays or log sequences.
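A minimal sketch of a continuous, append-only log store in C. The length-prefixed record format and the RAM stand-in for the flash part are assumptions for illustration.

```c
#include <stdint.h>
#include <string.h>

#define LOG_CAPACITY (512u * 1024u)   /* e.g., 512 KB of external flash */

static uint8_t flash[LOG_CAPACITY];   /* RAM stand-in for the flash part */
static uint32_t log_tail;             /* next free offset in the log */

/* Append one length-prefixed record; returns its offset, or -1 when
 * full (a real node would trigger redistribution at that point). */
int32_t log_append(const void *item, uint16_t len) {
    if (log_tail + 2u + len > LOG_CAPACITY)
        return -1;
    uint32_t off = log_tail;
    memcpy(&flash[off], &len, 2);            /* length prefix */
    memcpy(&flash[off + 2], item, len);      /* payload */
    log_tail += 2u + len;
    return (int32_t)off;
}

/* Read the record at 'off' into 'out' (up to 'max' bytes). */
int32_t log_read(uint32_t off, void *out, uint16_t max) {
    uint16_t len;
    memcpy(&len, &flash[off], 2);
    if (len > max) return -1;
    memcpy(out, &flash[off + 2], len);
    return len;
}
```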

11  Deployment configuration.  Impact of in-network redistribution.

12  Comparison of data storage rate.  Data distribution with and without mules.

13  Strengths: describes the design in an intuitive manner; offers a solution for an application set not previously addressed by storage architectures.  Weaknesses: does not explain certain anomalies in the results; the evaluation relies on extreme inequalities in sensing load.  Summary: EnviroStore greatly increases in-network storage capacity for unbalanced sensor networks under disconnected operation.

14 Capsule – Niosha Behnam, CMPE 259 – Fall 2011

15  A file-based storage abstraction is not appropriate for all sensing applications.  A “rich” object storage abstraction is better aligned with how applications use data; objects include streams, queues, lists, arrays, and files.  The growing size of low-power NAND flash memory suits it for use as an application backing store.  Storage is optimized around energy and memory constraints.

16  Applications – varied, favoring differing object-storage paradigms.  Object Storage Layer – exposes a data-structure-like interface for objects.  Checkpoints – provide rollback and checkpoint support for objects.  Flash Abstraction Layer (FAL) – log-structured storage with write caching and compaction.  Flash Storage – NAND/NOR flash.

17  Fixed costs for accessing pages.  Per-byte costs for writing/reading.  Fixed write costs are significantly greater than read costs.  Overwriting requires erasing data first.  Limitations on simultaneous writes. A worked example of this cost model follows.
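A worked sketch of the page cost model, energy(n) = fixed + per_byte × n, showing why buffering writes up to a full page amortizes the fixed cost. The constants are illustrative placeholders, not measurements from the paper.

```c
#include <stdio.h>

#define WRITE_FIXED_UJ  25.0    /* fixed cost per page write (hypothetical) */
#define WRITE_BYTE_UJ    0.06   /* per-byte write cost (hypothetical) */

int main(void) {
    /* One buffered 256-byte write vs. 16 unbuffered 16-byte writes,
     * each of which pays the fixed page cost again. */
    double buffered   = WRITE_FIXED_UJ + WRITE_BYTE_UJ * 256;
    double unbuffered = 16 * (WRITE_FIXED_UJ + WRITE_BYTE_UJ * 16);
    printf("buffered:   %.1f uJ\n", buffered);     /* ~40.4 uJ  */
    printf("unbuffered: %.1f uJ\n", unbuffered);   /* ~415.4 uJ */
    return 0;
}
```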

18  Buffered log-based storage: data is interleaved and written in an append-only manner; direct flash access provides raw reads/writes to support checkpointing; entries carry a 2-byte header from the object layer (see the sketch below).  Memory reclamation: space exhaustion is handled by deleting data through a cleaner task, which coordinates with the object layer and requires the objects to perform the needed cleaning/compaction.
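A minimal sketch of FAL-style write buffering in C: each entry gets a 2-byte header from the object layer and is coalesced into a page-sized buffer, which is flushed with a single flash write. The page size, header packing, and driver stub are assumptions; assumes len <= PAGE_SIZE - 2.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 256u

static uint8_t  page_buf[PAGE_SIZE];
static uint16_t page_fill;

static void flash_program_page(const uint8_t *page) {
    (void)page;                       /* platform flash driver goes here */
}

void fal_append(uint16_t hdr, const void *data, uint16_t len) {
    if (page_fill + 2u + len > PAGE_SIZE) {  /* no room: flush the page */
        flash_program_page(page_buf);
        memset(page_buf, 0xFF, PAGE_SIZE);   /* 0xFF = erased-flash value */
        page_fill = 0;
    }
    memcpy(&page_buf[page_fill], &hdr, 2);   /* 2-byte object-layer header */
    memcpy(&page_buf[page_fill + 2], data, len);
    page_fill += 2u + len;
}
```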

19  Error handling: because flash memory is prone to single-bit errors, checksums are utilized, with page-level single-error-correction, double-error-detection (SECDED) coding (illustrated below).  Block allocation: block-level access to flash supports checkpointing and application needs.
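To make SECDED concrete, here is a nibble-sized extended Hamming(8,4) encoder/decoder in C. Capsule applies the idea at page granularity; this miniature version only illustrates the correct-one/detect-two property and is not the paper's encoding.

```c
#include <stdint.h>

/* Pack nibble d (bits d0..d3) into codeword bits 1..7, overall parity in bit 0. */
uint8_t secded_encode(uint8_t d) {
    uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;       /* covers positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;       /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;       /* covers positions 4,5,6,7 */
    uint8_t c = (uint8_t)((p1 << 1) | (p2 << 2) | (d0 << 3) | (p4 << 4) |
                          (d1 << 5) | (d2 << 6) | (d3 << 7));
    uint8_t p0 = 0;                  /* overall parity for double detection */
    for (int i = 1; i < 8; i++) p0 ^= (c >> i) & 1;
    return c | p0;
}

/* Returns 0 = clean or corrected (nibble in *out), -1 = double error. */
int secded_decode(uint8_t c, uint8_t *out) {
    uint8_t b[8], overall = 0;
    for (int i = 0; i < 8; i++) { b[i] = (c >> i) & 1; overall ^= b[i]; }
    uint8_t syn = (uint8_t)((b[1]^b[3]^b[5]^b[7]) |
                            ((b[2]^b[3]^b[6]^b[7]) << 1) |
                            ((b[4]^b[5]^b[6]^b[7]) << 2));
    if (syn && !overall) return -1;          /* two bit flips: detect only */
    if (syn) b[syn] ^= 1;                    /* one flip: correct it */
    *out = (uint8_t)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
    return 0;
}
```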

20  Basic objects: stack, queue, stream, static-index.  Composite objects: files, stream-index.  Checkpointing and rollback. A sketch of a flash-backed stack follows.
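A sketch of how a basic object such as a stack can be threaded through an append-only log: each push records a pointer to the previous top, so pops follow the chain backward and superseded records are left for the cleaner. The record layout and the RAM stand-in for flash are assumptions, not Capsule's exact format.

```c
#include <stdint.h>
#include <string.h>

#define LOG_SIZE 4096u
#define NIL 0xFFFFFFFFu

static uint8_t  log_mem[LOG_SIZE];   /* RAM stand-in for the flash log */
static uint32_t tail;                /* next append offset */

struct stack { uint32_t top; };      /* flash offset of top record, or NIL */

void stack_init(struct stack *s) { s->top = NIL; }

int stack_push(struct stack *s, const void *elem, uint16_t len) {
    if (tail + 4u + 2u + len > LOG_SIZE) return -1;  /* log full: compact */
    uint32_t off = tail;
    memcpy(&log_mem[tail], &s->top, 4); tail += 4;   /* link to old top */
    memcpy(&log_mem[tail], &len, 2);    tail += 2;   /* element length */
    memcpy(&log_mem[tail], elem, len);  tail += len;
    s->top = off;
    return 0;
}

int stack_pop(struct stack *s, void *out, uint16_t max) {
    if (s->top == NIL) return -1;                    /* empty stack */
    uint32_t prev; uint16_t len;
    memcpy(&prev, &log_mem[s->top], 4);
    memcpy(&len,  &log_mem[s->top + 4], 2);
    if (len > max) return -1;
    memcpy(out, &log_mem[s->top + 6], len);
    s->top = prev;                   /* old record now awaits the cleaner */
    return (int)len;
}
```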

21  FAL write buffer size: ideally maximized to the page size for energy efficiency.  Object-layer read buffering: diminishing returns beyond 64 bytes.

22  Energy consumption by object operation: write operations are less expensive due to FAL write buffering; reads are relatively expensive.  Sequential vs. random array operation cost: performance differs significantly depending on node size.

23  Compaction cost (energy and time).  Component energy consumption.  Performance comparison.

24  Strengths: the storage concept aligns well with the way applications use data; the work is methodical and well organized.  Weaknesses: lacks a performance comparison in a real sensing application.  Summary: Capsule provides storage better aligned with application needs: an efficient, data-centric flash storage abstraction built on buffered log-based writing.

