
1 Applications of Wireless Sensor Networks

2 Outline
- Quick overview of applications in different real-world domains
- Two basic problems:
  - Monitoring: sampling the environment
  - Tracking: knowing the trajectory of targets
- Three case studies:
  - Habitat monitoring
  - Structural monitoring
  - Target tracking

3 Warning
- Researchers are generally not good at applications
  - We are a bit removed from real-world application scenarios
  - We are busy inventing and developing the technology, and have not cared much about how to use it
- We as researchers tend to make things complex, to show how smart we are
- You may know much better than I do!

4 Examples of Applications of Sensor Nets in Reality
- Habitat monitoring
- Environmental observation and forecasting systems: Columbia River Estuary
- Smart Dust
- Biomedical sensors
- Target tracking

5 Estuarine Environmental Observation and Forecasting System: an observation and forecasting system for the Columbia River Estuary (CORIE)

6 CORIE Approach
- Real-time observations: estuarine and offshore stations
- Numerical modeling: produces forecasts of circulation
- Visualization & application: vessel surveys, navigation, fishing, etc.

7 Smart Dust: the Mote. Tiny and lightweight; communicates by light.

8 Military Applications of Smart Dust

9 Biomedical Sensors Sensors help to create vision

10 Classifications of Sensor Nets
- Sensor position
  - Static (Habitat, CORIE, Biomedical)
  - Mobile (Smart Dust, Biomedical)
- Goal-driven
  - Monitoring: real-time / not real-time (Habitat, Smart Dust)
  - Forecasting (CORIE)
  - Function substitution (Biomedical)
  - ...
- Communication medium
  - Radio frequency (Habitat, CORIE, Biomedical)
  - Light (Smart Dust)

11 Application-specific Constraints
- Material constraints
  - Bio-compatibility
  - Inconspicuous: blends into the environment, detection-proof (cf. stealth aircraft)
- Secure data communications
- Regulatory requirements, such as FDA approval

12 Limited Computation and Data Storage
- Sensor design: multi-objective sensors versus single- (or few-) objective sensors
- Cooperation among sensors: data aggregation and interpretation

13 Low Power Consumption
- Low-power functional components
- Power-manageable components
  - Several functional states (with low state-transition overhead): Deep-sleep, Sleep, On
  - Provide different QoS at different power consumption levels
- Power management
  - Power measurement
  - Power budget allocation
  - Control of transitions between power states (see the sketch below)
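To make the duty-cycling idea concrete, here is a minimal sketch in Python; the state names, current figures, and duty-cycle fractions are illustrative assumptions rather than numbers from any particular mote. It estimates the average current draw from the fraction of time a node spends in each power state.

```python
# Minimal sketch of a power-manageable component model.
# States, current draws, and duty-cycle fractions below are illustrative
# assumptions, not figures from a specific sensor-node datasheet.

STATE_CURRENT_MA = {      # steady-state current draw per power state
    "deep_sleep": 0.01,   # ~10 uA
    "sleep": 0.05,        # ~50 uA
    "on": 5.0,            # ~5 mA
}

def average_current_ma(duty_cycle):
    """duty_cycle maps state name -> fraction of time spent in that state."""
    assert abs(sum(duty_cycle.values()) - 1.0) < 1e-9
    return sum(STATE_CURRENT_MA[state] * frac for state, frac in duty_cycle.items())

# Example: mostly deep sleep, with brief wake-ups for sensing and the radio.
duty = {"deep_sleep": 0.95, "sleep": 0.04, "on": 0.01}
print(f"average draw ~ {average_current_ma(duty):.3f} mA")
```

The value of low state-transition overhead is that a node can afford to enter the deeper states often; if transitions were expensive, the savings from a duty cycle like the one above would shrink.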

14 Wireless Communication
- Communication media
  - Radio frequency: habitat monitoring, biomedical sensors, and CORIE estuarine observation
  - Light (active and passive): Smart Dust
- Ad hoc versus infrastructure modes
- Topology
- Routing

15 What have we learned from these cases? The quality of applications depends on your imagination and real-world awareness. The number of applications grows day by day. The sky is the limit.

16 Case Study 1: Habitat Monitoring What can we learn from such a simple application? Progressive refinement: from application scenario to the detailed component technology

17 Motivation: Application Scenario
Questions:
- What environmental factors make for a good nest? How much can they vary?
- What are the occupancy patterns during incubation?
- What environmental changes occur in the burrows and their surroundings during the breeding season?

18 Problem Formulation & Translation
Problems:
- Seabird colonies are very sensitive to disturbances
- The presence of humans can distort results by changing behavioral patterns, and can destroy sensitive populations
- Repeated disturbance will lead to abandonment of the colony
Solution: deployment of a sensor network

19 Great Duck Island Project

20 Sensor Network Architecture (diagram): sensor nodes (battery power) in a sensor patch form the patch network; a low-power gateway connects the patch to the transit network; the base station (household power) bridges the base-remote link to a data service on the Internet, where clients browse and process the data.

21 Mica Sensor Node
- Left: Mica2 sensor node, 2.0 x 1.5 x 0.5 in.
- Right: weather board with temperature, thermopile (passive IR), humidity, light, and accelerometer sensors, connected to the Mica2 node
- Single-channel, 916 MHz bi-directional radio at 40 kbps
- 4 MHz microcontroller, 512 KB flash
- 2 AA batteries (~2.5 Ah) with a DC boost converter to maintain voltage
- Sensors are pre-calibrated (±1-3%) and interchangeable

22 Sensor Node Power
- Limited resource: 2 AA batteries, an estimated supply of 2200 mAh at 3 volts
- For a 9-month deployment, each node has 8.128 mAh per day
- A sleep current of 30 to 50 uA leaves about 6.9 mAh/day for tasks
- The processor draws approximately 5 mA, so it can run at most ~1.4 hours/day
- Nodes near the gateway do more forwarding, so they are budgeted ~75 minutes
- Power management is therefore essential (see the worked budget below)
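The budget above can be checked with straightforward arithmetic. Here is a small Python rework of the quoted figures (2200 mAh capacity, roughly nine months of lifetime, 50 uA sleep current, 5 mA active draw); the 270-day figure for "nine months" is an assumption.

```python
# Reworking the power budget quoted on the slide.
capacity_mah = 2200.0        # 2 AA batteries at 3 V
lifetime_days = 270          # ~9 months (assumed)
budget_per_day = capacity_mah / lifetime_days             # ~8.1 mAh/day

sleep_current_ma = 0.050     # 50 uA worst-case sleep current
sleep_cost_per_day = sleep_current_ma * 24                # ~1.2 mAh/day
task_budget_per_day = budget_per_day - sleep_cost_per_day # ~6.9 mAh/day

active_current_ma = 5.0      # approximate processor draw
active_hours = task_budget_per_day / active_current_ma    # ~1.4 h/day

print(f"{budget_per_day:.1f} mAh/day total, "
      f"{task_budget_per_day:.1f} mAh/day for tasks, "
      f"~{active_hours:.1f} active hours per day")
```

This reproduces the slide's ~6.9 mAh/day task budget and ~1.4 active hours per day.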

23 Communication Routing
- Routing directly from every node to the gateway is not possible
- Approach proposed for scheduled communication:
  - Determine a routing tree
  - Each node is assigned a level based on the tree
  - Each level transmits to the next and returns to sleep
  - The process continues until all levels have completed transmission
  - The entire network then returns to sleep mode
  - The process repeats at a specified point in the future
A sketch of this schedule follows.
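This is a minimal sketch of the level-by-level schedule, assuming the tree levels have already been computed; the node identifiers, data structures, and deepest-level-first ordering are illustrative assumptions, not the deployed implementation.

```python
# Sketch of level-scheduled communication over a routing tree.
# Level 0 is the gateway; level 1 nodes are its children, and so on.
def communication_round(levels):
    """levels: dict mapping level number -> list of node ids."""
    for level in sorted(levels, reverse=True):   # deepest level first (assumed order)
        for node in levels[level]:
            print(f"level {level}: node {node} transmits toward level {level - 1}")
        print(f"level {level}: nodes return to sleep")
    print("entire network asleep until the next scheduled round")

tree_levels = {1: ["A", "B"], 2: ["C", "D", "E"], 3: ["F"]}
communication_round(tree_levels)
```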

24 Network Re-tasking
- Initially collect absolute temperature readings
- After initial interpretation, it may turn out that the information of interest lies in significant temperature changes
- Full reprogramming is costly: transmitting 10 kbit of data and reprogramming the application takes 2 minutes at 10 mA, about one complete day's energy
- Virtual-machine-based retasking: only small parts of the code need to be changed

25 Sensed Data, including the Nasty Portion
- Raw thermopile data from GDI during a 19-day period, 7/18-8/5/2002
- Shows the difference between ambient temperature and the object in the thermopile's field of view
- It indicates that the petrel left on 7/21, returned on 7/23, and again between 7/30 and 8/1

26 Health and Status Monitoring
- Monitor the mote's health and the health of neighboring motes
- The duty cycle can be dynamically adjusted to alter lifetime
- Periodically include the battery voltage level (0-3.3 volts) with sensor readings
- This can be used to infer the validity of the mote's sensor readings

27 What have we learned?
- A simple application scenario may yield enough systems insights
- Among different architectural options and component choices, which worked and which did not? (e.g., single hop vs. multihop)
- What is the main impeding factor for the systems design goals? (e.g., energy? accuracy?)
- There are enough problems coming out of a real application scenario (e.g., re-tasking); no need to fabricate new ones in your mind!

28 Case Study 2: Structural Monitoring: Wisden
- Structural health monitoring (SHM): detection and localization of damage in structures
- Structural response
  - Ambient vibration (earthquake, wind, etc.)
  - Forced vibration (large shaker)
- Current SHM systems
  - Sensors (accelerometers) placed at different locations on the structure
  - Connected to a centralized location via wires (cables) or single-hop wireless links
  - i.e., a wired or single-hop wireless data acquisition system

29 Real-World Deployment Scenario
- Seismic test structure: full-scale model of an actual hospital ceiling structure
- Four Seasons building: damaged four-storey office building subjected to forced vibration

30 Sensor Network for the Seismic Test Structure Scenario
- 10-node deployment
- Sampling at 50 Hz along three axes
- Transmission rate of 0.5 packets/sec
- Impulse excitation using hydraulic actuators

31 Motivation
- Are wireless sensor networks an alternative? Why WSN?
  - Scalable
  - Finer spatial sampling
  - Rapid deployment
- Wisden: a wireless multi-hop data acquisition system

32 Challenges
- Reliable data delivery: SHM is intolerant to data losses
- High aggregate data rate
  - Each node samples at 100 Hz or above
  - About 48 kbit/s for 10 nodes with three-axis, 16-bit samples at 100 Hz (see the calculation below)
- Data synchronization: synchronizing samples from different sources at the base station
- Resource constraints: limited bandwidth and memory
- Energy efficiency: future work
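The 48 kbit/s figure follows directly from the workload parameters; the three-axis sampling is taken from the seismic test structure setup described in the neighboring slides.

```python
# Aggregate data-rate check for the quoted Wisden workload.
nodes = 10
axes = 3                  # three-axis sampling, as in the seismic test setup
sample_bits = 16
sample_rate_hz = 100      # the 100 Hz rate quoted on this slide

aggregate_bps = nodes * axes * sample_bits * sample_rate_hz
print(f"{aggregate_bps / 1000:.0f} kbit/s")   # -> 48 kbit/s
```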

33 Wisden Architecture (challenges mapped to architecture components)
- Reliable data delivery -> Reliable data transport: hybrid hop-by-hop and end-to-end error recovery
- High data rate -> Compression: silence suppression and wavelet-based compression
- Data synchronization -> Residence-time calculation in the network

34 Reliable Data Transport
- Routing: nodes self-organize into a routing tree rooted at the base station (using Woo et al.'s work on routing tree construction)
- Reliability: hop-by-hop recovery
  - How? NACK-based, with piggybacking and overhearing; a NACK triggers retransmission at each hop (see the sketch below)
  - Why hop by hop? High packet loss
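Here is a minimal sketch of the receiver side of NACK-based hop-by-hop recovery; the per-sender sequence numbers, buffer handling, and send_nack callback are illustrative assumptions and do not reproduce Wisden's actual packet formats or piggybacking.

```python
# Sketch of NACK-based hop-by-hop recovery (receiver side).
# Sequence numbering and the send_nack callback are illustrative assumptions.

class HopReceiver:
    def __init__(self, send_nack):
        self.expected = 0           # next sequence number we expect
        self.missing = set()        # gaps detected so far
        self.send_nack = send_nack  # e.g. piggybacked on outgoing traffic

    def on_packet(self, seq, payload):
        if seq > self.expected:
            # A gap: remember the missing packets and ask the upstream
            # node to retransmit them (NACK, not per-packet ACK).
            gap = list(range(self.expected, seq))
            self.missing.update(gap)
            self.send_nack(gap)
        self.missing.discard(seq)
        self.expected = max(self.expected, seq + 1)
        return payload              # would be forwarded toward the sink

rx = HopReceiver(send_nack=lambda gap: print("NACK for", gap))
for s in (0, 1, 4, 2, 3):           # packets 2-3 arrive late
    rx.on_packet(s, b"sample-block")
```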

35 Reliable Data Transport (cont.)
- End-to-end packet recovery
  - How? Initiated by the base station (PC), using the same NACK mechanism as hop-by-hop recovery
  - Why? Topology changes lead to loss of missing-packet information, and missing-packet information may exceed the available memory
- Data transmission rate
  - The rate at which a node injects data is currently pre-configured for each node at R/N, where R is the nominal radio bandwidth and N is the total number of nodes
  - Adaptive rate allocation is part of future work

36 Compression
- Sampled data consumes a significant fraction of the radio bandwidth
- Event-based compression
  - Detect events based on the maximum difference in sample value over a variable window size
  - Quiescent periods: run-length encoding
  - Non-quiescent periods: no compression
- Savings are proportional to the duty cycle of the vibration
- Drawback: high latency
(Timeline in the original figure: quiescent period, event, quiescent period; compression, no compression, compression. A sketch of the encoding follows.)
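A minimal sketch of the quiescent/event split described above: event samples pass through uncompressed, and quiescent stretches are run-length encoded. The threshold and window parameters are illustrative assumptions.

```python
# Sketch of event-based compression: run-length encode quiescent samples,
# pass event samples through uncompressed. Threshold/window are assumptions.

def compress(samples, threshold=5, window=8):
    out = []
    i = 0
    while i < len(samples):
        win = samples[i:i + window]
        if max(win) - min(win) < threshold:          # quiescent window
            value, run = samples[i], 1
            while i + run < len(samples) and abs(samples[i + run] - value) < threshold:
                run += 1
            out.append(("run", value, run))          # run-length encoded
            i += run
        else:                                        # event: no compression
            out.append(("raw", samples[i]))
            i += 1
    return out

quiet = [100] * 50
event = [100, 140, 60, 180, 90, 130]
print(compress(quiet + event + quiet))
```

One reading of the latency drawback: a run-length token cannot be emitted until the run ends, so quiescent data is reported late.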

37 Compression for Low Latency
- Progressive storage and transmission
  - Event detection
  - Wavelet decomposition and local storage
  - Compression: low-resolution components are transmitted (sketched below)
  - Raw data, if required, is available from local (flash) storage, sent to the sink on demand via reliable data transport
- Current status
  - Evaluated in a standalone implementation
  - To be integrated into Wisden
(Pipeline in the original figure: event -> wavelet decomposition -> quantization, thresholding, run-length coding -> low-resolution components to the sink; raw data to flash storage, to the sink on demand.)
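The progressive idea can be sketched with an off-the-shelf wavelet library. The code below assumes PyWavelets (pywt) and an arbitrary wavelet family and decomposition depth; it is a stand-in for, not a reproduction of, the Wisden implementation (which also quantizes, thresholds, and run-length codes the coefficients).

```python
# Sketch of progressive wavelet transmission using PyWavelets (pywt).
# Wavelet family, level, and the "send approximation first" policy are
# assumptions for illustration; they are not Wisden's actual parameters.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * np.random.randn(1024)

# Multi-level decomposition: coeffs[0] is the coarse approximation,
# coeffs[1:] are detail coefficients from coarse to fine.
coeffs = pywt.wavedec(signal, "db4", level=4)

low_resolution = coeffs[0]            # transmitted immediately to the sink
details = coeffs[1:]                  # kept in local (flash) storage

# On demand, the sink requests the details and reconstructs the raw signal.
reconstructed = pywt.waverec([low_resolution] + details, "db4")
print(len(low_resolution), "coarse coefficients sent;",
      "reconstruction error:",
      float(np.max(np.abs(reconstructed[:len(signal)] - signal))))
```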

38 Data Synchronization
- Synchronize data samples at the base station: the generation time of each sample is expressed in terms of the base-station clock
- Network-wide clock synchronization is not necessary
- Light-weight approach (see the sketch below)
  - As each packet travels through the network, the time spent at each node is calculated using that node's local clock and added to a "residence time" field
  - The base station subtracts the residence time from the current time to get the sample generation time, e.g. T_A = T - (q_A + q_B), where q_A and q_B are the residence times at nodes A and B on the path
  - The time spent in the network defines the level of accuracy
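A minimal sketch of the residence-time bookkeeping, assuming each packet carries a residence_time field and that per-hop transmission delays are negligible relative to queuing time; the packet layout and clock helpers are illustrative.

```python
# Sketch of Wisden-style data timestamping via accumulated residence time.
# The packet layout and clock values here are illustrative assumptions.

def forward(packet, arrival_local, departure_local):
    """Each hop adds the time the packet spent at this node, measured
    with the node's own local clock (no network-wide sync needed)."""
    packet["residence_time"] += departure_local - arrival_local
    return packet

def stamp_at_base_station(packet, base_station_now):
    """The base station recovers the sample's generation time in its
    own clock: T_gen = T_arrival - total residence time in the network."""
    return base_station_now - packet["residence_time"]

pkt = {"sample": 1234, "residence_time": 0.0}
pkt = forward(pkt, arrival_local=10.0, departure_local=10.8)   # node A: 0.8 s
pkt = forward(pkt, arrival_local=3.2, departure_local=3.5)     # node B: 0.3 s
print("generation time ~", stamp_at_base_station(pkt, base_station_now=1000.0))
```

Because per-hop transmission and propagation delays are not accounted for, the accumulated residence time slightly understates the true time in the network, which matches the slide's remark that the time spent in the network defines the level of accuracy.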

39 Implementation
- Hardware
  - Mica2 motes
  - Vibration card (MDA400CA from Crossbow): high-frequency sampling (up to 20 kHz), 16-bit samples, programmable anti-aliasing filter
- Software
  - TinyOS
  - Additional software: a 64-bit clock component and modified vibration-card firmware

40 Seismic Test Structure Setup
- Setup
  - 10-node deployment
  - Sampling at 50 Hz along three axes
  - Transmission rate of 0.5 packets/sec
  - Impulse excitation using hydraulic actuators
- For validation
  - A node sending data to a PC over a serial port (wired node)
  - A co-located node sending data to the PC over the wireless multihop network (Wisden node)

41 Results: Frequency Response
- Low-frequency modes captured; high-frequency modes lost
- This is an artifact of the compression scheme we used
(Figures: power spectral density for the Wisden node and for the wired node.)

42 Results: Packet Reception and Latency
- Packet reception: 99.87% (cumulative over all nodes); 100% if we had waited longer
- Latency: 7 minutes to collect the data for 1 minute of vibration

43 What have we learned from this case study?
1. Application requirements motivate networking protocol design
   - Data reliability -> hop-by-hop + end-to-end recovery
   - Time synchronization -> the base station is the reference
2. In-network processing is needed: compression helps to reduce data communication
3. Again, no need to forge problems to solve at school or in your lab; look at applications in reality!

44 Case Study 3: Object Tracking
- Given a sensor network, use the sensors to determine the motion of one or more targets
- A canonical domain for DSNs: much of what we have seen so far is applicable (data routing, query propagation, wireless protocols)
- Typically requires more cooperation among entities than the other examples we have seen
- Compare: "is there an elephant out there?" vs. "where has that particular elephant been?"

45 Tracking Challenges
- Data dissemination and storage
- Resource allocation and control
- Operating under uncertainty
- Real-time constraints
- Data fusion (measurement interpretation)
- Multiple-target disambiguation
- Track modeling, continuity, and prediction
- Target identification and classification

46 Tracking Domains
- The appropriate strategy depends on the sensors' capabilities, the domain goals, and the environment
  - Are multiple measurements required? Is communication bounded? What are the target movement characteristics?
- No single solution for all problems; for example:
  - Limited bandwidth encourages local processing
  - Limited sensors require cooperation

47 Why Not Centralized? Scale!
- Data-processing combinatorics
- Resource bottleneck (communication, processing)
- Single point of failure
- Ignores the benefits of locality

48 Why Not (Fully) Distributed? (i.e., everyone tracks)
- Redundant information and computation
- Can increase uncertainty
- Lack of a unified view
- High communication costs (exception: overhearing [Fitzpatrick 2003])

49 Organization-Based Tracking
- Use structure and roles to control data and action flow
- Can be static, or dynamically evolved
- [Brooks 2003]: spontaneous coalition formation
- [Horling 2003]: partitions, mediated clustering
- [Li 2002]: hierarchical information fusion
- [Yadgar 2003]: hierarchical teams
- [Wang 2003]: roles and group formation
- [Zhao 2002]: geographic groups

50 Distributed Target Tracking
- Single target
- Fixed, acoustic sensors
- Requires multiple measurements
- Limited ad-hoc wireless network
- Track and classify the target (classification, which uses a supervised learning technique, is not discussed here)

51 Location-Centric Tracking
Operation steps at each node:
1. Initialization: disseminate sensor information
2. Receive candidates: describe approaching targets
3. Local detections: gather measurements
4. Merge detections: form track, compare candidates
5. Determine confidence: estimate uncertainty
6. Estimate track: predict future target location
7. Transmit track: notify relevant sensors

52 Location-Centric Tracking
- "Closest point of approach" (CPA) measurements: each node reports the time and signal magnitude of the target's CPA
- Target detection causes cell formation
  - Cells are formed around the target's estimated location and are intended to include the relevant sensors
- A manager is selected: the node with the greatest signal strength
- The manager collects the local CPAs and runs a linear regression over the CPA node locations (sketched below)
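One plausible reading of the regression step is a least-squares fit of reporting-node position against CPA time, which yields a straight-line track estimate. The NumPy sketch below uses made-up CPA data and illustrates that idea; it is not the paper's exact estimator.

```python
# Sketch: fit a straight-line track from CPA (closest point of approach)
# detections. The CPA times and positions below are made up for illustration.
import numpy as np

# Each row: (cpa_time_s, node_x_m, node_y_m) reported to the cell manager.
cpa = np.array([
    [0.0, 10.0, 5.0],
    [2.1, 18.0, 9.0],
    [3.9, 26.5, 13.2],
    [6.0, 35.0, 17.0],
])
t = cpa[:, 0]

# Least-squares fit x(t) = x0 + vx*t and y(t) = y0 + vy*t.
A = np.vstack([np.ones_like(t), t]).T
(x0, vx), _, _, _ = np.linalg.lstsq(A, cpa[:, 1], rcond=None)
(y0, vy), _, _, _ = np.linalg.lstsq(A, cpa[:, 2], rcond=None)

print(f"estimated velocity ({vx:.1f}, {vy:.1f}) m/s")
print(f"predicted position at t=8 s: ({x0 + vx * 8:.1f}, {y0 + vy * 8:.1f})")
```

The predicted position at a future time is what lets the manager size and place the next cell, as described on the following slide.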

53 Location-Centric Tracking (cont.)
- The estimated location is compared to prior tracks (projections from candidate tracks)
- A cell is created for the track in the new area; its size is a function of target velocity
- Track information is propagated to the cell
- Tracking repeats...

54 Location-Centric Advantages
- Avoids the combinatorial explosion of track association
  - Centralized: n targets, n candidate locations = n^2
  - Distributed: 1 target, n candidate locations = n
- Reduces communication costs (multi-hop ad hoc)
- Saves energy

55 Results
        No Filtering   Kalman     Lateral Inhibition
RMS     18.1           8.9        11.3
Comm    14,512 (?)     254,552    21,792

56 Organization-based Tracking
- Fixed Doppler radars
- Requires multiple, coordinated measurements
- Multiple targets
- Shared 8-channel RF communication

57 Sensor Characteristics
- Hardware
  - Fixed location and orientation
  - Three 120° radar heads
  - Agent controller
- Doppler radar
  - Amplitude and frequency data
  - One (asynchronous) measurement at a time

58 Scaling Issues
- Information retrieval: sensor locations, related target information
- Task assignment: scan scheduling, tracking
- Conflict propagation: competition for sensor time
Questions: How far must information travel? How to obtain data from many sources? How to cope with large quantities of data?

59 Organizational Control
- Use organization to address the scaling issues
- The environment is partitioned
  - Constrains information propagation
  - Reduces information load
  - Exploits locality
- Agents take on one or more roles
  - Limits sources of information
  - Facilitates data retrieval
- Other techniques are also built into the negotiation protocol and individual role behaviors

60 Organization Overview (roles in the diagram): Sector Manager, Tracking Manager, Scanning Agent, Tracking Agent

61 Typical Node Layout Nodes are arranged or scattered, and have varied orientations. One agent is assigned to each node.

62 Partitioning of Nodes The environment is first partitioned into sectors. Sector managers are then assigned.
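A minimal sketch of this partitioning step, assuming a simple grid of square sectors and an arbitrary manager-selection rule; the real system's partitioning strategy and selection criteria are not reproduced here.

```python
# Sketch: partition node positions into square sectors and pick a manager
# per sector. The grid size and the manager-selection rule are assumptions.
from collections import defaultdict

def partition(nodes, sector_size):
    """nodes: dict node_id -> (x, y). Returns sector -> list of member ids."""
    sectors = defaultdict(list)
    for node_id, (x, y) in nodes.items():
        sector = (int(x // sector_size), int(y // sector_size))
        sectors[sector].append(node_id)
    return sectors

def pick_managers(sectors):
    # Arbitrary illustrative rule: the lowest node id in each sector manages it.
    return {sector: min(members) for sector, members in sectors.items()}

nodes = {1: (3, 4), 2: (12, 5), 3: (14, 14), 4: (2, 13), 5: (11, 12)}
sectors = partition(nodes, sector_size=10)
print(dict(sectors))
print("sector managers:", pick_managers(sectors))
```

The sector_size parameter in this sketch plays the role of the sector-size knob whose tradeoffs are examined in the later slides.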

63 Competition for Sensor Agents Sector members send their capabilities to their managers. Each manager then generates and disseminates a scan schedule.

64 Track Manager Selection Nodes in the scan schedule perform scanning actions. Detections reported to manager, and a track manager selected.

65 Managing Conflicted Resources Track manager discovers and coordinates with tracking nodes. New tracking tasks may conflict with existing tasks at the node.

66 Data Fusion (Track Generation) Tracking data sent to an agent which performs the fusion. Results sent back to track manager for path prediction.

67 Communication Protocols
- More than 20 message types currently in use: results, sector management, track management, target detection, negotiation, directory services, reliable messaging, etc.
- (Figure: message counts from a typical three-minute, two-target, eight-node scenario.)

68 Protocol Usage Map (diagram): protocol types (DrA, DrQ, DrR, TB, RR, TD, PT, C, RB, PC, DA, U, ES) mapped to the Sector Manager, Tracking Manager, Scanning Agent, and Tracking Agent roles.

69 Sector Size
This one parameter affects many things...
- Sector manager load
  - Smaller sector -> smaller manager directory
  - Larger sector -> better sector coverage
- Track manager actions
  - Smaller sector -> fewer update messages
  - Larger sector -> fewer directory queries
- Also affects communication distance, agent activity, RMS error, and message-type counts
- Empirical evaluation of varying this parameter follows

70 Experimental Setup
- Radsim simulator
- 36 sensors
- 1-36 equal-sized sectors
- 4 mobile targets
- 10 runs per configuration
- Hypothesis: a sector size of 6-10 agents is best

71 Communication Characteristics
- Larger sectors with more agents lead to less messaging overall
  - Less tracking control
  - Fewer directory queries
- Smaller sectors mean more sectors to query and more tracking data to move

72 Load Disparity
- Large sectors increase the sector manager's communication load
  - More messages to handle
  - Greater disparity: the SM becomes a "hotspot"
- Greater disparity in activity load, while average action totals stay constant

73 Domain Metrics
- Communication distance increases with larger sectors
- Track migration is triggered by sector boundaries
- ...but RMS error is better with larger sectors: more measurements due to lower control overhead

74 What’s Best? Find inflection point in graphs’ intersection Empirical evidence supports sector size from 5-10 sensors This would vary, depending on sensor and environmental characteristics

75 What have we learned from this case study?
1. Scaling motivates in-network processing
   - The large number of nodes is not a troublemaker but a blessing: exploit it!
   - A hierarchical structure is useful for addressing scaling
2. Location-based design is a useful approach!
3. There are many tradeoffs among different performance metrics
   - Demonstrated here via parameter tuning
   - Select a tradeoff based on your application scenario

76 Summary
- Applications are only as good as your creativity (I may not be an expert on this)
- There are no magic solutions that work in all cases
  - Application-dependent/specific solutions are the way to go
  - There are tradeoffs between different factors, and which factor matters more depends on your application requirements:
    - Some care more about energy
    - Some care more about data quality (no lost data)
    - Some care more about latency

