1 Sensor Node Energy Roadmap. [Chart: average power (mW), log scale from 0.1 to 10,000, vs. year 2000-2004. Deployed systems (5 W) drop to the PAC/C baseline (0.5 W), then to 50 mW by rehosting to low-power COTS (10x), then to 1 mW with system-on-chip designs and advanced power-management algorithms (50x).] Source: ISI & DARPA PAC/C Program

2 Communication/Computation Technology Projection. Assume a 10 kbit/s radio with 10 m range. The large cost of communication relative to computation continues. Source: ISI & DARPA PAC/C Program

3 Design Issues. Unattended, long-life operation: communication is more energy-expensive than computation; 10^3 to 10^6 operations cost the same energy as transmitting one bit over 10-100 meters. Self-organizing: the network is ad hoc, unpredictable, and always changing. Scalable: protocols must scale to the size of the network, which requires distributed control.

4 Sample Layered Architecture. User queries, external database. In-network: application processing, data aggregation, query processing; data dissemination, storage, caching; adaptive topology control, routing; congestion control; MAC, time, location; Phy: comm, sensing, actuation, SP. Source: Kurose's slides. "Today's lecture" marks the layers covered here.

5 Impact of Data Aggregation in Wireless Sensor Networks. Slides adapted from those of the authors: B. Krishnamachari, D. Estrin, and S. Wicker.

6 Aggregation in Sensor Networks. Sensor data and events are redundant, and some services are amenable to in-network computation: "the network is the sensor." Communication can be more expensive than computation, so by performing computation on data en route to the sink we can reduce the amount of data traffic in the network. This increases energy efficiency as well as scalability: the bigger the network, the more computational resources it has.

7 Data Aggregation. Example: the sink asks "give me the average temperature." Source 1 and source 2 each send a temperature reading; the node where their routes meet aggregates the data before routing it on, so a single combined message (source 1 & 2) travels the remaining hops to the sink. In this example the aggregate is the average of the two readings.
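Averaging cannot simply be applied pairwise at every hop (an average of averages is wrong when counts differ), so a common trick is to carry a partial aggregate of (sum, count) and only divide at the sink. The sketch below illustrates that idea; it is an illustrative example, not code from the cited paper.

```python
# Illustrative sketch of in-network average aggregation using (sum, count)
# partial aggregates, so intermediate nodes can merge readings correctly.

def make_partial(reading):
    return (reading, 1)                 # (sum of readings, number of readings)

def merge(a, b):
    # executed at the node where two routes meet, before forwarding one message
    return (a[0] + b[0], a[1] + b[1])

def finalize(partial):
    s, n = partial
    return s / n                        # only the sink divides

# source 1 reads 20.0, source 2 reads 24.0; their paths merge at one node
p = merge(make_partial(20.0), make_partial(24.0))
print(finalize(p))                      # 22.0, the average delivered in one message
```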

8 Transmission Modes: AC vs. DC. a) Address-centric (AC) routing, no aggregation: source 1 and source 2 each send their own packet (1 and 2) via intermediate nodes A and B, so separate packets reach the sink. b) Data-centric (DC) routing, in-network aggregation: the packets are combined en route and a single aggregate packet (1+2) is forwarded to the sink. (Figure: the two routing trees side by side.)

9 Theoretical Results on Aggregation. Let there be k sources located within a diameter X, each a distance d_i from the sink. Let N_A and N_D be the number of transmissions required with AC routing and with the optimal DC protocol, respectively. 1. Upper and lower bounds can be given on N_D (detailed on the next slide). 2. There is an asymptotic result for fixed k and X as d = min(d_i) is increased (reconstructed below).

10 Theoretical Results (DC). Upper bound on N_D: route the other k - 1 sources to the source nearest the sink, each over at most X hops, then forward the aggregate, giving N_D <= (k - 1)X + min(d_i). The lower bound on N_D is achieved when X = 1, i.e., when all sources are within one hop of the nearest source. For AC routing, N_A >= k min(d_i).
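The bound formulas themselves did not survive the transcript. Reconstructed from the cited Krishnamachari, Estrin, and Wicker paper, and to be read as a sketch rather than a verbatim quote, they are roughly:

```latex
% Reconstruction of the slide's missing formulas (assumed, from the cited paper).
% d_min denotes \min_i d_i, the distance of the closest source to the sink.
(k - 1) + d_{\min} \;\le\; N_D \;\le\; (k - 1)\,X + d_{\min},
\qquad
N_A \;=\; \sum_{i=1}^{k} d_i \;\ge\; k\, d_{\min}.
% Asymptotic result: for fixed k and X, as d = d_{\min} \to \infty,
\lim_{d \to \infty} \frac{N_D}{N_A} \;=\; \frac{1}{k},
% i.e., the fractional transmission savings from aggregation approach (k-1)/k.
```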

11 Optimal Aggregation Tree: Steiner Trees. A Steiner tree is a minimum-weight tree connecting a designated set of vertices, called terminals, in a weighted graph or points in a space. The tree may include non-terminals, which are called Steiner vertices or Steiner points. (Figure: an example weighted graph and the Steiner tree connecting its terminals.) Definition taken from the NIST site: http://www.nist.gov/dads/HTML/steinertree.html

12 Aggregation Techniques. Center at Nearest Source (CNSDC): all sources send their information first to the source nearest the sink, which acts as the aggregator. Shortest Path Tree (SPTDC): opportunistically merge the shortest paths from each source wherever they overlap. Greedy Incremental Tree (GITDC): start with the path from the sink to the nearest source, then successively add the next-nearest source to the existing tree (a sketch of this heuristic follows below).
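As an illustration of the last heuristic, here is a minimal sketch of GIT on an unweighted graph, where hop counts stand in for edge weights. It is a reconstruction from the slide's description, not the authors' implementation.

```python
# Greedy Incremental Tree (GIT) sketch: grow an aggregation tree from the sink,
# repeatedly attaching the source that is closest (in hops) to the tree so far.
from collections import deque

def bfs_dist_and_parent(graph, start_set):
    """BFS from a set of start nodes; returns hop-distance and parent maps."""
    dist = {n: 0 for n in start_set}
    parent = {n: None for n in start_set}
    q = deque(start_set)
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent

def greedy_incremental_tree(graph, sink, sources):
    tree_nodes = {sink}
    tree_edges = set()
    remaining = set(sources)
    while remaining:
        dist, parent = bfs_dist_and_parent(graph, tree_nodes)
        # pick the unattached source nearest to the current tree
        s = min(remaining, key=lambda n: dist.get(n, float("inf")))
        # splice its shortest path into the tree by walking back along BFS parents
        node = s
        while node not in tree_nodes:
            tree_edges.add((parent[node], node))
            tree_nodes.add(node)
            node = parent[node]
        remaining.discard(s)
    return tree_edges

# usage: greedy_incremental_tree({'s': ['a'], 'a': ['s', 'b'], 'b': ['a']}, 's', ['b'])
```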

13 Aggregation Techniques (figures). a) Clustering-based CNS (cluster head, sink, shortest paths); b) Shortest Path Tree (nearest source, shortest path); c) Greedy Incremental tree.

14 Source Placement Models I: Event Radius (ER)

15 Source Placement Models II: Random Sources (RS)

16 Energy Costs in the Event-Radius Model. As R increases, the number of hops to the sink increases. CNS approaches the optimal when R is large.

17 Energy Costs in the Event-Radius Model. More savings with more sources.

18 Energy Costs in the Random Sources Model. GIT does not achieve the optimal.

19 Energy Costs in the Random Sources Model.

20 Aggregation Delay in the Event-Radius Model. In AC protocols there is no aggregation delay; data can start arriving with a latency proportional to the distance of the nearest source from the sink. In DC protocols the worst-case delay is proportional to the distance of the farthest source from the sink.

21 Aggregation Delay in the Random Sources Model. Although DC aggregation brings bigger energy savings, it incurs more latency.

22 Conclusions. Data aggregation can result in significant energy savings for a wide range of operational scenarios. Although optimal aggregation is NP-hard in general, polynomial heuristics such as the opportunistic SPTDC and the greedy GITDC are near-optimal in general and can provide optimal solutions in useful special cases. The gains from aggregation are paid for with potentially higher delay.

23 Congestion Control in Wireless Sensor Networks. Adapted from: 1. Kurose and Ross, Computer Networking: A Top-Down Approach. 2. B. Hull et al., Mitigating Congestion in Wireless Sensor Networks.

24 Principles of Congestion Control. Congestion, informally: "too many sources sending too much data too fast for the network to handle." Different from flow control! Manifestations: lost packets (buffer overflow at routers) and long delays (queueing in router buffers). A top-10 problem!

25 Causes/costs of congestion: scenario 1. Two senders, two receivers; one router with infinite buffers; no retransmission. Host A offers λ_in (original data) and Host B receives λ_out over unlimited shared output-link buffers. Result: large delays when congested, and a maximum achievable throughput.

26 Causes/costs of congestion: scenario 2. One router, finite buffers; the sender retransmits lost packets. Host A offers λ_in (original data) and λ'_in (original data plus retransmitted data) into finite shared output-link buffers; Host B receives λ_out.

27 Causes/costs of congestion: scenario 2 (continued). Always: λ_in = λ_out (goodput). With "perfect" retransmission, i.e., retransmitting only when a packet is actually lost: λ'_in > λ_out. Retransmission of delayed (not lost) packets makes λ'_in larger than in the perfect case for the same λ_out. "Costs" of congestion: more work (retransmissions) for a given goodput, and unneeded retransmissions mean a link carries multiple copies of a packet. (Figure: three λ_out-vs-λ_in plots, with throughput limited to R/2 and falling to values such as R/3 and R/4 as retransmissions grow.)

28 Causes/costs of congestion: scenario 3. Four senders, multihop paths, timeout/retransmit. Q: what happens as λ_in and λ'_in increase? Each host offers λ_in (original data) and λ'_in (original data plus retransmitted data) into finite shared output-link buffers.

29 Causes/costs of congestion: scenario 3 (continued). Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted! (Figure: λ_out collapses as offered load keeps growing: congestion collapse.)

30 Goals of congestion control. Efficient use of network resources: try to keep the input rate as close to the output rate as possible while keeping network utilization high. Fairness: many flows compete for resources and need to share them; no starvation. There are many definitions of fairness, and equitable use is not necessarily fair.

31 Congestion is a problem in wireless networks. It is difficult to provision bandwidth in wireless networks: the channel is unpredictable and time-varying, the channel (i.e., the air) is shared by multiple neighboring nodes, network size and density vary, and traffic patterns are diverse. If unmanaged, congestion leads to congestion collapse.

32 Outline. Quantify the problem in a sensor network testbed. Examine techniques to detect and react to congestion. Evaluate the techniques, individually and in concert. Explain which ones work and why.

33 Investigating congestion. 55-node Mica2 sensor network spanning multiple hops. Traffic pattern: all nodes route to one sink. B-MAC [Polastre], a CSMA MAC layer. (Testbed floor plan: 100 ft across, 16,076 sq. ft.)

34 Congestion dramatically degrades channel quality.

35 Why does channel quality degrade? Wireless is a shared medium: hidden-terminal collisions, and many far-away transmissions corrupt packets. (Figure: sender and receiver with interfering transmitters.)

36-39 Per-node throughput distribution (a sequence of figure-only slides).

40 Hop-by-hop flow control: queue-occupancy-based congestion detection. Each node has an output packet queue; monitor the instantaneous output queue occupancy, and if it exceeds α, indicate local congestion.
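A minimal sketch of this detector follows. The queue capacity and the value of α are assumptions; the slides do not give the parameters Fusion actually uses.

```python
# Sketch of queue-occupancy congestion detection (slide 40).
from collections import deque

ALPHA = 0.75  # fraction of queue capacity that signals local congestion (assumed)

class OutputQueue:
    def __init__(self, capacity=32):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False          # buffer drop
        self.q.append(pkt)
        return True

    def locally_congested(self):
        # instantaneous occupancy check against the threshold α
        return len(self.q) / self.capacity > ALPHA
```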

41 Hop-by-hop congestion control: hop-by-hop backpressure. Every packet header has a congestion bit; if locally congested, a node sets the bit. Children snoop the downstream traffic of their parent. A congestion-aware MAC gives priority to congested nodes. (Figure: a packet header carrying the congestion bit, 0/1.)
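A hedged sketch of the backpressure mechanism, assuming the OutputQueue from the previous sketch; the class and field names are illustrative, not the Fusion implementation.

```python
# Hop-by-hop backpressure sketch (slide 41): advertise local congestion in every
# outgoing packet and learn the parent's state by snooping its transmissions.

class Packet:
    def __init__(self, payload, congestion_bit=False):
        self.payload = payload
        self.congestion_bit = congestion_bit

class Node:
    def __init__(self, queue):
        self.queue = queue               # an OutputQueue-like object
        self.parent_congested = False    # learned by snooping the parent

    def transmit(self, payload):
        # carry our own congestion state in every outgoing packet header
        return Packet(payload, congestion_bit=self.queue.locally_congested())

    def snoop(self, overheard_pkt):
        # overhearing any packet the parent sends tells us whether to back off
        self.parent_congested = overheard_pkt.congestion_bit

    def may_send_to_parent(self):
        return not self.parent_congested
```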

42 Rate limiting: source rate limiting. Count your parent's number of sourcing descendants, N, and send one packet (per source) only after the parent sends N. This limits your sourced traffic rate even when hop-by-hop flow control is not exerting backpressure.
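A minimal sketch of this rule, assuming a simple token scheme in which a node may originate one packet after overhearing its parent forward N packets. Names are illustrative, not from the Fusion code.

```python
# Source rate limiting sketch (slide 42).

class SourceRateLimiter:
    def __init__(self, parent_sourcing_descendants):
        self.N = max(1, parent_sourcing_descendants)
        self.heard_since_last_send = 0

    def on_parent_forwarded(self):
        # called each time we overhear the parent transmit a packet
        self.heard_since_last_send += 1

    def may_source_packet(self):
        # one sourced packet is permitted per N parent transmissions
        if self.heard_since_last_send >= self.N:
            self.heard_since_last_send = 0
            return True
        return False
```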

43 Related work. Hop-by-hop congestion control: Wan et al., SenSys 2003; ATM and switched Ethernet networks. Rate limiting: Ee and Bajcsy, SenSys 2004; Wan et al., SenSys 2003; Woo and Culler, MobiCom 2001. Prioritized MAC: Aad and Castelluccia, INFOCOM 2001.

44 Congestion control strategies. No congestion control: nodes send at will. Occupancy-based hop-by-hop CC: detects congestion with queue length and exerts hop-by-hop backpressure. Source rate limiting: limits the rate of sourced traffic at each node. Fusion: combines occupancy-based hop-by-hop flow control with source rate limiting.

45 Evaluation setup. Periodic workload; three link-level retransmits; all nodes route to one sink using ETX; an average of five hops to the sink; -10 dBm transmit power; 10 neighbors on average. (Same 100 ft, 16,076 sq. ft. testbed.)

46 Metric: network efficiency. Penalizes dropped packets (buffer drops, channel losses) and wasted retransmissions. Interpretation: the fraction of transmissions that contribute to data delivery. Examples: 2 packets from the bottom node, no channel loss, 1 buffer drop, 1 received: η = 2/(1+2) = 2/3; 1 packet, 3 transmits, 1 received: η = 1/3.
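One reading consistent with both slide examples is that η counts, for every packet delivered to the sink, the hops it traversed, divided by all transmissions made in the network. The sketch below implements that reading; it is an inference from the slide, not necessarily the exact formula in the Hull et al. paper.

```python
# Network efficiency sketch (slide 46), under the assumption stated above.

def network_efficiency(total_transmissions, delivered_path_lengths):
    """total_transmissions: every link-level transmission made, including
    retransmissions and packets later dropped.
    delivered_path_lengths: for each packet that reached the sink, the number
    of hops it traversed (each such hop is a transmission that 'contributed')."""
    useful = sum(delivered_path_lengths)
    return useful / total_transmissions if total_transmissions else 1.0

# Slide examples:
print(network_efficiency(3, [1]))   # 1 packet, 3 transmits, 1 received -> 1/3
print(network_efficiency(3, [2]))   # 2 sent by bottom node, 1 buffer drop,
                                    # delivered packet crossed 2 hops -> 2/3
```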

47 Hop-by-hop CC improves efficiency.

48 Hop-by-hop CC conserves packets. (Figures: no congestion control vs. hop-by-hop CC.)

49 Metric: imbalance. ζ = 1: a node delivers all the data it receives; larger ζ: more data is not delivered. Interpretation: a measure of how well node i can deliver received packets to its parent.
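The defining formula for ζ did not survive the transcript. The sketch below is only a plausible reading consistent with "ζ = 1 when all received data is delivered, larger when more data goes undelivered", not necessarily the paper's exact definition.

```python
# Imbalance sketch (slide 49), under the assumption stated above.

def imbalance(packets_in, packets_delivered_to_parent):
    """packets_in: packets node i received from its children (plus any it
    originated); packets_delivered_to_parent: packets it got through to its parent."""
    if packets_delivered_to_parent == 0:
        return float("inf")   # nothing delivered at all
    return packets_in / packets_delivered_to_parent
```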

50 Periodic workload: imbalance.

51 Rate limiting decreases sink contention. (Figures: no congestion control vs. rate limiting only.)

52 Rate limiting provides fairness.

53 Hop-by-hop flow control prevents starvation.

54 Fusion provides fairness and prevents starvation.

55 Synergy between rate limiting and hop-by-hop flow control.

56 Alternatives for congestion detection. Queue occupancy. Packet loss rate: TCP uses loss to infer congestion; keep link statistics and stop sending when the drop rate increases. Channel sampling [Wan03]: carrier-sense the channel periodically, and declare congestion when the carrier is sensed busy more than a fraction of the time.
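A minimal sketch of the channel-sampling alternative; the busy-fraction threshold and sampling window are assumptions, since the slides give neither.

```python
# Channel-sampling congestion detection sketch (slide 56, [Wan03]).
from collections import deque

BUSY_FRACTION_THRESHOLD = 0.5   # assumed, not from the slides
WINDOW = 100                    # number of recent carrier-sense samples kept (assumed)

class ChannelSampler:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def record_sample(self, channel_busy):
        # called each sampling period with the result of a carrier-sense check
        self.samples.append(1 if channel_busy else 0)

    def congested(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > BUSY_FRACTION_THRESHOLD
```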

57 Comparing congestion detection methods.

58 Correlated-event workload. Goal: evaluate congestion under an impulse of traffic. Generate events seen by all nodes at the same time; at the event time each node sends B back-to-back packets (the "event size") and then waits long enough for the network to drain.

59 Small amounts of event-driven traffic cause congestion.

60 Software architecture. Fusion is implemented as a congestion-aware queue above the MAC, so applications need not be aware of the congestion control implementation. (Stack, top to bottom: Application, Routing, Fusion Queue, MAC, CC1000 Radio.)
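A hedged sketch of how such a congestion-aware queue could be interposed between routing and the MAC, tying together the earlier sketches. The interfaces are illustrative, not the TinyOS/Fusion APIs.

```python
# Fusion-style queue layer sketch (slide 60): routing hands packets down
# unchanged; this layer applies rate limiting and backpressure before the MAC.

class FusionQueue:
    def __init__(self, mac, output_queue, rate_limiter):
        self.mac = mac                    # layer below (duck-typed, needs .transmit)
        self.queue = output_queue         # OutputQueue sketch from slide 40
        self.rate_limiter = rate_limiter  # SourceRateLimiter sketch from slide 42
        self.parent_congested = False     # set by snooping, as in slide 41

    def send_from_routing(self, pkt, locally_sourced):
        # decide whether the packet may enter the output queue at all
        if locally_sourced and not self.rate_limiter.may_source_packet():
            return False                  # hold back sourced traffic
        return self.queue.enqueue(pkt)

    def pump(self):
        # drain toward the MAC unless the parent has signalled backpressure
        while self.queue.q and not self.parent_congested:
            pkt = self.queue.q.popleft()
            pkt.congestion_bit = self.queue.locally_congested()
            self.mac.transmit(pkt)
```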

61 Summary. Congestion is a problem in wireless sensor networks, and Fusion's techniques mitigate it: queue occupancy detects congestion, hop-by-hop flow control improves efficiency, and source rate limiting improves fairness. Fusion improves efficiency by 3× and eliminates starvation. http://nms.csail.mit.edu/fusion

