SCALABLE INFORMATION-DRIVEN SENSOR QUERYING AND ROUTING FOR AD HOC HETEROGENEOUS SENSOR NETWORKS Paper By: Maurice Chu, Horst Haussecker, Feng Zhao Presented By: D.M. Rasanjalee Himali

INTRODUCTION
Problem addressed: how to dynamically query sensors and route data in a network so that information gain is maximized while latency and bandwidth consumption are minimized.
Approach: information-driven sensor querying (IDSQ) to optimize sensor selection, and constrained anisotropic diffusion routing (CADR) to direct data routing and incrementally combine sensor measurements so as to minimize an overall cost function.

INTRODUCTION
Use information utility measures to optimize sensor selection.
Use incremental belief updates.
Each node can: evaluate an information/cost objective, make a decision, update its belief state, and route data based on the local information/cost gradient and end-user requirements.

SENSING MODEL AND MEASURE OF UNCERTAINTY
Uses standard estimation theory. The measurement model is z_i(t) = h(x(t), λ_i(t)), where:
z_i(t): the time-dependent measurement of sensor i
λ_i(t): the characteristics of sensor i
x(t): the unknown target position
h: a possibly non-linear function depending on x(t) and parameterized by λ_i(t)
The characteristics λ_i(t) of sensor i include: the sensing modality (type of sensor i), the sensor position x_i, the noise model of sensor i, and the node power reserve of sensor i.
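As a concrete illustration, the following Python sketch instantiates one possible h: an amplitude-decay (acoustic-style) sensor with additive Gaussian noise. The decay law, gain, and noise level are illustrative assumptions, not the paper's parameterization.

import numpy as np

def h(x, sensor):
    # Amplitude decays with squared distance between the target position x
    # and the sensor position (one possible choice of h, assumed here).
    dist = np.linalg.norm(x - sensor["position"])
    return sensor["gain"] / (1.0 + dist**2)

def measure(x, sensor, rng):
    # z_i = h(x, lambda_i) + v_i, with v_i drawn from the sensor's noise model.
    return h(x, sensor) + rng.normal(0.0, sensor["noise_std"])

rng = np.random.default_rng(0)
sensor = {"position": np.array([1.0, 2.0]),  # x_i
          "gain": 10.0,                      # modality parameter
          "noise_std": 0.1,                  # noise model
          "power": 1.0}                      # node power reserve
z = measure(np.array([3.0, 4.0]), sensor, rng)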

SENSING MODEL AND MEASURE OF UNCERTAINTY
Belief: a representation of the current a posteriori distribution of x given the measurements z_1,...,z_N, i.e. p(x | z_1,...,z_N).
Typically, the expected value of this distribution is taken as the estimate:
x̄ = ∫ x p(x | z_1,...,z_N) dx
Residual uncertainty is approximated by the covariance:
Σ = ∫ (x − x̄)(x − x̄)^T p(x | z_1,...,z_N) dx
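A minimal sketch of these two statistics, assuming the belief is represented by weighted samples (the paper does not prescribe a particular representation; a grid or particle set both fit this form):

import numpy as np

def belief_estimate(samples, weights):
    # x_bar = sum_k w_k x_k, the expected value of the posterior.
    w = weights / weights.sum()
    return w @ samples

def belief_covariance(samples, weights):
    # Sigma = sum_k w_k (x_k - x_bar)(x_k - x_bar)^T.
    w = weights / weights.sum()
    x_bar = w @ samples
    d = samples - x_bar
    return (w[:, None] * d).T @ d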

SENSING MODEL AND MEASURE OF UNCERTAINTY
Knowledge of the measurement value z_i and the sensor characteristics λ_i normally resides only in sensor i. To compute the belief based on measurements from several sensors, we must pay a cost for communicating that information.

SENSING MODEL AND MEASURE OF UNCERTAINTY
Incorporating measurements into the belief is therefore assigned a cost. We should intelligently choose a subset of sensor measurements that:
provides "good" information for constructing a belief state, and
minimizes the communication cost of the sensor measurements.
Information content of sensor i: a measure of the information a sensor measurement can provide to a belief state.

SENSOR SELECTION
Given the current belief state, we need to incrementally update the belief by incorporating measurements from sensors not yet considered. However, among all available sensors in the network, not all provide useful information that improves the estimate; furthermore, some information might be useful but redundant. The task is to select an optimal subset of sensors and to decide on an optimal order in which to incorporate their measurements into the belief update. This yields a faster reduction in estimation uncertainty.

SENSOR SELECTION
Assume there are N sensors labeled 1 to N, and the corresponding measured values of the sensors are {z_i}, 1 ≤ i ≤ N. Let U ⊂ {1,...,N} be the set of sensors whose measurements have been incorporated into the belief. The current belief is:
p(x | {z_i}_{i∈U})

SENSOR SELECTION
Information utility function: the sensor selection task is to choose the sensor that has not yet been incorporated into the belief and that provides the most information.
Def. (information utility): ψ : P(R^d) → R acts on the class P(R^d) of all probability distributions on R^d and returns a real number, with d being the dimension of x.

SENSOR SELECTION  assign a value to each element p ∈ P(R d ) which indicates how spread out or uncertain the distribution p is. Smaller values represent a more spread out distribution while larger values represent a tighter distribution.

SENSOR SELECTION
Incorporating a measurement z_j, where j ∉ U, into the current belief state p(x | {z_i}_{i∈U}) is accomplished by further conditioning the belief on the new measurement. Hence, the new belief state is:
p(x | {z_i}_{i∈U} ∪ {z_j})
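A minimal sketch of this conditioning step for a gridded belief, via Bayes' rule p(x | {z_i}_{i∈U} ∪ {z_j}) ∝ p(z_j | x) p(x | {z_i}_{i∈U}). The Gaussian likelihood, and the reuse of h from the earlier sketch, are assumptions for illustration:

import numpy as np

def update_belief(belief, grid, z_j, sensor, h, noise_std):
    # Likelihood p(z_j | x) under an assumed Gaussian noise model,
    # evaluated at every candidate target position on the grid.
    pred = np.array([h(x, sensor) for x in grid])
    lik = np.exp(-0.5 * ((z_j - pred) / noise_std) ** 2)
    post = belief * lik          # Bayes' rule: prior times likelihood
    return post / post.sum()     # renormalize to a distribution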

SENSOR SELECTION
Incorporating a measurement z_j has the effect of mapping an element of P(R^d) to another element of P(R^d). Since ψ gives a measure of how "tight" a distribution in P(R^d) is, the best sensor ĵ ∈ A = {1,...,N} − U to choose is:
ĵ = arg max_{j∈A} ψ(p(x | {z_i}_{i∈U} ∪ {z_j}))

SENSOR SELECTION
However, in practice we only have knowledge of h and the sensor characteristics λ_j when deciding which sensor to choose; we do not know the measurement value z_j before it is sent. Nevertheless, we wish to select the "most likely" best sensor. Hence, it is necessary to marginalize out the particular value of z_j.

SENSOR SELECTION
For any given value of z_j for sensor j, we get a particular value of ψ acting on the new belief state p(x | {z_i}_{i∈U} ∪ {z_j}). For each sensor j, consider the set of all values ψ can take over the choices of z_j; a sensor can then be selected by one of several criteria (see the sketch below):
best average case,
maximizing the worst case, or
maximizing the best case.
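A minimal Monte Carlo sketch of this marginalization: sample hypothetical measurements z_j from the predictive distribution implied by the current belief, score each hypothetical posterior with ψ, and aggregate. The function names and the sampling scheme are illustrative; update_belief and h are reused from the sketches above:

import numpy as np

def score_sensor(belief, grid, sensor, h, psi, rng, n_draws=50):
    # Returns (average, worst, best) utility over sampled z_j values,
    # matching the three selection criteria on the slide.
    scores = []
    for _ in range(n_draws):
        x = grid[rng.choice(len(grid), p=belief)]            # x ~ current belief
        z_j = h(x, sensor) + rng.normal(0.0, sensor["noise_std"])
        post = update_belief(belief, grid, z_j, sensor, h, sensor["noise_std"])
        scores.append(psi(post, grid))
    scores = np.array(scores)
    return scores.mean(), scores.min(), scores.max()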

INFORMATION UTILITY MEASURES
To quantify the information gain provided by a sensor measurement, it is necessary to define a measure of information utility. The intuition: information content is inversely related to the "size" of the high-probability uncertainty region of the estimate of x. Examples:
covariance-based measures,
the Fisher information matrix,
entropy of the estimation uncertainty, and
volume of the high-probability region.

INFORMATION UTILITY MEASURES
Covariance-based: used in the simplest case, a uni-modal posterior distribution that can be approximated by a Gaussian. Utility measures are derived from the covariance Σ of the distribution p_X(x). The determinant det(Σ) is proportional to the volume of the rectangular region enclosing the covariance ellipsoid. Hence, the information utility function for this approximation can be chosen as:
ψ(p_X) = −det(Σ)
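A minimal sketch of this utility for a gridded belief (psi_covariance is an illustrative name; the negation makes tighter beliefs score higher, as required):

import numpy as np

def psi_covariance(belief, grid):
    # Posterior mean and covariance of the gridded belief.
    x_bar = belief @ grid
    d = grid - x_bar
    sigma = (belief[:, None] * d).T @ d
    # psi(p_X) = -det(Sigma): smaller uncertainty volume -> larger utility.
    return -np.linalg.det(sigma)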

INFORMATION UTILITY MEASURES
Entropy of estimation uncertainty: if the distribution of the estimate is highly non-Gaussian, then the covariance Σ is a poor statistic of the uncertainty. One possible utility measure is the information-theoretic notion of information: the entropy of a random variable. For a discrete random variable X taking values in a finite set S, the Shannon entropy H(X) is defined to be:
H(X) = −∑_{x∈S} P(X = x) log P(X = x)

INFORMATION UTILITY MEASURES
Entropy is a measure of uncertainty, which is inversely related to our notion of information utility. Thus we can define the information utility as:
ψ(p_X) = −H(X)
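A minimal sketch of the entropy-based utility for a discretized belief, with the same (belief, grid) signature as psi_covariance so the two are interchangeable in the selection sketches above:

import numpy as np

def psi_entropy(belief, grid=None):
    # H = -sum_k p_k log p_k over grid cells; zero-probability cells
    # contribute nothing (0 log 0 := 0). Returns psi = -H.
    p = belief[belief > 0]
    return float(np.sum(p * np.log(p)))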

COMPOSITE OBJECTIVE FUNCTION
Up to now, we have ignored: the communication cost of transmitting information across the network, and which sensor actually holds the current belief.
Leader node: the sensor l which holds the current belief.

COMPOSITE OBJECTIVE FUNCTION
In one scenario, the leader node acts as a relay station to the user: the belief resides at this node for an extended time interval, and all information has to travel to this leader. In another scenario, the belief itself travels through the network, and nodes are dynamically assigned as leaders. Depending on the network architecture and the measurement task, either case or a mixture of both can be implemented.

COMPOSITE OBJECTIVE FUNCTION
Assume that: the leader node temporarily holds the belief state, and information has to travel a certain distance through the network to be incorporated into the belief state.

COMPOSITE OBJECTIVE FUNCTION
The objective function for sensor querying and routing is a function of both the information utility and the cost of bandwidth and latency.

COMPOSITE OBJECTIVE FUNCTION
This can be expressed by a composite objective function M_c of the form:
M_c(p(x | {z_i}_{i∈U}), λ_j) = γ M_u(p(x | {z_i}_{i∈U}), λ_j) − (1 − γ) M_a(λ_l, λ_j)
where M_u is the information utility ψ evaluated for candidate sensor j, M_a is the cost of the bandwidth and latency of communicating information between sensor j and sensor l, and the tradeoff parameter γ ∈ [0,1] balances the contribution from the two terms.

COMPOSITE OBJECTIVE FUNCTION
The objective is to maximize M_c by selecting a sensor ĵ from the remaining sensors A = {1,...,N} − U:
ĵ = arg max_{j∈A} M_c(p(x | {z_i}_{i∈U}), λ_j)
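A minimal sketch of this selection, assuming Euclidean distance to the leader as a stand-in for the bandwidth/latency cost M_a (the paper's cost model is more general; select_sensor and the distance proxy are illustrative):

import numpy as np

def select_sensor(candidates, utilities, leader_pos, gamma=0.5):
    # candidates: sensor dicts for A = {1..N} - U; utilities: M_u per candidate.
    best_j, best_mc = None, -np.inf
    for j, (sensor, m_u) in enumerate(zip(candidates, utilities)):
        m_a = np.linalg.norm(sensor["position"] - leader_pos)  # comm. cost proxy
        m_c = gamma * m_u - (1.0 - gamma) * m_a                # composite objective
        if m_c > best_mc:
            best_j, best_mc = j, m_c
    return best_j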

INFORMATION-DRIVEN SENSOR QUERY
IDSQ is a sensor selection algorithm based on the cluster-leader type of distributed processing protocol. Assume we have a cluster of N sensors, each labelled by a unique integer in {1,...,N}. A priori, each sensor i only has knowledge of its own position x_i ∈ R^2.

Select the cluster leader; it is activated if a target is present in the sensor cluster. A sketch of the leader-side query loop, built from the pieces above, follows below.
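A minimal end-to-end sketch of the leader-side IDSQ loop under the assumptions of the previous sketches: repeatedly score the unused sensors, pick the one maximizing the composite objective, query it, and fold its measurement into the belief. query() is a hypothetical stand-in for the actual network request:

def idsq(belief, grid, sensors, leader_pos, h, psi, query, rng,
         rounds=5, gamma=0.5):
    used = set()
    for _ in range(rounds):
        candidates = [i for i in range(len(sensors)) if i not in used]
        if not candidates:
            break
        # Expected utility per candidate ("best average case" criterion).
        utils = [score_sensor(belief, grid, sensors[i], h, psi, rng)[0]
                 for i in candidates]
        pick = select_sensor([sensors[i] for i in candidates], utils,
                             leader_pos, gamma)
        j = candidates[pick]
        z_j = query(sensors[j])  # hypothetical network query to sensor j
        belief = update_belief(belief, grid, z_j, sensors[j], h,
                               sensors[j]["noise_std"])
        used.add(j)
    return belief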