Novel Methods for Sensor Network Design. Dr. Miguel Bagajewicz, Sanjay Kumar, DuyQuang Nguyen

The Sensor Network Design Problem
Minimize the cost of instrumentation while satisfying constraints on attributes such as accuracy, precision, reliability, residual accuracy, etc.

The Sensor Network Design Problem
Minimize the cost of instrumentation such that the accuracy of S3 = 7% and of S7 = 8%. Similar constraints can be imposed on residual accuracy, reliability, precision, etc.

How to find the optimal solution?
Tree Enumeration Procedure: at each node, calculate the accuracy (and any other attributes mandated by the constraints) and compare with the thresholds. If the node is feasible, stop and explore its sister nodes; if infeasible, go down the tree.
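The procedure just described can be sketched in code. This is a minimal illustration only: the toy feasibility test stands in for the accuracy/reliability evaluation, and all names are hypothetical, not from the slides.

```python
# Sketch of the tree enumeration: a node is a tuple of binary flags
# (1 = stream measured).  Starting from no sensors, sensors are added
# going down the tree; a feasible node is recorded and not deepened
# further (adding sensors only raises cost), then its sisters are tried.
def enumerate_tree(n_streams, is_feasible):
    feasible = []

    def explore(node, start):
        if is_feasible(node):
            feasible.append(node)          # feasible: stop, try sister nodes
            return
        for i in range(start, n_streams):  # infeasible: go down the tree
            explore(node[:i] + (1,) + node[i + 1:], i + 1)

    explore((0,) * n_streams, 0)
    return feasible

# Toy stand-in for the feasibility check: at least two of streams 0-2 measured.
nodes = enumerate_tree(4, lambda n: sum(n[:3]) >= 2)
```

The `start` index guarantees each sensor subset is visited exactly once; in the real procedure the feasibility callback would evaluate accuracy, reliability, and the other constrained attributes.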

Modified Tree Enumeration Procedure
The tree enumeration procedure can be made computationally effective by using cutsets instead of streams (Bagajewicz and Gala, 2006a). Efficiency is increased further by decomposing the graph into subgraphs (Bagajewicz and Gala, 2006b).
Gala M. and M. Bagajewicz (2006a). "Rigorous Methodology for the Design and Upgrade of Sensor Networks using Cutsets". Industrial and Engineering Chemistry Research, Vol. 45, No. 21.
Gala M. and M. Bagajewicz (2006b). "Efficient Procedure for the Design and Upgrade of Sensor Networks using Cutsets and Rigorous Decomposition". Industrial and Engineering Chemistry Research, Vol. 45, No. 21.

Software Accuracy
Accuracy has conventionally been defined as the sum of the absolute value of the systematic error and the standard deviation of the meter (Miller, 1996). Since that definition is of limited practical value, the accuracy of a stream can instead be defined as the sum of its precision and the maximum induced bias in that stream (Bagajewicz, 2005).

Software Accuracy
Software accuracy = precision + maximum induced bias.
The maximum induced bias in a stream 'i' due to a gross error in 's' is obtained using the maximum power measurement test, where 'A' is the incidence matrix and 'S' is the variance-covariance matrix of the measurements.
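The expression itself did not survive the transcript. As a sketch only, using the slide's symbols and assuming the standard linear data-reconciliation estimator (the exact formula on the slide may differ):

```latex
% Linear data reconciliation estimator (sketch):
% A = incidence matrix, S = variance-covariance matrix, x_m = measurements.
\hat{x} = \left[ I - S A^{\mathsf{T}} \left( A S A^{\mathsf{T}} \right)^{-1} A \right] x_m
% A bias of size \delta_s in measurement s then induces in the estimate
% of stream i (i \neq s) a bias of magnitude
\delta_{i,s} = \left| \left[ S A^{\mathsf{T}} \left( A S A^{\mathsf{T}} \right)^{-1} A \right]_{is} \right| \, \delta_s
% with \delta_s taken as the largest bias that remains undetected by the
% maximum power measurement test.
```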

Software Accuracy in the Presence of n_T Gross Errors
In the presence of n_T gross errors in positions given by a set T, there is a corresponding induced bias in variable 'i'. All possible combinations of gross error locations must be explored, so the problem can be stated using a binary vector.
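The slide's equations are missing from the transcript; a hedged sketch of the natural generalization, in the same notation as the single-error case:

```latex
% Induced bias in variable i from gross errors in the set T (sketch):
\delta_{i,T} = \Bigl| \sum_{s \in T}
  \left[ S A^{\mathsf{T}} \left( A S A^{\mathsf{T}} \right)^{-1} A \right]_{is}
  \delta_s \Bigr|
% Encoding the error locations with a binary vector q
% (q_s = 1 iff s \in T, with \sum_s q_s = n_T), the worst case over
% locations can be written as
\delta_i^{\max} = \max_{q} \; \delta_{i,T(q)}
```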

Gross Error Equivalency
When there is more than one gross error, two gross errors may be equal in magnitude but opposite in sign, so that they tend to cancel each other.
[Slide figure: streams S1, S2, S3]

Residual Accuracy
Residual accuracy of order 'k' is the software accuracy once 'k' gross errors have been identified and the corresponding measurements eliminated.

Estimation Reliability
The probability with which a variable 'i' can be estimated, using its own measurement or through material balance equations, in the time interval [0, t].

Cutset
A cutset is a set of edges (streams) whose elimination separates the graph into two disjoint subgraphs; deleting only a proper subset of the edges in a cutset does not separate the graph. Streams 8, 6, 2 form a cutset; streams 2, 3 form another. There are several others.
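A quick numeric check of this definition; the 4-node cycle below is a hypothetical graph, not the flowsheet on the slide.

```python
# Verify both halves of the cutset definition on a small graph:
# removing all cutset edges disconnects it; removing a proper subset
# does not.
def connected(graph_nodes, edges):
    """Reachability over an undirected edge set from an arbitrary node."""
    seen, stack = set(), [next(iter(graph_nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for a, b in edges:
            if a == n:
                stack.append(b)
            elif b == n:
                stack.append(a)
    return seen == set(graph_nodes)

graph_nodes = {1, 2, 3, 4}
graph_edges = {(1, 2), (2, 3), (3, 4), (4, 1)}   # a 4-cycle
cutset = {(1, 2), (3, 4)}                        # two opposite edges

splits_graph = not connected(graph_nodes, graph_edges - cutset)
subset_splits = not connected(graph_nodes, graph_edges - {(1, 2)})
```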

Calculation of Estimation Reliability: Example
x_m = [1, 2, 3]; x_m is also a cutset. P{S1} = P{S2} = P{S3} = 0.9.
Probability of estimating S1 = probability of S1 working, or of S2 and S3 working simultaneously:
R_S1 = P{S1} ∪ [P{S2} ∩ P{S3}]
     = P{S1} ∪ [P{S2} × P{S3}]
     = P{S1} + [P{S2} × P{S3}] − [P{S1} × P{S2} × P{S3}]
     = 0.9 + 0.81 − 0.9 × 0.81 = 0.981
When x_m = [2, 3], S1 becomes non-redundant and can be estimated only through its material balance relations; thus R_S1 = P{S2} × P{S3} = 0.81.
[Slide figure: process graph with streams S1-S6]
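The arithmetic of this example written out, using the slide's values and assuming independent sensor failures:

```python
# Reliability of estimating S1: its own sensor, or the reduced cutset
# {S2, S3}; all service reliabilities are 0.9.
p = {"S1": 0.9, "S2": 0.9, "S3": 0.9}

# P(A or B) = P(A) + P(B) - P(A)P(B) for independent events.
cutset_23 = p["S2"] * p["S3"]                    # both S2 and S3 must work
r_s1 = p["S1"] + cutset_23 - p["S1"] * cutset_23
# r_s1 = 0.9 + 0.81 - 0.9*0.81 = 0.981

# Non-redundant case x_m = [2, 3]: only the balance relation remains.
r_s1_nonredundant = p["S2"] * p["S3"]            # = 0.81
```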

Estimation Reliability for a Non-Redundant Variable
If the variable is measured, its estimation reliability is directly the service reliability of the sensor measuring it. If the variable is not measured, it is the product of the service reliabilities of the sensors in its reduced cutset.

Estimation Reliability for a Redundant Variable
Generate all the cutsets that contain the variable of interest 'i'; removing variable 'i' from them yields the reduced cutsets. For x_m = [1, 2, 3] and variable of interest S1, the reduced cutset is [2, 3]. Denote it Z_j(i), where 'i' is the variable of interest (here S1).
[Slide figure: process graph with streams S1-S6]

Calculation of Estimation Reliability: Example
x_m = [1, 2, 3, 4, 5]; [1, 2, 3] and [1, 4, 5] are two cutsets, so [2, 3] and [4, 5] are the reduced cutsets Z_1(1) and Z_2(1). P{S1} = P{S2} = P{S3} = P{S4} = P{S5} = 0.9.
Probability of estimating S1 = probability of S1 working, or of S2 and S3 working simultaneously, or of S4 and S5 working simultaneously:
R_S1 = P{S1} ∪ [P{S2} ∩ P{S3}] ∪ [P{S4} ∩ P{S5}]
     = P{S1} ∪ [P{S2} × P{S3}] ∪ [P{S4} × P{S5}]
     = [P{S1} + P{S2} × P{S3} − P{S1} × P{S2} × P{S3}] ∪ [P{S4} × P{S5}]
     = 0.981 + 0.81 − 0.981 × 0.81 = 0.99639
When x_m = [1, 3], S1 becomes non-redundant and can be estimated only by its direct measurement; thus R_S1 = P{S1} = 0.9.
[Slide figure: process graph with streams S1-S6 and the environment node]
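Because the three estimation paths in this example use disjoint sensor sets, the same number can be obtained from the complement product; a sketch with the slide's values:

```python
# Three independent ways of estimating S1: its own sensor, the reduced
# cutset {S2, S3}, and the reduced cutset {S4, S5}; all reliabilities 0.9.
# The union probability is one minus the product of failure probabilities.
p = 0.9
ways = [p, p * p, p * p]      # S1 alone; S2 and S3; S4 and S5

fail_all = 1.0
for w in ways:
    fail_all *= (1.0 - w)
r_s1 = 1.0 - fail_all         # = 1 - 0.1 * 0.19 * 0.19 = 0.99639
```

This shortcut is valid only because the sensor sets do not overlap; cutsets sharing sensors need the sum-of-disjoint-products treatment discussed later.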

Estimation Reliability for a Redundant Variable
For a measured variable, R_i = P{S_i} ∪ [∪_j ∩_(k∈Z_j(i)) P{S_k}]: the variable's own sensor working, or any reduced cutset working in full. For an unmeasured variable, only the reduced-cutset events contribute: R_i = ∪_j ∩_(k∈Z_j(i)) P{S_k}.

Computation of the Estimation Reliability of an Unmeasured Variable: Sum of Disjoint Products
It can be proved that the union over the reduced-cutset events can be rewritten as a sum of mutually disjoint products, which is the form used for computation.
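The slides use a sum of disjoint products; as an easy-to-verify alternative (exponential in the number of cutsets, so illustrative only), the same union probability can be computed by inclusion-exclusion over the sensor sets. The function name and structure are hypothetical:

```python
from itertools import combinations

def estimation_reliability(ways, p):
    """Probability that at least one 'way' (a set of sensor names that
    must all work) is fully functional.  Shared sensors are handled by
    inclusion-exclusion over the intersection events."""
    total = 0.0
    for k in range(1, len(ways) + 1):
        for combo in combinations(ways, k):
            sensors = set().union(*combo)   # intersection event: all work
            prob = 1.0
            for s in sensors:
                prob *= p[s]
            total += (-1) ** (k + 1) * prob
    return total

p = {"S1": 0.9, "S2": 0.9, "S3": 0.9}
# S1 measured directly, plus reduced cutset {S2, S3}:
r = estimation_reliability([{"S1"}, {"S2", "S3"}], p)   # 0.981
```

Overlapping paths come out right as well: for the ways {S1} and {S1, S2} the result collapses to P{S1}, as it should.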

Implementation in the Program
Input data:
1. Binary vector of measured streams at each node.
2. Service reliability of the sensors.
3. Variables of interest.
Steps to be performed:
1. Generate all the cutsets that contain the variable of interest.
2. Keep for the reliability calculation only those reduced cutsets whose streams are all measured; the other cutsets are of no use, as they do not make the variable of interest observable.
3. If no such cutset exists for an unmeasured variable, the node is infeasible.

Implementation in the Program: Non-Redundant Variable
Check whether the variables of interest are non-redundant. If so, there are three cases.
Case 1: The variable is measured; the estimation reliability is the sensor service reliability itself.
Case 2: The variable is not measured; the estimation reliability is the product of the service reliabilities of the sensors in the reduced cutset.
Case 3: The variable of interest is not observable; the node is infeasible, so go down the tree.

Implementation in the Program: Redundant Variable
Case 1: The variable is measured as well. Case 2: The variable is unmeasured. The computational method for the corresponding equations was discussed above.

Implementation in the Program: Comparison with Thresholds
Compare the obtained reliability with the specifications/requirements/thresholds. If the node is feasible, transfer control to the appropriate statement, which explores the sister nodes; if infeasible, go down the tree.

Residual Reliability
Let there be 'n' sensors when calculating the reliability. If one of the sensors has malfunctioned and its measurement has been eliminated, the estimation reliability that remains is the "residual reliability of order one". The sensor carrying a gross error (the malfunctioning sensor) can be identified, which tells us which measurement is eliminated.

Calculation of Residual Reliability
Input data:
1. Binary vector of measured streams at a node; say 'ns' streams are measured.
2. Sensor service reliability.
3. Reduced cutset information.

Calculation of Residual Reliability: Order One
Steps involved:
1. Choose the reduced cutsets from the information already available, eliminating those that contain the stream measured by the malfunctioning sensor.
2. Calculate the reliability in the same way as before.
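The two steps can be sketched as follows, assuming (as in the earlier examples) that the surviving estimation paths use disjoint sensor sets; all names are hypothetical:

```python
# Order-one residual reliability: drop every estimation path that uses
# the malfunctioning sensor, then recompute the reliability as before.
# A "way" is a set of sensors that must all work (the variable's own
# sensor, or a reduced cutset).
def residual_reliability(ways, p, failed=None):
    surviving = [w for w in ways if failed not in w]
    # Union of independent paths (disjoint sensor sets assumed here;
    # overlapping cutsets would need the sum of disjoint products).
    fail_all = 1.0
    for w in surviving:
        prob = 1.0
        for s in w:
            prob *= p[s]
        fail_all *= (1.0 - prob)
    return 1.0 - fail_all

p = {"S1": 0.9, "S2": 0.9, "S3": 0.9, "S4": 0.9, "S5": 0.9}
ways = [{"S1"}, {"S2", "S3"}, {"S4", "S5"}]
r_full = residual_reliability(ways, p)              # no failure: 0.99639
r_resid = residual_reliability(ways, p, failed="S2")  # {S2, S3} path dropped
```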

Example: Madron and Veverka (1992)

Instrumentation Details (Madron and Veverka, 1992)
[Table: flow, sensor cost, and sensor precision (%) for each stream; the numerical values were not preserved in the transcript.]

Software Accuracy when all streams are measured
[Table: software accuracy per stream; values not preserved in the transcript.]


Requested Software Accuracy
The requested software accuracies were:

Stream   Threshold Accuracy
10       10%
16       8%
18       10%
19       15%
24       19%

Three gross errors were allowed, and no feasible nodes were found. The computed reliability values were 90% for all streams when two gross errors were allowed.

Solution of Madron and Veverka (1992)
(n/a: value not preserved in the transcript)

Cost  Streams Measured                                                              Accuracy of requested streams (%)
185   3, 4, 5, 6, 7, 8, 9, 10, 15, 16, 19, 20, 23, 24                               S10=9.00, S16=5.46, S18=7.53, S19=13.85, S24=n/a
n/a   n/a, 4, 5, 6, 7, 8, 9, 10, 15, 16, 17, 19, 20, 23, 24                         S10=9.00, S16=5.46, S18=7.53, S19=13.85, S24=n/a
n/a   n/a, 4, 5, 6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20, 23, 24                     S10=7.63, S16=5.48, S18=5.49, S19=12.91, S24=n/a
n/a   n/a, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 17, 19, 20, 23, 24 (node a)        S10=9.00, S16=5.46, S18=7.53, S19=13.85, S24=13.85
227   3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 17, 19, 20, 23, 24 (node b)          S10=9.00, S16=5.46, S18=7.53, S19=13.85, S24=n/a
n/a   n/a, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 17, 19, 20, 23, 24                 S10=9.00, S16=5.46, S18=7.53, S19=13.85, S24=n/a
n/a   n/a, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 23, 24         S10=7.64, S16=5.48, S18=5.49, S19=12.91, S24=n/a
n/a   n/a, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 19, 20, 23, 24       S10=8.91, S16=5.48, S18=7.00, S19=13.20, S24=n/a
n/a   n/a, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 23, 24   S10=7.62, S16=5.48, S18=5.49, S19=12.20, S24=12.20

Thank You