Computational Sensing = Modeling + Optimization
CENS seminar, Jan 28, 2005
Miodrag Potkonjak
Key Contributors: Bradley Bennet, Alberto Cerpa, Jessica Feng, Farinaz Koushanfar, Sasa Slijepcevic, Jennifer L. Wong

Goals
– Why Modeling?
– Why Non-parametric Statistical Modeling?
– Beyond Non-parametric Statistical Modeling?
– Do We Really Need Models?
– Tricks for Fame and Fun
– Applications: Calibration

Why Modeling: No Objective Function (OF), No Results
Location discovery: error under different objective functions:
– L1: 1.272 m
– L2: 5.737 m
– L∞: 8.365 m
– Gaussian: 0.928 m
– Statistical error model: 1.662×10⁻³ m
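
As an illustration of why the choice of objective function matters (a hedged sketch, not the talk's code: the beacon positions, ranges, and function names below are all made up), the same range measurements fit under L1, L2, and L∞ objectives generally give different location estimates:

```python
# Hypothetical sketch: the choice of error norm (the objective function)
# changes the estimated location. Beacon positions and ranges are made up.
import numpy as np
from scipy.optimize import minimize

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions
ranges = np.array([7.2, 6.9, 7.5])                          # measured distances

def residuals(p):
    return np.linalg.norm(beacons - p, axis=1) - ranges

# Three different objective functions over the same residuals.
objectives = {
    "L1":   lambda p: np.sum(np.abs(residuals(p))),
    "L2":   lambda p: np.sum(residuals(p) ** 2),
    "Linf": lambda p: np.max(np.abs(residuals(p))),
}

for name, f in objectives.items():
    est = minimize(f, x0=np.array([5.0, 5.0]), method="Nelder-Mead").x
    print(f"{name}: estimate = {est.round(3)}")
```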

Why Modeling: What to Optimize? Packet size

Why Modeling: What to Optimize? Receiver/Transmitter quality

Why Modeling: Localized vs. Centralized. Reception rate predictability.

Why Modeling: Optimization Mechanism. Atomic localization: one unknown node, two unknown nodes.

Why Modeling: What Paradigm to Use? Maximum likelihood: correlation of distance measurements.
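
To make the maximum-likelihood paradigm concrete (a standard derivation, not taken from the talk): if each measured distance d̂_i equals the true distance d_i(p) to a beacon plus independent Gaussian noise of variance σ_i², the ML location estimate reduces to weighted least squares:

$$\hat{p} \;=\; \arg\max_{p} \prod_i \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(\hat{d}_i - d_i(p))^2}{2\sigma_i^2}\right) \;=\; \arg\min_{p} \sum_i \frac{(\hat{d}_i - d_i(p))^2}{\sigma_i^2}.$$

The slide's note on correlated distance measurements is exactly the caveat: when errors are correlated, the product (independence) form above no longer holds, and the objective changes.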

Why Modeling: Protocol Design Lagged autocorrelation

Why Modeling: Executive Summary
– Objective function and constraints (what to optimize?): consistency
– Problem identification and formulation: variability
– Localized vs. centralized: time variability
– Optimization mechanism: topology of the solution space (monotonicity, convexity)
– Optimization paradigm: correlations
– Design of protocols: high-impact features first

How to Model?
– Most likely value: regression
– Probability of a given value of the target variable for a given value of the predictor variable
– Validation and evaluation
– Parametric and non-parametric
– Exploratory and confirmatory

Model Construction: Samples of Techniques
– Independent of Distance (ID)
– Normalized Distance (ND)
– Kernel Smoothing (KS)
– Recursive Linear Regression (LR)
– Data Partitioning (DP)

Independent of Distance (ID)

Normalized Distance (ND)

Kernel Smoothing (KS)
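
A minimal Nadaraya-Watson kernel smoother, one standard instance of kernel smoothing; the talk's exact variant is not recoverable from the transcript, and kernel_smooth, the bandwidth h, and the data below are all illustrative:

```python
# Minimal Nadaraya-Watson kernel smoother (one standard form of kernel
# smoothing); the bandwidth h and the data are illustrative only.
import numpy as np

def kernel_smooth(x_train, y_train, x_query, h=1.0):
    """Predict y at x_query as a Gaussian-weighted average of y_train."""
    # Pairwise squared distances between query and training points.
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * h ** 2))           # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)       # weighted average per query

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, x.size)     # noisy observations
print(kernel_smooth(x, y, np.array([2.5, 5.0, 7.5]), h=0.5).round(3))
```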

Recursive Linear Regression (LR)
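
The transcript does not define "recursive linear regression"; one plausible reading is a recursive least-squares (RLS) update that folds in one sample at a time. A sketch under that assumption (the class and variable names are hypothetical):

```python
# Recursive least-squares (RLS) update: one plausible reading of
# "recursive linear regression"; the transcript does not define the term.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, dim):
        self.theta = np.zeros(dim)          # current coefficient estimate
        self.P = np.eye(dim) * 1e3          # inverse "information" matrix

    def update(self, x, y):
        """Fold one (x, y) sample into the running linear fit."""
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.theta += gain * (y - x @ self.theta)
        self.P -= np.outer(gain, Px)

rls = RecursiveLeastSquares(dim=2)
for xv in np.linspace(0, 5, 50):
    rls.update(np.array([1.0, xv]), 2.0 + 3.0 * xv)  # fit y = 2 + 3x
print(rls.theta.round(3))                            # converges to ~[2. 3.]
```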

Data Partitioning (DP)

Statistical Evaluation of Models

Statistical Evaluation of OFs

Location Discovery: Experimental Results

Location Discovery: Performance Comparison
– ROBUST: D. Niculescu and B. Nath. Ad Hoc Positioning System (APS). GLOBECOM, 2001.
– N-HOP: A. Savvides, C. Han, M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. MOBICOM, 2001.
– APS: C. Savarese, K. Langendoen and J. Rabaey. Robust Positioning Algorithms for Distributed Ad-Hoc Wireless Sensor Networks. WSNA, 2002.
– K. Langendoen and N. Reijers. Distributed Localization in Wireless Sensor Networks: A Quantitative Comparison. Tech. Rep. PDS, Technical University of Delft.

Combinatorial Isotonic Regression (CIR)
Statistical models using combinatorics; addresses the hidden covariate problem.
Univariate CIR problem formulation:
– Given data (x_i, y_i, ε_i), i = 1, ..., K
– Given an error measure ℓ_p and x_1 < x_2 < ... < x_K
– The ℓ_p isotonic regression is the set (x_i, ŷ_i), i = 1, ..., K, such that:
– Objective function: minimize ℓ_p(x_i, ŷ_i, ε_i)
– Constraint: ŷ_1 ≤ ŷ_2 ≤ ... ≤ ŷ_K

Univariate CIR Approach
From a histogram of the data, build the error matrix E, with e_ij = ℓ_p(x_i, ŷ_j) = Σ_k |ŷ_j − y_k| · h_ik, where h_ik is the histogram count of observed value y_k at x_i.
[Figure: histogram of the data and the resulting error matrix]

Univariate CIR Approach
From the histogram, build the error matrix E with e_ij = ℓ_p(x_i, ŷ_j); then build the cumulative error matrix CE using the recurrence CE(x_i, y_j) = E(x_i, y_j) + min_{k ≤ j} CE(x_{i−1}, y_k).
[Figure: error matrix and cumulative error matrix]
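
A sketch of this dynamic program under the conventions above (a hypothetical reconstruction from the recurrence, not the authors' code; isotonic_dp and the example matrix are made up). Note that for costs to accumulate, the inner minimum must range over the cumulative matrix CE, as written here:

```python
# Dynamic program for univariate isotonic regression over an error matrix,
# reconstructed from CE(x_i, y_j) = E(x_i, y_j) + min_{k<=j} CE(x_{i-1}, y_k).
# E[i][j] = cost of predicting candidate value y_j at point x_i (e.g., the
# histogram-weighted error of the previous slide). Illustrative only.
import numpy as np

def isotonic_dp(E):
    n, m = E.shape
    CE = np.zeros_like(E, dtype=float)
    arg = np.zeros((n, m), dtype=int)          # backpointers for recovery
    CE[0] = E[0]
    for i in range(1, n):
        best, best_k = np.inf, 0
        for j in range(m):
            if CE[i - 1, j] < best:            # running min over k <= j
                best, best_k = CE[i - 1, j], j
            CE[i, j] = E[i, j] + best
            arg[i, j] = best_k
    # Backtrack the monotone assignment with minimal total error.
    path = [int(np.argmin(CE[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(arg[i, path[-1]]))
    return CE[-1].min(), path[::-1]            # total error, y-index per x_i

E = np.array([[1., 4., 9.], [4., 1., 4.], [9., 4., 1.], [4., 1., 4.]])
print(isotonic_dp(E))   # non-decreasing index sequence, e.g. [0, 1, 1, 1]
```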

Univariate CIR Approach
From the histogram, build the error matrix E; build the cumulative error matrix CE; then map the problem to a graph: the formulation becomes combinatorial.
[Figure: cumulative error matrix and the corresponding error graph]

Multivariate CIR Approach: ILP
Given a response variable Y and two explanatory variables X1, X2, build a 3-D error matrix E.
Objective function: minimize the total error of the selected predictions, using an indicator that is 1 if ŷ_k is the predicted value for X1 = x1 and X2 = x2, and 0 otherwise.
Constraints:
– C1: exactly one ŷ for each pair (X1 = x1, X2 = x2)
– C2: ordering (isotonicity) constraint along each explanatory variable
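
One way to write this ILP down, sketched with the PuLP modeler under stated assumptions: the binary variable s[i1, i2, k] is the indicator described above, E is a made-up error tensor, and encoding the ordering constraint C2 through cumulative sums is my choice, not necessarily the authors':

```python
# Hypothetical ILP sketch of multivariate CIR using the PuLP modeler:
# binary s[i1, i2, k] selects predicted value y_k at grid cell (x1_i1, x2_i2).
import numpy as np
import pulp

n1, n2, m = 3, 3, 4
rng = np.random.default_rng(1)
E = rng.uniform(0, 10, (n1, n2, m))            # error of value k at (i1, i2)

prob = pulp.LpProblem("multivariate_CIR", pulp.LpMinimize)
idx = [(i1, i2, k) for i1 in range(n1) for i2 in range(n2) for k in range(m)]
s = pulp.LpVariable.dicts("s", idx, cat="Binary")

# Objective: total error of the selected predictions.
prob += pulp.lpSum(E[i1, i2, k] * s[(i1, i2, k)] for (i1, i2, k) in idx)

for i1 in range(n1):
    for i2 in range(n2):
        # C1: exactly one predicted value per grid cell.
        prob += pulp.lpSum(s[(i1, i2, k)] for k in range(m)) == 1

# C2: non-decreasing value index along each axis, via cumulative sums:
# sum_{k<=K} s at a larger cell must not exceed the same sum at a smaller cell.
for K in range(m):
    for i1 in range(n1):
        for i2 in range(n2 - 1):
            prob += (pulp.lpSum(s[(i1, i2 + 1, k)] for k in range(K + 1))
                     <= pulp.lpSum(s[(i1, i2, k)] for k in range(K + 1)))
    for i2 in range(n2):
        for i1 in range(n1 - 1):
            prob += (pulp.lpSum(s[(i1 + 1, i2, k)] for k in range(K + 1))
                     <= pulp.lpSum(s[(i1, i2, k)] for k in range(K + 1)))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
sel = {(i1, i2): k for (i1, i2, k) in idx if pulp.value(s[(i1, i2, k)]) > 0.5}
print(sel)   # selected value index per cell, monotone along both axes
```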

CIR Prediction Error on Temperature Sensors at Intel Berkeley
Prediction error over all nodes; the number of model parameters is limited using the AIC criterion.
[Figure: prediction error across nodes]
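
For reference (standard definition, not from the transcript), the AIC criterion trades goodness of fit against model size:

$$\mathrm{AIC} = 2k - 2\ln\hat{L},$$

where k is the number of model parameters and L̂ is the maximized likelihood; among candidate models, the one with the smallest AIC is preferred.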

Combinatorial Regression: Flavors
– Minimal and maximal slope
– Number of segments
– Convexity
– Unimodality
– Local monotonicity
– Symmetry: y = f(x), x = g(y) ⇒ x = g(f(x))
– Transitivity: y = f(x), z = g(y), z = h(x) ⇒ h(x) = g(f(x))

Combinatorial Regression: Symmetry

Time-Dependent Models

Do We Really Need Models?

Modeling Without Modeling: Consistency
Even without an explicit model, monotonicity can be checked directly: x_1 > x_2 ⇒ f(x_1) > f(x_2).
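
One concrete reading of "modeling without modeling" (a hypothetical sketch; monotonicity_violations and the data are made up): score recorded samples for consistency by counting pairs that violate the assumed monotonicity, with no fitted model at all:

```python
# Hypothetical consistency check without an explicit model: count pairs
# that violate the monotonicity assumption x1 > x2  =>  f(x1) > f(x2).
import numpy as np

def monotonicity_violations(x, y):
    order = np.argsort(x)
    ys = np.asarray(y)[order]
    # Pairs (i, j) with i < j have x_i < x_j; violation if y_i >= y_j.
    return sum(int(ys[i] >= ys[j])
               for i in range(len(ys)) for j in range(i + 1, len(ys)))

x = [1.0, 2.0, 3.0, 4.0]
y = [0.2, 0.9, 0.7, 1.3]    # one dip -> one inconsistent pair
print(monotonicity_violations(x, y))
```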

On-line Model Construction

Statistics for Sensor Networks: Executive Summary
– Large-scale, time-dependent modeling
– Hidden covariates: monotonicity, convexity, ...
– Go to discrete and graph domains
– Interaction between data collection and modeling
– Properties of networks
– Simulators

Tricks: Modeling and Sensor Fusion
– Hide nodes
– Split nodes
– Weight nodes
– Additional dimensions
– Additional sources

Hiding Beacons

Splitting Nodes

Modeling Networks for Fame & Fun

Perfect Neighbors

Applications
– Calibration
– Location discovery
– Data integrity
– Sensor network compression
– Sensor network management
– Low-power wireless ad-hoc networks: lossy links

Calibration: choosing the correction by error criterion
– Minimal maximal error: midrange
– Minimal average (L1) error: median
– Minimal L2 error: average (mean)
– Most likely value
[Figure: error PDF with LL and ML values marked]
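
This estimator/objective correspondence is a standard fact and easy to verify numerically (the data below are made up): the midrange minimizes the maximal error, the median the average absolute error, and the mean the squared error:

```python
# Numeric check of the estimator/objective correspondence on this slide:
# midrange <-> minimal maximal error, median <-> minimal average (L1) error,
# mean <-> minimal L2 error. Data are made up.
import numpy as np

e = np.array([-2.1, -0.4, 0.0, 0.3, 0.7, 1.9, 5.0])   # calibration errors
grid = np.linspace(e.min(), e.max(), 100001)          # candidate corrections

for name, cost, closed_form in [
    ("max error", lambda c: np.max(np.abs(e - c)),  (e.min() + e.max()) / 2),
    ("L1 error",  lambda c: np.mean(np.abs(e - c)), np.median(e)),
    ("L2 error",  lambda c: np.mean((e - c) ** 2),  np.mean(e)),
]:
    best = grid[np.argmin([cost(c) for c in grid])]
    print(f"{name}: grid minimizer {best:.3f} vs closed form {closed_form:.3f}")
```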

Calibration Model for Light

Interval of Confidence
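
The transcript does not say how the confidence intervals were constructed; a common non-parametric choice, in keeping with the talk's theme, is the bootstrap. A hypothetical sketch:

```python
# Hypothetical bootstrap confidence interval for a calibration correction
# (the transcript does not specify how the intervals were computed).
import numpy as np

rng = np.random.default_rng(7)
errors = rng.normal(0.5, 1.0, 80)                 # made-up calibration errors

boot = [np.median(rng.choice(errors, errors.size, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for the median correction: [{lo:.3f}, {hi:.3f}]")
```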

Summary: Recipe for Sensor Network Research
– Collect data
– Model data
– Understand data
– ...