Towards a Universal Count of Resources Used in a General Measurement
Saikat Ghosh, Department of Physics, IIT Kanpur

How precisely can one measure a single parameter? Examples: spectroscopy, time standards; detecting new physics: LIGO, variations in fundamental constants.

Measurement Models. [Diagram: a probe interacts with the measurement system; an estimator maps the measurement outcomes to an estimate of the parameter, and the error is the deviation of this estimate from the true value.] V. Giovannetti, S. Lloyd and L. Maccone, Nature Photonics 5, 222 (2011).

Classical Measurement and the Standard Quantum Limit. With N probes (N independent repetitions), the error scales down as 1/\sqrt{N}: the standard quantum limit (SQL). Resource used: N. V. Giovannetti, S. Lloyd and L. Maccone, Nature Photonics 5, 222 (2011).
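A quick sketch of this scaling, assuming N independent, identically distributed probe outcomes x_i with standard deviation \sigma and the sample mean as the estimator:
    \delta X = \sqrt{\mathrm{Var}\left(\frac{1}{N}\sum_{i=1}^{N} x_i\right)} = \frac{\sigma}{\sqrt{N}} \propto \frac{1}{\sqrt{N}}.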

Quantum Resources and the Heisenberg Limit. For N quantum-mechanically entangled probes, the error scales as 1/N: the Heisenberg limit. Resource used: N. Repeating the measurement m times yields an error scaling as 1/(N\sqrt{m}). Resource used: N*m. V. Giovannetti, S. Lloyd and L. Maccone, Nature Photonics 5, 222 (2011).

An example: Phase Measurement. Measure a phase \phi with N entangled probes: the error per run is \delta\phi \sim 1/N, and m repetitions yield \delta\phi \sim 1/(N\sqrt{m}). Resource used: N*m. V. Giovannetti, S. Lloyd and L. Maccone, Nature Photonics 5, 222 (2011).
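One standard illustration, assuming a NOON-state probe (the exact state on the slide is not reproduced here): the phase is imprinted N times faster on the entangled component,
    \frac{1}{\sqrt{2}}\left(|N,0\rangle + |0,N\rangle\right) \;\longrightarrow\; \frac{1}{\sqrt{2}}\left(|N,0\rangle + e^{iN\phi}|0,N\rangle\right),
so the interference signal oscillates as \cos(N\phi) and the phase error scales as \delta\phi \sim 1/N.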

Scaling beyond the Heisenberg Limit and Resource Count. However, counting resources becomes non-trivial, e.g. the number of times the probe interacts with the system: V. Giovannetti et al., PRL 108 (2012); M. Zwierz et al., PRA 85 (2012); ibid., PRL 105 (2010). For non-linear interactions, scaling beyond the HL is possible: e.g. S. Boixo et al., PRL 98 (2007); M. Napolitano et al., Nature 471, 486 (2011).

What can we say about resources from the output statistics?

Fisher Information Fisher information quantifies the amount of information carried by the variable X about the unknown parameter.
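For reference, the standard definition for a discrete outcome X with distribution p(x|\theta) is
    F(\theta) = \mathbb{E}\left[\left(\frac{\partial}{\partial\theta}\ln p(X|\theta)\right)^{2}\right] = \sum_{x}\frac{1}{p(x|\theta)}\left(\frac{\partial p(x|\theta)}{\partial\theta}\right)^{2}.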

Cramér-Rao Bound. For an unbiased estimator of the unknown parameter, the Cramér-Rao bound sets a lower bound on the error in estimating the parameter. Note: the Cramér-Rao bound says nothing about the probability distribution that attains it, or even whether such a distribution exists.
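In its standard single-parameter form, for m independent repetitions of the measurement,
    \Delta\theta \;\geq\; \frac{1}{\sqrt{m\,F(\theta)}}.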

Outline Can one explicitly construct a probability distribution that minimizes the error?

Definition of the problem. For a measurement model with M+1 discrete outcomes x_0, x_1, \dots, x_M and a probability mass function \{p_k\} with the constraints p_k \geq 0, \sum_k p_k = 1 and \sum_k p_k x_k = \theta (the expected value equals the true value), is there a \{p_k\} that minimizes the root-mean-square error \Delta = \sqrt{\sum_k p_k (x_k - \theta)^2}?

Minimum-Error Distribution (MED) There is a distribution that indeed minimizes the error and can be constructed in the following way:

Minimum-Error Distribution (MED). The probability distribution: J. P. Perdew et al., PRL 49, 1691 (1982); D. P. Bertsekas, Nonlinear Programming (1999).
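A minimal sketch of one such construction, assuming the MED puts weight only on the two outcomes bracketing the true value \theta (the explicit expression on the slide is not reproduced, so treat the form below as an assumption consistent with the constraints and the scalings quoted later): for x_k \leq \theta \leq x_{k+1},
    p_k = \frac{x_{k+1} - \theta}{x_{k+1} - x_k}, \qquad p_{k+1} = \frac{\theta - x_k}{x_{k+1} - x_k}, \qquad p_j = 0 \ \text{otherwise},
which is unbiased by construction and has root-mean-square error \Delta_{\mathrm{MED}}(\theta) = \sqrt{(\theta - x_k)(x_{k+1} - \theta)}.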

Proof: Minimum-Error Distribution (MED). The proof follows from the Cauchy-Schwarz inequality:

Proof: Minimum-Error Distribution (MED). The proof follows from the Cauchy-Schwarz inequality: any other distribution with different weights necessarily has a larger error, so the distribution is unique.

Minimum-Error Distribution (MED). What is the bound on the error of the MED? Any experimental model with M+1 discrete outcomes that achieves the least error must follow the MED, independent of the strategy.

What is the bound on the error? Without loss of generality, one can scale the outcome values to lie between -1 and +1. This implies the expected (true) value also ranges from -1 to +1.

Estimation of the bound on the error. Within an interval between adjacent outcomes, the error is maximal at the midpoint:
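Under the two-point form of the MED sketched above, for x_k \leq \theta \leq x_{k+1},
    \Delta_{\mathrm{MED}}(\theta) = \sqrt{(\theta - x_k)(x_{k+1} - \theta)} \;\leq\; \frac{x_{k+1} - x_k}{2},
with equality when \theta sits at the midpoint of the interval.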

Estimation of the bound on the error. To bound the MED, we need the worst-case error over all values of the true parameter. For a single interval, this worst case is half the interval width.

Estimation of the bound on the error of MED. For the entire range, the maximum interval width is at least the average width, \max_k (x_{k+1} - x_k) \geq \frac{1}{M}\sum_k (x_{k+1} - x_k) = \frac{2}{M}, and the equality holds when all the x_k are equally spaced. The worst-case error of the MED is therefore at least 1/M.

Can one understand this scaling better?

A graphical representation: visualizing the MED

[Figures: the MED and its error plotted for M = 1, 2, 3, 4; the region below the MED curves is marked "Not allowed"; zero error corresponds to the true value coinciding with one of the discrete outcome values.]

A class of measurements (with M+1 outputs). M repetitions of a two-outcome measurement give 2^M possible records, of which M+1 values are distinct. The values are equi-spaced, and we take the error in the same root-mean-square form as above.

A class of measurements. For a specific M, what is the minimum error?

A class of measurements. For a specific M, what is the minimum error? For a given M, there is a curve such that the MED for that M is tangent to it (curves shown for M = 2, 3, 4).

A class of measurements

[Figure: the MED curves for M = 2, 3, 4 together with the extremal curve to which they are tangent.]

A class of (classical) measurements. N repetitions yield N+1 distinct values, with the error scaling as the SQL, 1/\sqrt{N}. The extremal curve corresponds to this classical case.
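A minimal worked instance, assuming each repetition gives an outcome of \pm 1 with mean \theta: the sample mean over N repetitions takes the N+1 equi-spaced values -1, -1 + 2/N, \dots, +1, and its error is
    \Delta_{\mathrm{cl}}(\theta) = \sqrt{\frac{1 - \theta^{2}}{N}} \;\leq\; \frac{1}{\sqrt{N}},
i.e. the SQL scaling with M = N.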

Ansatz: N(resource) = M

Quantum mechanical case: for a state of N spins, the relevant (symmetric) subspace of the Hilbert space is (N+1)-dimensional, with N+1 distinct measurement values. Classical case: for N repetitions of a measurement with two outcomes, there are again N+1 distinct outcomes. A measurement with P outcomes can always be constructed from P-1 repetitions of the two-outcome box.

Ansatz: N (resource) = M. Classical case: the error scales as 1/\sqrt{M}. Minimum-Error Distribution: the error scales as 1/M. Heisenberg limit: the error scales as 1/N. M. Tsang, PRL 108, 230401 (2012); M. Zwierz et al., PRA 85, 042112 (2012); V. Giovannetti et al., PRL 108, 210404 (2012) and PRL 108 (2012).

Universal resource count. A functional definition of the resource used in a measurement: for a classical or quantum (linear or non-linear) model, the resource used is equal to the number of possible outcomes the model predicts, minus one.

Can one realize the Minimum Error Distribution (MED)? To achieve this, some amount of prior knowledge is required about the expected or true value of the parameter. How well does one need to know the value?

Can one realize the Minimum-Error Distribution (MED)? The MED is achieved for equally spaced outputs, with each bin of size 2/M. One therefore needs prior knowledge of the parameter with a precision of order 1/M; but the precision delivered by the measurement itself is of the same order, so the measurement is not going to add any further value.

Can one realize the Minimum-Error Distribution (MED)? The MED is not attainable!

Estimating an arbitrary phase using entangled states (NOON states). With a proper choice of POVM, the output depends on the phase only through N\phi, so the possible estimates are \hat{\phi} + 2\pi k/N, k = 0, \dots, N-1. One needs to adopt a strategy to choose the right value. M. J. W. Hall et al., PRA 85, 041802(R) (2012).
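A minimal sketch of the ambiguity, assuming the standard NOON-state interference signal: the outcome probabilities depend on \phi only as
    P(\pm) = \frac{1 \pm \cos(N\phi)}{2},
so any estimate \hat{\phi}_0 is degenerate with the N candidates \hat{\phi}_k = \hat{\phi}_0 + 2\pi k/N, k = 0, \dots, N-1.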

Can there be a strategy to achieve the limit? To identify the phase with an error scaling as 1/N, the distribution would have to be the MED, since the MED is unique; but the MED is not attainable. So no such strategy exists, and NOON-like states cannot hit the 1/N limit!

Conclusion. For any experimental model with an M+1 discrete set of outcomes, the probability mass function with the least error is explicitly constructed. The error corresponding to this distribution scales as 1/M. A simple ansatz is proposed: resources = number of discrete outcomes. It is found that the distribution (and hence the Heisenberg limit) is not achievable in practice. H. M. Bharath and Saikat Ghosh (in review), arXiv:

Conclusion. [Diagram: classical correlations and the regime in between.] H. M. Bharath and Saikat Ghosh (in review), arXiv:

At IIT-Kanpur: Quantum and Nano-photonics Group (under construction).

At IIT-Kanpur. Students and post-docs:
PhD students: Rajan Singh, Arif Warsi, Anwesha Dutta
Post-doctoral fellow: Dr. Niharika Singh
Undergraduates: H. M. Bharath (now at Georgia Tech), Gaurav Gautam, Prateek Gujrati
Quantum Optics and Atomic Physics: Prof. Anand Jha, Prof. Harshwardhan Wanare, Prof. V. Subhramaniyam
Visitors and students are most welcome to spend some time with us!