
Tony Short, University of Cambridge
(with Sabri Al-Safi – PRA 84, 042323 (2011))

Overview

Comparing the information-theoretic capabilities of quantum theory with those of other possible theories can help us:
- Understand why nature is quantum
- Hone our intuitions about quantum applications

Surprisingly, despite entanglement, quantum theory is no better than classical theory for some non-local tasks... Why?
- Non-local computation [Linden et al., 2007]
- Guess your neighbour's input [Almeida et al., 2010]
- Information causality [Pawlowski et al., 2009]

The CHSH game

What correlations P(a,b|x,y) are achievable given certain resources? What is the maximum success probability p in this game?

Setup: Alice receives a random x ∈ {0,1} and outputs a ∈ {0,1}; Bob receives a random y ∈ {0,1} and outputs b ∈ {0,1}. They may use shared resources but cannot communicate, and they win when a ⊕ b = x·y.
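The classical bound for this game can be checked by brute force over deterministic strategies – a minimal sketch in Python (shared randomness is a convex mixture of deterministic strategies, so it cannot raise the optimum):

```python
from itertools import product

# Exhaustively check all deterministic local strategies for the CHSH game:
# Alice's output a depends only on x, Bob's b only on y; they win iff a ^ b == x & y.
best = 0.0
for fa in product([0, 1], repeat=2):        # fa[x] = Alice's answer to input x
    for fb in product([0, 1], repeat=2):    # fb[y] = Bob's answer to input y
        wins = sum((fa[x] ^ fb[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)  # 0.75 -- the classical bound p_C = 3/4
```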

Local (classical): P(a,b|x,y) = Σ_λ q_λ P_λ(a|x) P_λ(b|y), with p_C ≤ 3/4 (Bell's theorem – CHSH inequality)

Quantum: P(a,b|x,y) = Tr(P_x^a ⊗ P_y^b ρ), with p_Q ≤ (2+√2)/4 (Tsirelson's bound)

General (box-world): only the non-signalling conditions are imposed:
- Σ_a P(a,b|x,y) independent of x
- Σ_b P(a,b|x,y) independent of y
with p_G ≤ 1 (PR-boxes)
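Tsirelson's bound is attained by a standard qubit strategy; the following sketch (assuming the usual optimal state and measurements, which are not spelled out on the slide) evaluates the success probability numerically:

```python
import numpy as np

# Standard optimal qubit strategy for CHSH: shared state |Phi+> = (|00>+|11>)/sqrt(2);
# Alice measures Z or X, Bob measures (Z+X)/sqrt(2) or (Z-X)/sqrt(2).
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+>
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

p = 0.0
for x in (0, 1):
    for y in (0, 1):
        E = phi @ np.kron(A[x], B[y]) @ phi        # correlator <A_x B_y>
        # win when a XOR b == x*y; P(a ^ b = 0) = (1 + E)/2
        p += 0.25 * (1 + (-1) ** (x * y) * E) / 2
print(p, (2 + np.sqrt(2)) / 4)  # both approx 0.8536
```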

PR-box correlations [Popescu, Rohrlich (1994)]

A PR-box takes inputs x, y ∈ {0,1} and produces locally uniform outputs a, b ∈ {0,1} satisfying a ⊕ b = x·y. These are the optimal non-signalling correlations for the CHSH game (p = 1).

Problem: Is there a good, physical intuition behind p_Q ≤ (2 + √2)/4?
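A PR-box is straightforward to simulate as an abstract resource; a minimal sketch (the `pr_box` helper is illustrative, not from the slides):

```python
import random

def pr_box(x, y):
    """Idealized PR-box: locally uniform outputs with a XOR b = x AND y."""
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

# The PR-box wins the CHSH game with certainty on every input pair:
wins = sum((a ^ b) == (x & y)
           for x in (0, 1) for y in (0, 1)
           for a, b in [pr_box(x, y)])
print(wins)  # 4 out of 4 input pairs -- p = 1
```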

Information Causality

Information causality relates to a particular communication task [Pawlowski et al., Nature 461, 1101 (2009)]:

Alice holds N random bits x_1...x_N and may send Bob m classical bits. Bob receives a random y ∈ {1,...,N} and outputs b_y, his best guess of x_y.

Task: maximize J = Σ_y I(x_y : b_y)

Here I(x:y) is the classical mutual information. The Information Causality principle states that J ≤ m.

Physical intuition: the total information that Bob can extract about Alice's N bits must be no greater than the m bits Alice sends him. (Note, however, that Bob only guesses 1 bit in each game.)

The bound on J is easily saturated: Alice simply sends Bob the first m of her bits.
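The saturating strategy can be checked numerically; a sketch with hypothetical helper names, computing J for "send the first m bits":

```python
from math import log2

def H(p):
    """Binary Shannon entropy."""
    return 0.0 if p in (0, 1) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_info_binary(p_flip):
    # I(x:b) for a uniform bit x sent through a binary symmetric channel
    return 1 - H(p_flip)

# Strategy: Alice sends her first m bits; Bob outputs the received bit when
# y <= m, and a fixed guess (correct with probability 1/2) otherwise.
N, m = 4, 2
J = sum(mutual_info_binary(0.0) if y < m else mutual_info_binary(0.5)
        for y in range(N))
print(J)  # 2.0 -- the bound J <= m is saturated
```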

Information Causality is obeyed in quantum theory and classical theory, and in any theory in which a 'good' measure of mutual information can be defined (see later).

Information Causality can be violated by general non-signalling correlations. E.g. one can achieve J = N ≫ m = 1 using PR-boxes.

Information Causality can be violated using any correlations which violate Tsirelson's bound for the CHSH game (when N = 2^n, m = 1).
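For N = 2 the violation with a single PR-box follows the standard protocol (due to van Dam); a sketch with illustrative helper names:

```python
import random

def pr_box(x, y):
    # Idealized PR-box: locally uniform outputs with a XOR b = x AND y
    a = random.randint(0, 1)
    return a, a ^ (x & y)

def guess_with_pr_box(x1, x2, y):
    """Protocol for N=2, m=1: Bob recovers x_y perfectly from one bit."""
    a, b = pr_box(x1 ^ x2, y)   # Alice inputs x1 XOR x2; Bob inputs y (0 or 1)
    msg = x1 ^ a                # the single classical bit Alice sends
    return msg ^ b              # equals x1 if y = 0, x2 if y = 1

ok = all(guess_with_pr_box(x1, x2, y) == (x1, x2)[y]
         for x1 in (0, 1) for x2 in (0, 1) for y in (0, 1))
print(ok)  # True -- J = N = 2 with only m = 1 bit of communication
```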

Hence Information Causality ⇒ Tsirelson's bound. Furthermore, it can even generate part of the curved surface of quantum correlations [Allcock, Brunner, Pawlowski, Scarani 2009].

But why are this particular task and figure of merit J so important?
- What about the probability of success in the game?
- Given that J is a strange non-linear function of the probabilities, how does it yield nice bounds on quantum correlations?
- Is mutual information central to quantum theory?

I.C. – A probabilistic perspective

If we use the probability of success in the Information Causality game (rather than J), quantum theory can do better than classical. The setup is as before: Alice holds N random bits x_1...x_N and sends m classical bits; Bob, given a random y ∈ {1,...,N}, outputs his best guess b_y of x_y. Task: maximize the success probability P(b_y = x_y).

When m = 1, N = 2, the maximum success probabilities are the same as for the CHSH game. The m = 1 case for general N has been studied as 'random access coding' [Ambainis et al. 2008, Pawlowski & Zukowski 2010], where the quantum upper bound on the success probability is known to be tight for N = 2^k 3^j.

Furthermore, J = Σ_y I(x_y : b_y) and the success probability are not monotonically related. E.g. for N = 2, m = 1:

Strategy 1: Alice sends x_1 with a bit of noise: J = 1 − ε, p = 3/4 − ε′
Strategy 2: Alice sends either x_1 or x_2 perfectly, based on a random bit shared with Bob: J ≈ 0.38, p = 3/4

What is the relation between bounds on J and on the success probability, and how do these relate to Tsirelson's bound?
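The two strategies' values of J and p can be recomputed from the binary entropy; a sketch (the noise level ε = 0.01 is an arbitrary choice):

```python
from math import log2

def H(p):
    """Binary Shannon entropy."""
    return 0.0 if p in (0, 1) else -p * log2(p) - (1 - p) * log2(1 - p)

# Strategy 1: Alice sends x1 through a channel flipping it with small prob eps;
# Bob outputs the received bit whatever y is.
eps = 0.01
J1 = (1 - H(eps)) + 0.0                  # I(x1:b1) + I(x2:b2)
p1 = ((1 - eps) + 0.5) / 2               # averaged over y

# Strategy 2: Alice sends x1 or x2 perfectly, chosen by a bit shared with Bob.
# For each y, b_y = x_y with prob 3/4 (certain half the time, random otherwise),
# i.e. a binary symmetric channel with flip probability 1/4.
J2 = 2 * (1 - H(0.75))
p2 = 0.75

print(J1, p1)   # J1 is large but p1 < 3/4
print(J2, p2)   # J2 approx 0.377 but p2 = 3/4: J and p are not monotonically related
```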

Define p_y as the probability of success when Bob is given y, and the corresponding bias E_y = 2p_y − 1.

When proving Tsirelson's bound, the crucial step uses a quadratic bound on the entropy: 1 − H((1+E)/2) ≥ E²/(2 ln 2). When m = 1, Information Causality therefore implies Σ_y E_y² ≤ 2 ln 2.

Can we derive a 'quadratic bias bound' like this directly?

Information Causality as a non-local game

It is helpful to consider a non-local version of the Information Causality game: Alice holds N random bits x_1...x_N and outputs a bit a; Bob is given a random y ∈ {1,...,N} and outputs a bit b; they succeed when a ⊕ b = x_y, with no communication allowed.

This is at least as hard as the previous version with m = 1 (as Alice can send the message a, and Bob can output b_y = a ⊕ b).

For any quantum strategy, using similar techniques to those in the non-local computation paper [Linden et al. (2007)], we can define suitable operators and bound the biases E_y.

Hence we obtain the quantum bound Σ_y E_y² ≤ 1.

- This is easily saturated classically (a = x_1, b = 0).
- With this figure of merit, quantum theory is no better than classical. Yet with general non-signalling correlations the sum can equal N.
- It is stronger than the bound given by Information Causality (Σ_y E_y² ≤ 2 ln 2).
- Furthermore, any set of biases E_y satisfying Σ_y E_y² ≤ 1 is quantum realizable. This bound therefore characterizes the achievable set of biases more comprehensively than Information Causality.

When we set all E_y equal, then E_y = 1/√N, and we achieve p = (1 + 1/√N)/2 for every y.

As this non-local game is at least as hard as the original, we can achieve the previously known upper bound on the success probability of the (m = 1) Information Causality game for all N.

We can easily extend the proof to get quadratic bounds for a more general class of inner product games.
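A quick numerical check of the equal-bias point (assuming the quadratic bias bound Σ_y E_y² ≤ 1 from the previous slide):

```python
from math import sqrt

# Under the quadratic bias bound sum_y E_y^2 <= 1, making all biases equal
# gives E_y = 1/sqrt(N), i.e. success probability p_N = (1 + 1/sqrt(N))/2.
p = {N: (1 + 1 / sqrt(N)) / 2 for N in (1, 2, 4, 16)}
print(p)  # N=1 gives p=1; N=2 gives (2+sqrt(2))/4, the CHSH/Tsirelson value
```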

Inner product game (with Bob's input having any distribution): Alice holds N random bits x_1...x_N and outputs a; Bob holds N bits y_1...y_N and outputs b; they succeed when a ⊕ b = x·y (the inner product modulo 2).

- When Bob's bit string is restricted to contain a single 1, this implies the Information Causality result.
- When N = 1, it yields Tsirelson's bound, and the stronger quadratic version [Uffink 2002].

Summary of the probabilistic perspective

- The form of the mutual information does not seem crucial in deriving Tsirelson's bound from Information Causality. Instead, quadratic bias bounds seem to naturally characterise quantum correlations.
- The inner product game with figure of merit Σ_y E_y² is another task for which quantum theory is no better than classical, but which slightly-stronger-than-quantum correlations help with.

I.C. – An entropic perspective

The key role of the mutual information is in deriving Information Causality. The bound J ≤ m follows from the existence of a mutual information I(X:Y) for all systems X, Y, satisfying:
1. Symmetry: I(X:Y) = I(Y:X)
2. Consistency: I(X:Y) = the classical mutual information when X and Y are classical
3. Data processing: I(X:Y) ≥ I(X:T(Y)) for any transformation T
4. Chain rule: I(XY:Z) − I(X:Z) = I(Y:XZ) − I(X:Y)
(plus the existence of some natural transformations)

But mutual information is a complicated quantity (it has two arguments), and this list of properties is quite extensive. Instead, we can derive Information Causality from the existence of an entropy H(X), defined for all systems X in the theory, satisfying just two conditions:
1. Consistency: H(X) = the Shannon entropy when X is classical
2. Local evolution: ΔH(XY) ≥ ΔH(X) + ΔH(Y) for any local transformation on X and Y

The intuition behind the second condition is that local transformations can destroy but not create correlations, generally leading to more uncertainty than their local effect.

To derive Information Causality, we can use H to construct a measure of mutual information I(X:Y) = H(X) + H(Y) − H(XY), then use the original proof. The desired properties of I(X:Y) follow simply:
1. Symmetry: trivial
2. Consistency: from consistency of H(X)
3. Data processing: equivalent to local evolution of H(X)
4. Chain rule: trivial

Hence, Information Causality holds in any theory which admits a 'good' measure of entropy, i.e. one which obeys consistency and local evolution. The Shannon and von Neumann entropies are both 'good'.
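The construction I(X:Y) = H(X) + H(Y) − H(XY) and the data-processing property can be illustrated for classical distributions; a sketch using the Shannon entropy (the joint distribution and the noisy map T are chosen arbitrarily):

```python
import itertools
import random
from math import log2

def shannon(dist):
    """Shannon entropy of a {outcome: probability} dictionary."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        out[outcome[idx]] = out.get(outcome[idx], 0.0) + p
    return out

def mutual_info(joint):
    # I(X:Y) = H(X) + H(Y) - H(XY), built from the entropy alone
    return shannon(marginal(joint, 0)) + shannon(marginal(joint, 1)) - shannon(joint)

# Random joint distribution on two bits (X, Y).
random.seed(0)
w = [random.random() for _ in range(4)]
t = sum(w)
joint = {xy: wi / t for xy, wi in zip(itertools.product((0, 1), (0, 1)), w)}

# Apply a local transformation T to Y alone: flip it with probability 0.2.
flip = 0.2
processed = {}
for (x, y), p in joint.items():
    processed[(x, y)] = processed.get((x, y), 0.0) + p * (1 - flip)
    processed[(x, 1 - y)] = processed.get((x, 1 - y), 0.0) + p * flip

print(mutual_info(joint), mutual_info(processed))  # processing can only decrease I
```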

We can prove that any 'good' entropy shares the following standard properties of the Shannon and von Neumann entropies:
- Subadditivity: H(XY) ≤ H(X) + H(Y)
- Strong subadditivity: H(X_1 X_2 | Y) ≤ H(X_1 | Y) + H(X_2 | Y)
- Classical positivity: H(X | Y) ≥ 0 whenever X is classical
(where we have defined H(X|Y) = H(XY) − H(Y))

Instead of proceeding via the mutual information, we can use these relations to derive Information Causality directly.
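Strong subadditivity can likewise be spot-checked for the Shannon entropy on a random three-bit distribution; a sketch (the distribution is arbitrary):

```python
import itertools
import random
from math import log2

def shannon(dist):
    """Shannon entropy of a {outcome: probability} dictionary."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marg(joint, keep):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# Random joint distribution over three bits (X1, X2, Y), with Y at index 2.
random.seed(1)
w = [random.random() for _ in range(8)]
t = sum(w)
joint = {o: wi / t for o, wi in zip(itertools.product((0, 1), repeat=3), w)}

def cond_entropy(a_idx):
    # H(A|Y) = H(AY) - H(Y)
    return shannon(marg(joint, tuple(a_idx) + (2,))) - shannon(marg(joint, (2,)))

lhs = cond_entropy((0, 1))                       # H(X1 X2 | Y)
rhs = cond_entropy((0,)) + cond_entropy((1,))    # H(X1 | Y) + H(X2 | Y)
print(lhs <= rhs + 1e-12)  # True -- strong subadditivity holds for Shannon entropy
```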

This actually allows us to prove a slight generalisation of Information Causality, which makes no assumptions about the distribution on Alice's inputs x_1...x_N. The intuition here is that the uncertainty that Bob has about Alice's bits at the end of the game must be at least the original uncertainty about her inputs minus the information gained from the message.

Entropy in general probabilistic theories

We can define an entropy operationally in any theory [Short, Wehner / Barrett et al. / Kimura et al. (2010)]:
- Measurement entropy: H(X) is the minimal Shannon entropy of the outcomes of a fine-grained measurement on X.
- Decomposition entropy: H(X) is the minimal Shannon entropy of the coefficients when X is written as a mixture of pure states.

These both obey consistency, and both give the von Neumann entropy for quantum theory. However, for many theories they violate local evolution.

Entropy and Tsirelson's bound (also in Dahlsten et al. 2011)

Finally, note that due to Information Causality, the existence of a 'good' entropy ⇒ Tsirelson's bound.

The existence of a 'good' measure of entropy seems like a very general property, yet remarkably it leads to a very specific piece of quantum structure. This also means that no 'good' measure of entropy exists in physical theories more non-local than Tsirelson's bound (such as box-world, which admits all non-signalling correlations).

Summary and open questions

- Quantum theory satisfies and saturates a simple quadratic bias bound Σ_y E_y² ≤ 1 for the inner product and Information Causality games, which generalises Tsirelson's bound. Can we find other similar quadratic bounds?
- The existence of a 'good' measure of entropy in a theory (satisfying just two properties) is sufficient to derive Information Causality and Tsirelson's bound. Is quantum theory the most general theory with such an entropy? Is there a connection to thermodynamics?
