
1 Locking of correlations (Debbie Leung, U. Waterloo)

From: Charles Bennett
Date: Sept 06, 2001
Subject: Pictures from Huangshan, China

Dear Friends, Here is a picture of the famous mountain view called Cloud-Dispelling Pavilion, and the curious lovers' locks, which I used in my talk this morning to illustrate the locking and unlocking of quantum information. Lovers' locks are a custom in China and Japan wherein lovers leave a padlock locked onto the infrastructure at some scenic location, signifying eternal love... Thinking about this custom, I wondered if anyone had thought of designing a special lovers' padlock, without a keyhole or unlocking mechanism. Such a lock would be cheaper to manufacture, and would express greater commitment (no danger of some faithless guy returning at midnight to retrieve the lock and maybe use it with someone else)...

2 Can we increase a correlation a lot using little communication?
Alice and Bob exchange messages 1, 2, …, l; let ρ_AB^k denote the state after the k-th message, and let f be a correlation function. WLOG ρ_AB^0 is uncorrelated. Reasonable normalization: f(ρ_AB^k) ≤ total communication up to the k-th message.
Question: is (*) possible, where (*) is: f(ρ_AB^l) − f(ρ_AB^k) ≫ total size of the messages taking ρ_AB^k to ρ_AB^l?
Not classically, but yes for some f quantumly.
Interpreting (*): the correlation f already exists in ρ_AB^k yet is inaccessible, and the little extra communication unlocks it!
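The classical half of the claim ("not classically") can be spot-checked numerically: a c-bit classical message raises mutual information by at most c. A minimal sketch, assuming an arbitrary random 4×4 joint distribution and a one-bit message b = x mod 2 (both choices are illustrative only, not from the talk):

```python
import math, random

def mutual_info(p):
    """I(X:Y) in bits for a joint distribution p[x][y]."""
    px = [sum(row) for row in p]
    py = [sum(col) for col in zip(*p)]
    return sum(p[x][y] * math.log2(p[x][y] / (px[x] * py[y]))
               for x in range(len(p)) for y in range(len(p[0]))
               if p[x][y] > 0)

random.seed(0)
for _ in range(100):
    # random 4x4 joint distribution for (X, Y)
    w = [[random.random() for _ in range(4)] for _ in range(4)]
    s = sum(map(sum, w))
    p = [[v / s for v in row] for row in w]
    # Alice sends Bob one bit b = x mod 2; Bob now holds the pair (y, b)
    q = [[0.0] * 8 for _ in range(4)]
    for x in range(4):
        for y in range(4):
            q[x][2 * y + (x % 2)] += p[x][y]
    gain = mutual_info(q) - mutual_info(p)
    assert gain <= 1 + 1e-9  # a 1-bit message adds at most 1 bit of correlation
```

This is exactly the chain-rule bound on slide 7: I(A₁A₂ : B) − I(A₁ : B) ≤ H(A₂).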

3 Locking of correlations -- Survey
- Locking classical mutual information [DiVincenzo, M. Horodecki, Leung, Smolin, Terhal, quant-ph/0303088; Hayden, Leung, Shor, Winter (Buhrman & Christandl), quant-ph/0307104]
- Entropic uncertainty relations [Maassen, Uffink '88]
- Applications: key uncertainty [Damgaard, Pedersen, Salvail, quant-ph/0407066]
- Extension 1: Locking entanglement; application: near-maximal entanglement deficit of state preparation [K., M., P. Horodecki, Oppenheim, quant-ph/0404096]
- Extension 2: Multi-round locking; application: separation of capacities [DHLST; Bennett, Devetak, Shor, Smolin, quant-ph/0406086]

4 Definition: Classical Mutual Information I_c
Alice and Bob share ρ_AB and apply local measurements M_A and M_B, obtaining random variables X_A and X_B (with outcomes x_a, x_b).
Def: I_c(ρ_AB) := max_{M_A ⊗ M_B} I(X_A : X_B)

5 Special case: accessible information I_acc
Alice and Bob share ρ_AB; measurements M_A, M_B yield random variables X, Y (outcomes x, y), and I_c(ρ_AB) := max_{M_A ⊗ M_B} I(X : Y).
Special case: ρ_AB = Σ_x p_x |x⟩⟨x| ⊗ σ_x for {|x⟩} a basis.
Alice's part is classical, and her optimal measurement M_A is along {|x⟩}. Bob is given σ_x with probability p_x (call E := {p_x, σ_x} an "ensemble"). M_B should maximize Bob's information about x; this maximal information is the accessible information of the ensemble, I_acc(E).
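A brute-force sketch of I_acc for a toy qubit ensemble. The ensemble {½: |0⟩, ½: |+⟩} and the grid over measurement angles are illustrative choices, not from the talk; the scan covers projective measurements in the x-z plane of the Bloch sphere, which suffices here by symmetry:

```python
import math

# Toy ensemble E = {1/2: |0>, 1/2: |+>}; qubit states as Bloch vectors
states = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]   # |0> -> +z axis, |+> -> +x axis
probs = [0.5, 0.5]

def H(ps):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def info(theta):
    """I(X:Y) for a projective measurement along Bloch direction theta in the x-z plane."""
    n = (math.sin(theta), 0.0, math.cos(theta))
    joint = []
    for p, s in zip(probs, states):
        q = (1 + sum(a * b for a, b in zip(n, s))) / 2   # Born rule: Pr(outcome +n)
        joint.append((p * q, p * (1 - q)))
    py = [sum(col) for col in zip(*joint)]
    return H(probs) + H(py) - H([v for row in joint for v in row])

# brute-force the optimal measurement angle
i_acc = max(info(k * math.pi / 2000) for k in range(2000))
```

Here i_acc comes out near 0.40 bits, strictly below the H(X) = 1 bit Alice holds: no measurement by Bob can access all of the classical information in a non-orthogonal ensemble.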

6 Locking I_c
Alice holds X ∈_R {1,…,d}, i.e. m = log d bits (e.g., x = the exam questions, and the exam will be on Wednesday).
(1) Wed: Alice sends x to Bob.

7 Locking I_c
Alice holds X ∈_R {1,…,d} (m = log d bits) and T ∈_R {1,…,r} (k = log r bits).
(1) Sat: Alice sends U_t|x⟩.
(2) Wed: Alice sends t.
Alice's wish: k ≪ m, and Bob learns little about x on Saturday.
Not classically:
I(A₁A₂ : B) − I(A₁ : B) = H(A₁A₂) + H(B) − H(A₁A₂B) − [H(A₁) + H(B) − H(A₁B)]
 = H(A₁A₂) − H(A₁) + H(A₁B) − H(A₁A₂B)
 ≤ H(A₂) + 0
Quantumly: Alice holds X and T together; each pair (x, t) occurs with probability 1/(rd), and Bob holds σ_xt = U_t |x⟩⟨x| U_t†.

8 Locking I_c
Alice holds X ∈_R {1,…,d} (m = log d) and T ∈_R {1,…,r} (k = log r); (1) Sat: send U_t|x⟩; (2) Wed: send t.
The set {U_t} is picked i.i.d. and pre-agreed upon; t is chosen at random by Alice during the run.

Scheme                      | k           | I_acc (w/o t)  | I_acc (given t)     | Difference
1: r=2, U₁=I, U₂=FT         | 1           | ½ log d        | log d + 1           | ½ log d
2: r=(log d)³, U_t ∈_R Haar | 3 log log d | ≤ ε log d + 3  | log d + 3 log log d | ≥ (1−ε) log d
3: r=(log d)⁵, U_t ∈_R Haar | 5 log log d | ≤ const        | log d + 5 log log d | ≥ log d − const

(For d large: log d ≥ O[(1/ε) log(1/ε)].)
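A sketch of scheme 1 (r = 2, U₁ = I, U₂ = the Fourier transform) for a small dimension, showing the "given t" column of the table: once Bob learns t on Wednesday, he measures in the right basis and recovers every bit of x. (The choice d = 8 and the plain-list matrices are illustrative.)

```python
import cmath, random

d = 8   # illustrative dimension
# U_1 = identity, U_2 = discrete Fourier transform (two mutually unbiased bases)
I = [[1.0 if j == k else 0.0 for k in range(d)] for j in range(d)]
FT = [[cmath.exp(2j * cmath.pi * j * k / d) / d ** 0.5 for k in range(d)]
      for j in range(d)]

def encode(x, t):
    """The state U_t|x> (column x of U_t) that Alice sends on Saturday."""
    U = (I, FT)[t]
    return [U[j][x] for j in range(d)]

def decode(psi, t):
    """Knowing t (sent on Wednesday), Bob measures in the basis {U_t|x>}."""
    U = (I, FT)[t]
    probs = [abs(sum(U[j][x].conjugate() * psi[j] for j in range(d))) ** 2
             for x in range(d)]
    return max(range(d), key=lambda x: probs[x])

random.seed(1)
for _ in range(50):
    x, t = random.randrange(d), random.randrange(2)
    assert decode(encode(x, t), t) == x   # with t, Bob recovers all log d bits of x
```

Without t, Bob holds one state from the mixed ensemble {1/(rd), U_t|x⟩}, and slides 9-10 bound what any measurement on it can reveal.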

9 Why is I_acc (w/o t) bounded?
On Saturday Bob holds the ensemble {1/(rd), U_t|x⟩}, so I_c(ρ_AB) = I_acc({1/(rd), U_t|x⟩}).
WLOG M_B = {λ_j |φ_j⟩⟨φ_j|} with Σ_j λ_j = d. (Let J be the random variable for the outcome.)
prob(j | x,t) = λ_j |⟨φ_j|U_t|x⟩|²,  prob(j) = λ_j/d.
I_acc = I(J : XT) = H(J) − H(J | XT)
 = −Σ_j (λ_j/d) log(λ_j/d) + (1/(rd)) Σ_{x,t} Σ_j λ_j |⟨φ_j|U_t|x⟩|² log[λ_j |⟨φ_j|U_t|x⟩|²]
 = log d + Σ_j (λ_j/d) (1/r) Σ_t Σ_x |⟨φ_j|U_t|x⟩|² log |⟨φ_j|U_t|x⟩|²

10 Why is I_c(ρ_AB^Sat) bounded?
I_c(ρ_AB) = I_acc({1/(rd), U_t|x⟩}); WLOG M_B = {λ_j |φ_j⟩⟨φ_j|}.
I_c(ρ_AB) = max_{M_B} [log d + Σ_j (λ_j/d) (1/r) Σ_{x,t} |⟨φ_j|U_t|x⟩|² log |⟨φ_j|U_t|x⟩|²]  (algebra, previous slide)
 ≤ max_{|φ⟩} [log d − (1/r) Σ_t (−Σ_x |⟨φ|U_t|x⟩|² log |⟨φ|U_t|x⟩|²)]  (convexity in the distribution {λ_j/d})
 = log d − min_{|φ⟩} (1/r) Σ_t H_t,
where H_t is the entropy of the outcome x when measuring |φ⟩ along the basis {U_t|x⟩}. This is the average uncertainty of the outcome when measuring |φ⟩ in a basis chosen at random from the set; any lower bound holding for all |φ⟩ is called an "entropic uncertainty relation" (EUR) for the set of bases.
Goal: prove an EUR min_{|φ⟩} (1/r) Σ_t H_t ≥ α; then I_c(ρ_AB) ≤ log d − α.
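For scheme 1 the needed EUR is the Maassen-Uffink relation: for the computational and Fourier bases the maximal overlap between basis vectors is 1/√d, so H₁ + H₂ ≥ log d, i.e. (1/2)(H₁ + H₂) ≥ ½ log d for every state. A numerical spot-check on random states (d = 4 is an arbitrary illustrative choice):

```python
import cmath, math, random

d = 4   # illustrative dimension
log_d = math.log2(d)

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 1e-12)

def random_state(d):
    """Haar-random pure state: normalized complex Gaussian vector."""
    a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in a))
    return [z / norm for z in a]

def H_comp(psi):
    """Entropy of the outcome when measuring along the computational basis {|x>}."""
    return entropy([abs(z) ** 2 for z in psi])

def H_fourier(psi):
    """Entropy of the outcome when measuring along the Fourier basis {FT|x>}."""
    amps = [sum(cmath.exp(-2j * cmath.pi * j * x / d) * psi[j] for j in range(d))
            / math.sqrt(d) for x in range(d)]
    return entropy([abs(a) ** 2 for a in amps])

random.seed(2)
for _ in range(200):
    psi = random_state(d)
    # Maassen-Uffink EUR: (H_1 + H_2)/2 >= (1/2) log d for these two bases
    assert (H_comp(psi) + H_fourier(psi)) / 2 >= log_d / 2 - 1e-9
```

Plugging α = ½ log d into the bound above gives I_c(ρ_AB^Sat) ≤ ½ log d for scheme 1.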

11 Proving min_{|φ⟩} (1/r) Σ_t H_t ≥ α, so that I_c(ρ_AB) ≤ log d − α
(H_t = entropy of the outcome x when measuring |φ⟩ along {U_t|x⟩}.)

Scheme                      | I_c(ρ_AB^Sat) | I_c(ρ_AB^Wed)       | ΔI_c
1: r=2, U₁=I, U₂=FT         | ≤ ½ log d     | log d + 1           | ≥ ½ log d
2: r=(log d)³, U_t ∈_R Haar | ≤ ε log d + 3 | log d + 3 log log d | ≥ (1−ε) log d
3: r=(log d)⁵, U_t ∈_R Haar | ≤ const       | log d + 5 log log d | ≥ log d − const

12 Proving min_{|φ⟩} (1/r) Σ_t H_t ≥ α, so that I_c(ρ_AB) ≤ log d − α
(H_t = entropy of the outcome x when measuring |φ⟩ along {U_t|x⟩}.)
For scheme 1: min_{|φ⟩} (1/r) Σ_t H_t ≥ ½ log d.
For scheme 2: min_{|φ⟩} (1/r) Σ_t H_t ≥ (1−ε) log d − 3.
For scheme 3: min_{|φ⟩} (1/r) Σ_t H_t ≥ log d − const, for log d ≥ (const/ε) log(20/ε).
These give the I_c(ρ_AB^Sat) bounds in the table on the previous slide.

13 Proving min_{|φ⟩} (1/r) Σ_t H_t ≥ α, so that I_c(ρ_AB) ≤ log d − α
(H_t|φ⟩ = entropy of the outcome x when measuring |φ⟩ along {U_t|x⟩}.)
Proof ideas for: min_{|φ⟩} (1/r) Σ_t H_t|φ⟩ ≥ (1−ε)(log d − 3).
Pick one random state |φ⟩:
- Levy's lemma: Pr[ H_t|φ⟩ < log d − 2 ] ≤ exp(−∼ d/(log d)²)
- Chernoff bound: Pr[ (1/r) Σ_t H_t|φ⟩ ≤ (1−ε/2)(log d − 2) ] ≤ exp(−∼ ε r d/(log d)²)
Union bound over N states:
- Pr[ min_{|φ̃⟩} (1/r) Σ_t H_t|φ̃⟩ ≤ (1−ε/2)(log d − 2) ] ≤ N exp(−∼ ε r d/(log d)²)
- Take N = (10/ε)^{2d} and a net {|φ̃⟩} such that every |φ⟩ is ε/2-close to some |φ̃⟩.
For all |φ⟩:
- Continuity: ‖ |φ⟩⟨φ| − |φ̃⟩⟨φ̃| ‖_tr ≤ ε/2 ⇒ |H_t|φ⟩ − H_t|φ̃⟩| ≤ (ε/2) log d + 1
- Pr[ min_{|φ⟩} (1/r) Σ_t H_t|φ⟩ ≤ (1−ε)(log d − 3) ] ≤ (10/ε)^{2d} exp(−∼ ε r d/(log d)²)
(details needed)
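The concentration driving Levy's lemma can be seen empirically: the outcome entropy of a Haar-random state measured in a fixed basis sits close to log d, well above the log d − 2 threshold. A rough numerical sketch (d = 64 and the sample count are arbitrary illustrative choices):

```python
import math, random

d, samples = 64, 200   # illustrative sizes
log_d = math.log2(d)

def haar_probs(d):
    """Outcome distribution of a Haar-random state in a fixed basis:
    normalized |amplitudes|^2 of a complex Gaussian vector."""
    w = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(d)]
    s = sum(w)
    return [v / s for v in w]

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

random.seed(3)
hs = [entropy(haar_probs(d)) for _ in range(samples)]
assert min(hs) > log_d - 2            # every sample clears the Levy-lemma threshold
assert sum(hs) / samples > log_d - 1  # and the average sits close to log d
```

Levy's lemma upgrades this empirical picture to an exponentially small failure probability, which the Chernoff and union-bound steps above then amplify over the r bases and the net of states.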

14 Proving min_{|φ⟩} (1/r) Σ_t H_t ≥ α, so that I_c(ρ_AB) ≤ log d − α
(H_t|φ⟩ = entropy of the outcome x when measuring |φ⟩ along {U_t|x⟩}.)
Proof ideas for: min_{|φ⟩} (1/r) Σ_t H_t|φ⟩ ≥ (1−ε)(log d − 3).
- Pr[ min_{|φ⟩} (1/r) Σ_t H_t|φ⟩ ≤ (1−ε)(log d − 3) ] ≤ (10/ε)^{2d} exp(−∼ ε r d/(log d)²)
- If Pr < 1, the bad event sometimes fails to occur, i.e. ∃ {U_t} s.t. the claimed EUR holds.
- r ∼ (1/ε) (log d)² log(1/ε) is sufficient, from which:
  r = (log d)³ is OK for ε = const;
  r = (log d)⁵ is OK for ε = 1/log d.

15 Punchline:
Alice holds X ∈_R {1,…,d} (m = log d) and T ∈_R {1,…,r} (k = log r); (1) Sat: she sends U_t|x⟩.

Scheme                      | I_c(ρ_AB^Sat)
1: r=2, U₁=I, U₂=FT         | ≤ ½ log d
2: r=(log d)³, U_t ∈_R Haar | ≤ ε log d + 3
3: r=(log d)⁵, U_t ∈_R Haar | ≤ const

