# Computational Analogues of Entropy Boaz Barak Ronen Shaltiel Avi Wigderson.



## Statistical Min-Entropy

Definition: H(X) ≥ k iff max_x Pr[X = x] ≤ 2^{−k} (X a random variable over {0,1}^n).

Note that H(X) ≤ Shannon-Ent(X).

Properties:
- H(X) = n iff X ~ U_n.
- H(X,Y) ≥ H(X) (concatenation).
- If H(X) ≥ k then there exists an (efficient) f such that f(X) ~ U_{k/2} (extraction).

Our objectives:
1. Investigate possible definitions for computational min-entropy.
2. Check whether the computational definitions satisfy analogues of the statistical properties.
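As a quick numeric illustration of the definition above, min-entropy is determined by the single most likely outcome, so it never exceeds Shannon entropy. The distributions below are invented for this example, not taken from the talk.

```python
# Sanity check: H(X) = -log2(max_x Pr[X = x]), and H(X) <= Shannon-Ent(X).
import math

def min_entropy(probs):
    return -math.log2(max(probs))

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform over {0,1}^3: H(X) = 3 = n, matching "H(X) = n iff X ~ U_n".
uniform = [1 / 8] * 8
print(min_entropy(uniform))  # 3.0

# A skewed source: min-entropy is 1, while Shannon entropy is larger (~2.4).
skewed = [1 / 2] + [1 / 14] * 7
print(min_entropy(skewed), shannon_entropy(skewed))
```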

## Our Contributions

- Study 3 variants (1 new) of pseudoentropy.
- Equivalence & separation results for several computational models.
- Study analogues of information-theoretic results.

In this talk:
- Present the 3 variants.
- Show 2 results + proof sketches.

## Review: Pseudorandomness

Def: X is pseudorandom if max_{D ∈ C} bias_D(X, U_n) < ε, i.e., X is computationally indistinguishable from U_n.

- C: a class of efficient algorithms (e.g., s-sized circuits).
- bias_D(X,Y) = | E[D(X)] − E[D(Y)] |.
- ε: a parameter (in this talk: some constant ε > 0).
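The bias of a single distinguisher can be estimated by sampling. The distinguisher and source below are invented for illustration; the talk treats D as an arbitrary member of a circuit class C, not a concrete function.

```python
# Estimate bias_D(X, Y) = |E[D(X)] - E[D(Y)]| by Monte Carlo sampling.
import random

random.seed(0)  # deterministic toy run
n = 8

def D(x):
    # Toy distinguisher: outputs 1 iff the first bit is 0.
    return 1 if x[0] == 0 else 0

def sample_uniform():
    return [random.randint(0, 1) for _ in range(n)]

def sample_biased():
    # Toy source whose first bit is 0 with probability 3/4.
    x = sample_uniform()
    x[0] = 0 if random.random() < 0.75 else 1
    return x

def bias(D, sample_x, sample_y, trials=50_000):
    ex = sum(D(sample_x()) for _ in range(trials)) / trials
    ey = sum(D(sample_y()) for _ in range(trials)) / trials
    return abs(ex - ey)

print(bias(D, sample_biased, sample_uniform))  # ≈ 0.25
```

Here D achieves bias about 1/4, so this toy source is not pseudorandom against any class C containing D (for ε ≤ 1/4).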

## Defining Pseudoentropy

Recall: X is pseudorandom if max_{D ∈ C} bias_D(X, U_n) < ε.

Def 1 [HILL]: H_HILL(X) ≥ k if min_{Y: H(Y) ≥ k} max_{D ∈ C} bias_D(X,Y) < ε, i.e., X is computationally indistinguishable from some Y with statistical min-entropy ≥ k.

Def 2: H_Met(X) ≥ k if max_{D ∈ C} min_{Y: H(Y) ≥ k} bias_D(X,Y) < ε, i.e., for every efficient D, X is indistinguishable by D from some Y = Y(D) with statistical min-entropy ≥ k.

Def 3 [Yao]: H_Yao(X) ≥ k if X cannot be efficiently compressed to k−1 bits.

## Defining Pseudoentropy (cont.)

- H_HILL(X) ≥ k if min_{Y: H(Y) ≥ k} max_{D ∈ C} bias_D(X,Y) < ε.
- H_Met(X) ≥ k if max_{D ∈ C} min_{Y: H(Y) ≥ k} bias_D(X,Y) < ε.
- H_Yao(X) ≥ k if X can't be efficiently compressed to k−1 bits.

Claim 1: H(X) ≤ H_HILL(X) ≤ H_Met(X) ≤ H_Yao(X).

Claim 2: For k = n, all 3 definitions are equivalent to pseudorandomness.

Claim 3: All 3 definitions satisfy the extraction property. [Tre]

## HILL & Metric Def are Equivalent

- H_HILL(X) ≥ k if min_{Y: H(Y) ≥ k} max_{D ∈ C} bias_D(X,Y) < ε.
- H_Met(X) ≥ k if max_{D ∈ C} min_{Y: H(Y) ≥ k} bias_D(X,Y) < ε.

Thm 1: H_HILL(X) = H_Met(X) (for C = poly-sized circuits, any ε).

Proof: Suppose H_HILL(X) < k …
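The proof sketch on this slide is cut off in the transcript. The standard route between the two definitions exchanges the min and max via von Neumann's min-max theorem; the following is a hedged reconstruction, not the slide's text.

```latex
% If H_HILL(X) < k, then for every Y with H(Y) >= k some D_Y in C has
% bias_{D_Y}(X,Y) >= eps.  The min-max theorem lets the quantifiers be
% exchanged, at the price of allowing convex combinations of distinguishers:
\[
\min_{H(Y)\ge k}\ \max_{D\in C}\ \mathrm{bias}_D(X,Y)\ \ge\ \varepsilon
\;\Longrightarrow\;
\max_{\bar{D}\in\mathrm{conv}(C)}\ \min_{H(Y)\ge k}\ \mathrm{bias}_{\bar{D}}(X,Y)\ \ge\ \varepsilon .
\]
% Sampling a few circuits from the distribution \bar{D} and averaging them
% approximates \bar{D} by a single poly-sized circuit, which gives
% H_Met(X) < k with slightly degraded parameters.
```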

## Unpredictability & Entropy

Thm [Yao]: If X is unpredictable with advantage ε, then X is pseudorandom with parameter ε′ = n·ε.

The loss of a factor of n is due to the hybrid argument, which makes the theorem useless for constant advantage. This loss can be crucial for some applications (e.g., extractors, derandomizing small-space algorithms).

Can we do better?
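The factor-n loss comes from a telescoping sum over hybrid distributions; a hedged sketch of the standard argument (my reconstruction, not the slide's text):

```latex
% Hybrids H_i = X_1 \cdots X_i\, U_{i+1} \cdots U_n, so H_n = X, H_0 = U_n.
% If some D has bias_D(X, U_n) >= eps', the triangle inequality gives
\[
\varepsilon' \;\le\; \bigl|\mathbb{E}[D(H_n)] - \mathbb{E}[D(H_0)]\bigr|
\;\le\; \sum_{i=1}^{n} \bigl|\mathbb{E}[D(H_i)] - \mathbb{E}[D(H_{i-1})]\bigr| ,
\]
% so some index i satisfies |E[D(H_i)] - E[D(H_{i-1})]| >= eps'/n, and the
% standard reduction turns D into a predictor for X_i with advantage ~eps'/n.
% Contrapositive: unpredictability with advantage eps only yields
% pseudorandomness with parameter eps' = n * eps -- the factor-n loss.
```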

## Unpredictability & Entropy (cont.)

IT Fact [TZS]: If X is information-theoretically unpredictable with constant advantage, then H(X) = Ω(n).

We obtain the following imperfect analogue:

Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

In the paper: a variant of Thm 2 for nonuniform online logspace.

Thm 2: If X is unpredictable by SAT-gate circuits with constant advantage, then H_Met(X) = Ω(n).

Proof: Suppose that H_Met(X) < δn for a small constant δ. We'll construct a SAT-gate predictor P such that

Pr_{i,X}[ P(X_1, …, X_{i−1}) = X_i ] = 1 − δ.

We have that max_{D ∈ C} min_{Y: H(Y) ≥ δn} bias_D(X,Y) ≥ ε, i.e., there exists a D such that for every Y, if H(Y) ≥ δn then bias_D(X,Y) ≥ ε.

Simplifying assumptions:
1. |D^{−1}(1)| < 2^{δn}
2. Pr_X[ D(X) = 1 ] = 1

(Picture: {0,1}^n with the accepting set D^{−1}(1) containing the support of X.)

Define the predictor P as follows: P(x_1, …, x_i) = 0 iff Pr[ D(x_1, …, x_i, 0, U_{n−i−1}) = 1 ] > ½.

Recall the assumptions: (1) |D^{−1}(1)| < 2^{δn}; (2) Pr_X[ D(X) = 1 ] = 1.

Note that P does not depend on X and can be constructed with an NP oracle (approximate counting [JVV]).

Claim: For every x ∈ D^{−1}(1), P correctly predicts at least (1 − δ)n indices of x.
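A toy, exact-counting version of this construction can be run directly. The talk builds P with an NP oracle via approximate counting [JVV]; here n is tiny, so we count exactly. The distinguisher and its accepting set are invented for illustration, and P uses the natural count-comparison variant of the slide's rule (predict whichever next bit admits more accepted completions).

```python
# Toy predictor P built from a distinguisher D by counting completions.
from itertools import product

n = 6
ACCEPT = {(0, 0, 1, 0, 1, 1), (1, 0, 0, 1, 1, 0)}  # D^{-1}(1), size 2

def D(x):
    return x in ACCEPT

def completions(prefix, b):
    """Count accepted strings extending prefix + (b,)."""
    i = len(prefix)
    return sum(1 for rest in product((0, 1), repeat=n - i - 1)
               if D(tuple(prefix) + (b,) + rest))

def P(prefix):
    # Predict the next bit with more accepted completions (ties -> 0).
    return 0 if completions(prefix, 0) >= completions(prefix, 1) else 1

for x in sorted(ACCEPT):
    hits = sum(P(x[:i]) == x[i] for i in range(n))
    print(x, hits)
```

On this example each x ∈ D^{−1}(1) is predicted in at least n − log₂|D^{−1}(1)| = 5 of its 6 indices, matching the counting argument on the next slide.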

P(x_1, …, x_i) = 0 iff Pr[ D(x_1, …, x_i, 0, U_{n−i−1}) = 1 ] > ½

Proof: Suppose P fails to predict x in m indices. We'll show that |D^{−1}(1)| ≥ 2^m, obtaining a contradiction.

(Picture: a binary tree of prefixes; starting from 1 accepted string, each failing index at least doubles the count: ≥2, ≥4, ≥8, …, ≥2^m.)
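The tree picture on this slide survives only as a row of inequalities; the doubling step can be reconstructed as follows, using the count-comparison reading of P's rule (an assumption on my part).

```latex
% Let N(p) = |\{ z \in D^{-1}(1) : z \text{ extends prefix } p \}|.
% If P fails at index i, then x_i is the minority continuation, so
\[
N(x_1 \cdots x_{i-1}) \;=\; N(x_1 \cdots x_{i-1} 0) + N(x_1 \cdots x_{i-1} 1)
\;\ge\; 2\, N(x_1 \cdots x_{i-1} x_i),
\]
% while at every other index trivially N(x_1 \cdots x_{i-1}) >= N(x_1 \cdots x_i).
% Walking back from N(x_1 \cdots x_n) = 1 through the m failing indices gives
% |D^{-1}(1)| = N(\text{empty prefix}) >= 2^m.  Since |D^{-1}(1)| < 2^{\delta n},
% P fails on fewer than \delta n indices of every x in D^{-1}(1).
```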

## Open Problems

- An analogue of Thm 2 (unpredictability ⇒ entropy)?
- A meaningful concatenation property?
- Separate Yao & Metric pseudoentropy.
- More results for poly-time computation: prove that RL = L.

