Deterministic Extractors for Small Space Sources Jesse Kamp, Anup Rao, Salil Vadhan, David Zuckerman.



Randomness Extractors
Defn: min-entropy(X) ≥ k if for every x, Pr[X = x] ≤ 2^{-k}.
No "deterministic" (seedless) extractor exists for all X with min-entropy k. Two ways around this:
1. Add a seed.
2. Restrict the class of sources X.
[Diagram: X → Ext → Uniform]
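As an illustration of the definition (not part of the talk), the min-entropy of a finite distribution is just the negative log of its largest probability:

```python
import math

def min_entropy(probs):
    """H_inf(X) = -log2 max_x Pr[X = x] for a finite distribution,
    given as a list of probabilities summing to 1."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -math.log2(max(probs))

# The uniform distribution on 4 outcomes has min-entropy 2.
print(min_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```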

Independent Sources
[Diagram: several independent sources → Ext → Uniform]

Bit-Fixing Sources
[Diagram: bits ? 1 ? ? 0 1 → Ext]

Small Space Sources
A space-s source is a min-entropy-k source generated by a width-2^s branching program with n+1 layers.
[Diagram: width-2^s branching program sampling the bits 110100; each edge carries a (transition probability, output bit) label such as (0.8, 1) or (0.1, 0).]
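The sampling process of such a branching program can be sketched in a few lines (a hypothetical toy, not the paper's code; the `transitions` format is my own):

```python
import random

def sample_space_s_source(n, transitions, start=0):
    """Sample n bits from a width-w branching program.

    transitions[layer][state] is a list of (probability, output_bit,
    next_state) triples whose probabilities sum to 1.
    """
    state, bits = start, []
    for layer in range(n):
        r, acc = random.random(), 0.0
        for prob, bit, nxt in transitions[layer][state]:
            acc += prob
            if r < acc:
                bits.append(bit)
                state = nxt
                break
        else:  # numerical slack: fall back to the last edge
            bits.append(bit)
            state = nxt
    return bits

# Width-2 (space-1) example: state 0 emits a fair coin flip and moves to
# state 1 after the first 1; state 1 deterministically repeats 1 forever.
layer = {0: [(0.5, 0, 0), (0.5, 1, 1)], 1: [(1.0, 1, 1)]}
print(sample_space_s_source(4, [layer] * 4))
```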

Related Work
[Blum]: Markov chains with a constant number of states.
[Koenig, Maurer]: a related model.
[Trevisan, Vadhan]: sources sampled by small circuits; requires complexity-theoretic assumptions.

Small space sources capture:
- Bit-fixing sources: space-0 sources
- General sources with min-entropy k: space-k sources
- c independent sources: space-(n/c) sources

Bit-Fixing Sources can be modeled by space-0 sources.
[Diagram: bits ? 1 ? ? 0 1; each fixed bit b is an edge labeled (1, b), and each unknown bit has two edges labeled (0.5, 1) and (0.5, 0).]
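For oblivious bit-fixing sources there is a classical one-bit deterministic extractor, the parity of all bits; a small sketch (this standard XOR trick, not the construction of this paper):

```python
import random

def xor_extractor(bits):
    """Parity of all bits: a 1-bit deterministic extractor for oblivious
    bit-fixing sources (perfectly uniform if >= 1 bit is unfixed)."""
    out = 0
    for b in bits:
        out ^= b
    return out

def sample_bit_fixing(pattern):
    """pattern uses '?' for a uniform bit, '0'/'1' for fixed bits."""
    return [random.randint(0, 1) if c == '?' else int(c) for c in pattern]

# e.g. the slide's source ? 1 ? ? 0 1
sample = sample_bit_fixing("?1??01")
print(xor_extractor(sample))  # 0 or 1, each with probability 1/2
```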

General Sources are space-n sources.
[Diagram: width-2^n branching program with n layers; edges labeled (Pr[X_1 = 1], 1), (Pr[X_1 = 0], 0), (Pr[X_2 = 1 | X_1 = 1], 1), (Pr[X_2 = 0 | X_1 = 1], 0), …; X = X_1 X_2 X_3 X_4 X_5 …]
Min-entropy-k sources are convex combinations of space-k sources.

c Independent Sources: space-(n/c) sources.
[Diagram: c independent blocks of n/c bits each, sampled by a width-2^{n/c} branching program.]

Our Main Results

Min-Entropy     Space       Error               Output Bits
k = n^{1-c}     n^{1-4c}    2^{-n^c}            99% of k
k = γn          cn          2^{-n/polylog(n)}   99% of k

(c = sufficiently small constant > 0)

Outline
Our Techniques
- Extractor for linear min-entropy rate
- Extractor for polynomial min-entropy rate
Future Directions

We reduce to another model: total-entropy-k independent sources (independent blocks whose min-entropies sum to k).

The Reduction
[Diagram: the source is split at layer 5 into a prefix X and a suffix Y; V is the state at layer 5.]
Conditioned on State_5 = V, the distributions X | State_5 = V and Y | State_5 = V are independent!
Expect the total min-entropy of (X | State_5 = V, Y | State_5 = V) to be about k − s.

Can get many independent sources
[Diagram: the source split into blocks W, X, Y, Z by conditioning on the states at the block boundaries.]
If we condition on t states, we expect to lose ts bits of min-entropy.

Entropy Loss
Let S_1, …, S_t denote the random variables for the states in the t conditioned layers.
Pr[X = x] ≥ Pr[X = x | S_1 = s_1, …, S_t = s_t] · Pr[S_1 = s_1, …, S_t = s_t].
If X | S_1 = s_1, …, S_t = s_t has min-entropy < k − 2ts, then Pr[S_1 = s_1, …, S_t = s_t] < 2^{-2ts}.
Union bound over the at most 2^{ts} state tuples: this happens with probability < 2^{-ts}.
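The bound on this slide can be written out in full (a reconstruction of the standard argument; the shorthand $\bar S = (S_1,\dots,S_t)$ is mine):

```latex
% If X \mid \bar S = \bar s has min-entropy < k - 2ts, then some x has
% \Pr[X = x \mid \bar S = \bar s] > 2^{-(k-2ts)}, while \Pr[X = x] \le 2^{-k}:
\begin{align*}
\Pr[\bar S = \bar s]
  &\le \frac{\Pr[X = x]}{\Pr[X = x \mid \bar S = \bar s]}
   < \frac{2^{-k}}{2^{-(k-2ts)}} = 2^{-2ts}.
\intertext{There are at most $(2^s)^t = 2^{ts}$ state tuples $\bar s$, so by a union bound}
\Pr[\text{some low-entropy tuple occurs}]
  &< 2^{ts} \cdot 2^{-2ts} = 2^{-ts}.
\end{align*}
```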

The Reduction
Every space-s source with min-entropy k is close to a convex combination of sources with t independent blocks and total entropy k − 2ts.
[Diagram: blocks W, X, Y, Z.]

Some Additive Number Theory
[Bourgain, Glibichuk, Konyagin]: (∀ δ > 0) (∃ integers C = C(δ), c = c(δ)): for every non-trivial additive character ψ of GF(2^p) and all independent min-entropy-δp sources X_1, …, X_C,
|E[ψ(X_1 X_2 ⋯ X_C)]| < 2^{-cp}.

Vazirani’s XOR lemma
If Z ∈ GF(2^n) is a random variable with |E[ψ(Z)]| < ε for every nontrivial character ψ, then any m bits of Z are ε·2^{m/2}-close to uniform.
|E[ψ(X_1 X_2 ⋯ X_C)]| < 2^{-cp} ⟹ lsb_m(X_1 X_2 ⋯ X_C) is 2^{m/2 − cp}-close to uniform.
[Diagram: X_1, X_2, X_3, X_4 → lsb(X_1 X_2 X_3 X_4).]
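The XOR lemma can be checked numerically for tiny n over GF(2)^n, where the nontrivial characters are exactly the parity functions (illustration only; the slightly non-uniform distribution below is made up):

```python
import itertools

def char_bias(p, n):
    """max over nonzero a in {0,1}^n of |E[(-1)^{<a,Z>}]| for Z ~ p."""
    best = 0.0
    for a in itertools.product((0, 1), repeat=n):
        if not any(a):
            continue  # skip the trivial character a = 0
        s = sum(prob * (-1) ** sum(ai & zi for ai, zi in zip(a, z))
                for z, prob in p.items())
        best = max(best, abs(s))
    return best

def proj_distance(p, coords):
    """Statistical distance of the projection onto `coords` from uniform."""
    m = len(coords)
    q = {}
    for z, prob in p.items():
        key = tuple(z[i] for i in coords)
        q[key] = q.get(key, 0.0) + prob
    return 0.5 * sum(abs(q.get(y, 0.0) - 2 ** -m)
                     for y in itertools.product((0, 1), repeat=m))

# A slightly non-uniform distribution on 3 bits.
n = 3
p = {z: 1 / 8 for z in itertools.product((0, 1), repeat=n)}
p[(0, 0, 0)] += 0.01
p[(1, 1, 1)] -= 0.01
eps = char_bias(p, n)
for m in (1, 2):  # every m-bit projection obeys SD <= 2^{m/2} * eps
    for coords in itertools.combinations(range(n), m):
        assert proj_distance(p, coords) <= 2 ** (m / 2) * eps + 1e-12
print("XOR lemma bound holds; eps =", round(eps, 4))
```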

More than an independent-sources extractor
Analysis: (X_1 X_2), (X_3 X_4), (X_5 X_6 X_7), X_8 are independent sources, and their product equals X_1 X_2 ⋯ X_8, so the same character-sum bound applies.
[Diagram: X_1 X_2 X_3 X_4 X_5 X_6 X_7 X_8 → lsb_m(X_1 X_2 ⋯ X_8).]

Small Space Extractor for γn entropy
If the source has min-entropy γn, a γ/2 fraction of the blocks must have min-entropy rate γ/2.
Take (2/γ)·C(γ/2) blocks ⟹ C(γ/2) of the blocks have min-entropy rate γ/2.
Output lsb(·) of the product of the blocks.

Result
Theorem: (∀ γ > 0) (∃ β > 0): there is an efficient extractor for min-entropy k ≥ γn and space βn, with output length Ω(n) and error 2^{-Ω(n)}.
Can improve to extract 99% of the min-entropy using techniques from [Gabizon, Raz, Shaltiel].

For Polynomial Entropy Rate
Black boxes:
- Good condensers: [Barak, Kindler, Shaltiel, Sudakov, Wigderson], [Raz]
- Good mergers: [Raz], [Dvir, Raz]
White box:
- Condensing somewhere-random sources: [Rao]

Somewhere Random Source
Def [TS96]: a t × r matrix of bits in which some row is uniformly random.
[Diagram: t × r matrix with one highlighted row.]
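A minimal sketch of the object (the all-zero rows stand in for arbitrary adversarial rows; the function and its parameters are mine, not the paper's):

```python
import random

def somewhere_random_source(t, r, good_row=None):
    """A t x r boolean matrix with one uniformly random ('good') row;
    the remaining rows here are arbitrary (all zeros for simplicity)."""
    if good_row is None:
        good_row = random.randrange(t)
    return [[random.randint(0, 1) for _ in range(r)] if i == good_row
            else [0] * r
            for i in range(t)]

M = somewhere_random_source(t=4, r=8, good_row=2)
print(M)
```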

Aligned Somewhere High Entropy Sources Def: Two somewhere high-entropy sources are aligned if the same row has high entropy in both sources.

Condensers [BKSSW], [Raz], [Z]
[Diagram: the input is split into blocks A, B, C, viewed as elements of a prime field; the output is AC + B. The entropy rate δ grows by a factor of 1.1 while the length shrinks to 2n/3.]
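The field arithmetic in one condensing step can be sketched as follows (hedged: only the map (A, B, C) ↦ AC + B over a prime field comes from the slide; the prime, block values, and iteration schedule here are toy choices):

```python
def condense(a, b, c, p):
    """One condensing step: map three blocks, viewed as elements of the
    prime field F_p, to the single element a*c + b (mod p)."""
    return (a * c + b) % p

def condense_source(blocks, p):
    """Iterate the step, shrinking 3^t blocks down to one field element
    (assumes the number of blocks is a power of 3)."""
    while len(blocks) > 1:
        blocks = [condense(blocks[i], blocks[i + 1], blocks[i + 2], p)
                  for i in range(0, len(blocks) - 2, 3)]
    return blocks[0]

p = 2 ** 13 - 1  # a Mersenne prime, standing in for the field size
print(condense_source([5, 7, 11, 2, 3, 4, 100, 200, 300], p))
```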

Iterating the condenser
[Diagram: after t iterations on blocks A, B, C the entropy rate is (1.1)^t δ and the length is (2/3)^t n.]

Mergers [Raz], [Dvir, Raz]
[Diagram: merging C rows of entropy rate δ; 99% of the rows in the output have entropy rate 0.9δ.]

Condense + Merge [Raz]
[Diagram: condense, then merge; 99% of the rows in the output have entropy rate 1.1δ.]

This process maintains alignment.
[Diagram: after each condense+merge step the aligned rows' rate grows from δ to 1.1δ to (1.1)²δ, ….]

Bottom Line:
[Diagram: after t iterations the rate is (1.1)^t δ; apply [BGK] to the aligned high-entropy rows X_1, Y_1, Z_1 and output lsb(X_1 Y_1 Z_1).]

Extracting from SR-sources [Rao]
[Diagram: somewhere-random source with rows of length r.]
We generalize this to an arbitrary number of sources.

Recap
[Diagram: iterate condense+merge to rate (1.1)^t δ; apply [BGK] via lsb(X_1 Y_1 Z_1); extract from the resulting SR-sources, now for an arbitrary number of sources W, X, Y, Z.]

Solution
Entropy δn ⟹ 2 of the blocks have rate δ/2, 4 of them have rate δ/4, ….
[Diagram: condense+merge to rate (1.1)^t δ, then apply [BGK]: lsb(X_1 Y_1 Z_1).]

Final
Entropy δn ⟹ 2 of the blocks have rate δ/2.
If δ ≥ n^{-0.01}, the number of rows is much smaller than the length of each row.

Result
Theorem (assuming we can find primes): (∀ sufficiently small δ > 0): there is an efficient extractor for min-entropy n^{1-δ} and space n^{1-4δ}, with output length n^{Ω(1)} and error 2^{-n^{Ω(1)}}.
Can improve to extract 99% of the min-entropy using techniques from [Gabizon, Raz, Shaltiel].

Future Directions
Smaller min-entropy k?
- Non-explicit: k = O(log n)
- Our results: k = n^{1-Ω(1)}
Larger space?
- Non-explicit: Ω(k)
- Our results: Ω(k) only for k = Ω(n)
Other natural models?

Questions?

