Presentation on theme: "Coding, Complexity and Sparsity workshop"— Presentation transcript:

1 Coding, Complexity and Sparsity workshop
Pseudo-randomness
Shachar Lovett (IAS)
Coding, Complexity and Sparsity workshop, August 2011

2 Overview
Pseudo-randomness: what? why?
Concrete examples:
Local independence
Expander graphs
Small space computations

3 What is pseudo-randomness?
Explicit (mathematical) objects that “look like” random objects
Random objects? “Look like”? Explicit?

4 What is pseudo-randomness?
Explicit (mathematical) objects that “look like” random objects
Random object: a random function, graph, …
Look like: some functions (distinguishers / tests) cannot distinguish a random object from a pseudorandom one
Explicit: can be generated efficiently

5 What is pseudo-randomness?
Example: pseudo-random bits
Random object: uniform bits U_n over {0,1}^n
Pseudo-random bits: a random variable X over {0,1}^n
Explicit: pseudorandom generator X = G(U_r), where G: {0,1}^r → {0,1}^n is “easy to compute”
Seed length r << n
Tests: a family of functions (distinguishers) that cannot tell X from U_n

6 What is pseudo-randomness good for?
Derandomization of randomized algorithms
Replace the random bits given to the algorithm by “pseudo-random bits”
“Look like”: the algorithm cannot distinguish random bits from pseudo-random bits
Explicit: can generate pseudo-random bits efficiently
Derandomization: enumerate all possibilities for the pseudo-random bits
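
The enumeration idea can be sketched in Python. The randomized test below, its inputs and the seed length are all hypothetical, chosen only to make the "enumerate all seeds, take the majority" loop concrete:

```python
from itertools import product

def randomized_equals_zero(coeffs, bits):
    # Toy randomized check: does the polynomial sum(c_i * x^i) vanish
    # at the "random" point x encoded by the given bits?
    x = int("".join(map(str, bits)), 2)
    return sum(c * x ** i for i, c in enumerate(coeffs)) == 0

def derandomize(coeffs, r):
    # Derandomization: enumerate all 2^r seeds, take the majority answer.
    answers = [randomized_equals_zero(coeffs, bits)
               for bits in product([0, 1], repeat=r)]
    return sum(answers) > len(answers) // 2

# A nonzero polynomial of degree 2 has at most 2 roots among the 2^r
# test points, so the majority vote over all seeds is always correct.
print(derandomize([0, 0, 0], 3))  # → True  (the zero polynomial)
print(derandomize([1, 2, 1], 3))  # → False ((x+1)^2 is nonzero)
```

The point of pseudo-randomness is to make such enumeration feasible: with seed length r = O(log n) there are only poly(n) seeds to try.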

7 What is pseudo-randomness good for?
Succinct randomized data structures
Randomized data structures rely on randomness
Problem: randomness cannot be compressed
Solution: use pseudo-randomness instead
“Look like”: the data structures still work
Pseudo-randomness can be compressed

8 What is pseudo-randomness good for?
Tool in extremal combinatorics: reduce proving theorems to the cases of “random-looking” objects and “structured” objects
Example: Szemerédi’s theorem
For all k and ε>0, if n is large enough, then any subset of {1,…,n} of size εn contains an arithmetic progression of length k
Proof [Gowers]:
1. Define a good notion of pseudo-randomness for sets
2. Prove the theorem for pseudo-random sets (easy for random sets)
3. Non-pseudo-random sets are structured: exploit the structure

9 What is pseudo-randomness good for?
Computational complexity: a benchmark of how well we understand a computational model
Computational model: a family of functions
The following are usually in increasing order of hardness:
Find functions outside the computational model (i.e. that cannot be computed exactly)
Find functions which cannot be approximated by the computational model
Construct pseudo-random generators for the model

10 pair-wise and k-wise independence
Local independence: pair-wise and k-wise independence

11 Pair-wise independence
Family of functions H = {h: U → [m]}
If h ∈ H is chosen uniformly, then for any x ≠ y, h(x), h(y) are uniform and independent, i.e. Pr[h(x)=a, h(y)=b] = 1/m^2
Pair-wise independent bits: U = {1,…,n}, m = 2

12 Pair-wise independence: construction
Pair-wise independent bits: n = 2^m; identify [n] with {0,1}^m
Construction (all operations modulo 2): h_(a,b)(x) = <a,x> + b, where a ∈ {0,1}^m, b ∈ {0,1}
Proof: fix x ≠ y; choose h (i.e. a, b) uniformly
If z ≠ 0^m: the random variable <a,z> ∈ {0,1} is uniform
Setting z = x+y: <a,x> + <a,y> is uniform
So (<a,x>+b, <a,y>+b) ∈ {0,1}^2 is uniform
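
A minimal Python check of this construction (the family h(x) = <a,x> + b over GF(2)); the enumeration is exhaustive, so pairwise independence is verified exactly rather than sampled. Function names are mine:

```python
from itertools import product

def inner(a, x):
    # inner product <a, x> over GF(2)
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def verify_pairwise(m):
    # For every pair x != y, (h(x), h(y)) with h(x) = <a,x> + b must be
    # uniform over {0,1}^2 when (a, b) is uniform; checked exhaustively.
    points = list(product([0, 1], repeat=m))
    for x in points:
        for y in points:
            if x == y:
                continue
            counts = {}
            for a in points:
                for b in (0, 1):
                    pair = (inner(a, x) ^ b, inner(a, y) ^ b)
                    counts[pair] = counts.get(pair, 0) + 1
            # 2^(m+1) hash functions spread evenly over 4 outcomes
            if any(c != 2 ** (m + 1) // 4 for c in counts.values()):
                return False
    return True

print(verify_pairwise(3))  # → True
```

Note the economy: the family has only 2^(m+1) functions, i.e. the "seed" (a, b) is m+1 = log n + 1 bits, versus n truly random bits.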

13 Application: succinct dictionaries [Fredman-Komlós-Szemerédi’84]
Problem: dictionary with n entries, O(1) retrieval time
Simple solution: hash to O(n^2) locations
FKS: two-step hashing, using O(n) locations
Randomized procedure, so the randomness must be stored
Pair-wise independence is crucial:
Suffices to bound collisions (so the algorithms still work)
Can be represented succinctly (i.e. “compressed”)
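
A toy Python sketch of the two-level FKS idea, using the standard ((a·x+b) mod p) mod m family as the pairwise independent hashes. The 4n threshold on the sum of squared bucket sizes and all helper names are illustrative choices, not from the talk:

```python
import random

def make_hash(p, m):
    # pairwise independent hash family: h(x) = ((a*x + b) mod p) mod m
    a, b = random.randrange(1, p), random.randrange(p)
    return lambda x: ((a * x + b) % p) % m

def fks_build(keys, p=10**9 + 7):
    n = len(keys)
    # Level 1: hash into n buckets; retry until sum of squared bucket
    # sizes is O(n) (pairwise independence makes this likely).
    while True:
        h1 = make_hash(p, n)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[h1(k)].append(k)
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:
            break
    # Level 2: for each bucket of size s, a collision-free hash into
    # s^2 slots (succeeds with probability >= 1/2 per attempt).
    tables = []
    for b in buckets:
        size = max(len(b) ** 2, 1)
        while True:
            h2 = make_hash(p, size)
            slots = [None] * size
            ok = True
            for k in b:
                if slots[h2(k)] is not None:
                    ok = False
                    break
                slots[h2(k)] = k
            if ok:
                tables.append((h2, slots))
                break
    return h1, tables

def fks_lookup(struct, x):
    # O(1) retrieval: two hash evaluations and one comparison
    h1, tables = struct
    h2, slots = tables[h1(x)]
    return slots[h2(x)] == x

random.seed(0)
keys = random.sample(range(10**6), 100)
d = fks_build(keys)
assert all(fks_lookup(d, k) for k in keys)
assert not fks_lookup(d, 10**6 + 1)
```

The stored randomness is just the (a, b) pairs, which is exactly the "can be represented succinctly" point on the slide.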

14 k-wise independence
k-wise independent bits: X over {0,1}^n such that any k of the bits are uniform and independent
Powerful derandomization tool, usually for k poly-logarithmic in n
Many randomized data structures
Some general computational models (AC0, PTFs, …)
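
One classical k-wise independent construction: the evaluations of a uniformly random polynomial of degree < k over Z_p (here over an alphabet of size p rather than bits). The check below is exhaustive; names are mine:

```python
from itertools import product, combinations

p, k = 5, 3

def evals(coeffs):
    # evaluate the polynomial sum_i coeffs[i] * x^i at every x in Z_p
    return tuple(sum(c * x ** i for i, c in enumerate(coeffs)) % p
                 for x in range(p))

def verify_kwise():
    # Over a uniform choice of the k coefficients, any k of the p
    # evaluation points must carry a perfectly uniform k-tuple.
    outputs = [evals(c) for c in product(range(p), repeat=k)]
    for positions in combinations(range(p), k):
        counts = {}
        for out in outputs:
            key = tuple(out[i] for i in positions)
            counts[key] = counts.get(key, 0) + 1
        # p^k polynomials, p^k possible k-tuples: each exactly once
        # (unique interpolation through any k points)
        if len(counts) != p ** k or any(c != 1 for c in counts.values()):
            return False
    return True

print(verify_kwise())  # → True
```

The seed is k symbols of Z_p, i.e. k log p bits, versus p log p truly random bits: again an exponential saving for small k.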

15 k-wise independence AC0: constant depth circuits w. AND/OR gates
Cannot distinguish uniform random bits from polylog(n)-wise independence [Braverman’09]
[diagram: constant-depth circuit of alternating AND/OR gates over inputs x1,…,xn]

16 k-wise independence PTF: sign of real polynomials
Basic building block in machine learning: neural network (degree 1) and support vector machines (higher degrees) Cannot distinguish uniform random bits from polylog(n)-wise independence [Kane’11]

17 Future research
Which models are fooled by k-wise independence?
More complex local independence:
Designs: sequences of n bits of Hamming weight m, where any k indices “look uniform”
k-wise independent permutations

18 Further reading Pairwise independence and derandomization, by Luby and Wigderson

19 Expander graphs

20 Expander graphs Expander graphs: graphs that “look random”
Usually interested in sparse graphs Several notions of “randomness” Combinatorial: edge/vertex expansion Algebraic: eigenvalues Geometric: isoperimetric inequalities

22 Applications of expander graphs
Expander graphs are explicit instances of random-looking graphs, with numerous applications:
Reliable networks
Error reduction of randomized algorithms
Codes with linear time decoding
Metric embeddings
Derandomization of small space algorithms

23 Edge expansion
G=(V,E) d-regular graph, |V|=n
Edge boundary of S ⊆ V: ∂S = set of edges between S and V∖S
h(G) = min over nonempty S with |S| ≤ n/2 of |∂S|/|S|
G is an ε-edge expander if h(G) ≥ ε

24 Edge expansion: high connectivity
G=(V,E) d-regular graph, |V|=n
G is an ε-edge expander if h(G) ≥ ε
Thm: G is highly connected: if we remove at most δn edges, the largest connected component has ≥ (1 − δ/ε)n vertices
Proof sketch: let C_1,…,C_k be the connected components after the removal, and let S be a union of components with |S| ≤ n/2 as large as possible; every edge in ∂S was removed, so δn ≥ |∂S| ≥ h(G)·|S| ≥ ε|S|, i.e. |S| ≤ (δ/ε)n
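
Edge expansion can be computed by brute force for small graphs. This Python sketch (function names mine) contrasts the cycle, a poor expander, with the complete graph:

```python
from itertools import combinations

def edge_expansion(n, edges):
    # h(G) = min over nonempty S with |S| <= n/2 of |boundary(S)| / |S|
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            s = set(S)
            boundary = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, boundary / size)
    return best

cycle = [(i, (i + 1) % 8) for i in range(8)]    # 2-regular: poor expander
complete = list(combinations(range(8), 2))      # 7-regular: strong expander
print(edge_expansion(8, cycle))     # → 0.5  (cut off an arc of 4 vertices)
print(edge_expansion(8, complete))  # → 4.0  (any 4-set has 16 boundary edges)
```

The exponential enumeration over sets S is exactly why expansion is interesting to certify: for large graphs one turns to the spectral bounds of the next slides.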

25 Spectral expansion G=(V,E) d-regular, |V|=n A = adjacency matrix of G
Eigenvalues of A: d = λ_1 ≥ λ_2 ≥ … ≥ λ_n
Second eigenvalue: λ(G) = max(|λ_2|, |λ_n|)
G is a spectral expander if λ(G) < d
d − λ(G): the eigenvalue gap
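
A dependency-free way to estimate λ(G) for a small d-regular graph: power iteration restricted to the subspace orthogonal to the all-ones eigenvector (which carries λ_1 = d). This is a sketch; in practice one would call an eigensolver:

```python
import random, math

def second_eigenvalue(n, adj, iters=2000):
    # Estimate lambda(G) = max(|lambda_2|, |lambda_n|): repeatedly
    # project out the all-ones direction, apply A, and normalize.
    random.seed(1)
    v = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                          # kill lambda_1
        w = [sum(v[j] for j in adj[i]) for i in range(n)]  # w = A v
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = [sum(v[j] for j in adj[i]) for i in range(n)]
    return math.sqrt(sum(x * x for x in w))                # |A v| ~ lambda(G)

cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
complete = {i: [j for j in range(8) if j != i] for i in range(8)}
print(round(second_eigenvalue(8, cycle), 3))     # → 2.0 (bipartite: no gap)
print(round(second_eigenvalue(8, complete), 3))  # → 1.0 (gap d - 1 = 6)
```

The 8-cycle has λ(G) = d = 2 (it is bipartite, so λ_n = −2): no spectral gap, matching its poor edge expansion above; the complete graph has the largest possible gap.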

26 Spectral expansion: rapid mixing
G=(V,E) d-regular, |V|=n
G is a spectral expander if λ(G) < d
Thm: a random walk on G mixes fast: from any initial vertex, a random walk of O(log n) steps on the edges of G (for λ(G) ≤ d/2, say) ends at a nearly uniform vertex
In particular, the diameter of G is O(log n)

27 Spectral expansion: rapid mixing
Thm: a random walk on G mixes in O(log n) steps
Random walk matrix on G: M = A/d
Distribution on V: a vector p with p_i ≥ 0, Σ p_i = 1
Random walk: one step maps p to Mp; t steps map p to M^t p
Second eigenvalue controls convergence: ‖M^t p − u‖ ≤ (λ/d)^t, where u is the uniform distribution
Probability of a random walk ending in vertex i: (M^t p)_i = 1/n ± (λ/d)^t
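
The mixing bound can be observed directly by iterating the random walk matrix. This Python sketch uses the Petersen graph (3-regular, λ(G) = 2) as a small expander; names are mine:

```python
def walk_distribution(adj, start, steps):
    # Exact distribution of a random walk after `steps` steps on a
    # regular graph given as adjacency lists (computes M^t p).
    n = len(adj)
    p = [0.0] * n
    p[start] = 1.0
    for _ in range(steps):
        q = [0.0] * n
        for u in range(n):
            share = p[u] / len(adj[u])
            for v in adj[u]:
                q[v] += share
        p = q
    return p

# Petersen graph: outer 5-cycle, inner pentagram, matched by spokes.
outer = {i: [(i + 1) % 5, (i - 1) % 5, i + 5] for i in range(5)}
inner = {i + 5: [(i + 2) % 5 + 5, (i - 2) % 5 + 5, i] for i in range(5)}
petersen = {**outer, **inner}

for t in (1, 5, 20):
    p = walk_distribution(petersen, 0, t)
    tv = sum(abs(x - 1 / 10) for x in p) / 2   # total variation distance
    print(t, round(tv, 4))                     # distance shrinks like (2/3)^t
```

The distance to uniform decays geometrically with ratio λ/d = 2/3, so a logarithmic number of steps already suffices.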

28 Edge vs. Spectral expansion
Edge expansion: high connectivity Spectral expansion: mixing of random walk Are they equivalent? No, but highly related [Cheeger’70,…;Dodziuk’84,Alon-Milman’85,Alon’86]

29 Vertex expansion: unique neighbors
G is a unique-neighbor expander if for every set S (of size ≤ k) there exists a vertex with exactly one neighbor in S
Crucial for some applications, e.g. graph-based codes
If G is d-regular and an (α,k)-vertex expander with α > d/2, then G is a unique-neighbor expander

30 Constructions of expanders
Most constructions: spectral expanders
Algebraic: explicit constructions [Margulis’73,…]
Example (expansion via Selberg’s 3/16 theorem): vertices Z_p; 3-regular graph with edges (x,x+1), (x,x−1), (x,1/x)
Combinatorial: Zig-Zag [Reingold-Vadhan-Wigderson’02,…]
Iterative construction of expanders from smaller ones
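
A quick sanity check of this explicit construction in Python. The convention that the inverse map fixes 0 (which puts self-loops at 0, 1 and p−1) is one common choice, not necessarily the talk's:

```python
def inverse_graph(p):
    # Vertices Z_p; edges (x, x+1), (x, x-1), (x, 1/x); convention
    # 1/0 = 0, giving self-loops at 0, 1 and p-1.
    adj = {x: [] for x in range(p)}
    for x in range(p):
        adj[x].append((x + 1) % p)
        adj[x].append((x - 1) % p)
        adj[x].append(pow(x, p - 2, p) if x else 0)  # modular inverse
    return adj

adj = inverse_graph(101)
assert all(len(nbrs) == 3 for nbrs in adj.values())  # 3-regular (with loops)

# connected: depth-first search from 0 reaches every vertex
seen, stack = {0}, [0]
while stack:
    for v in adj[stack.pop()]:
        if v not in seen:
            seen.add(v)
            stack.append(v)
print(len(seen))  # → 101
```

Connectivity is easy to check; the remarkable part, that λ(G) is bounded away from 3 uniformly in p, is the deep number-theoretic content and is not visible from such local checks.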

31 Optimal spectral expanders
G: d-regular graph
Optimal spectral expanders: Ramanujan graphs, λ(G) ≤ 2√(d−1)
Alon-Boppana bound: λ(G) ≥ 2√(d−1) − o(1) for any d-regular graph
Achieved by some algebraic constructions
Are most d-regular graphs Ramanujan? Open
Theorem [Friedman]: for every ε>0 and large enough n, most d-regular graphs have λ(G) ≤ 2√(d−1) + ε

32 Future research
Spectral expanders: well understood, explicit optimal constructions
Combinatorial expansion (edge/vertex): follows to some degree from spectral expansion, but not optimally
Requires other methods for construction (e.g. Zig-Zag)
Current constructions far from optimal
Some applications require tailor-made notions of “pseudo-random graphs”

33 Further reading Expander graphs and their applications, by Hoory, Linial and Wigderson

34 Derandomizing small space computations

35 Small space computation
Model for computation with small memory, typically of logarithmic size
Application: streaming algorithms
Randomized computation: access to random bits
Reads its input from RAM and its random bits from a one-way read-only tape
Combinatorial model: branching programs

36 Branching programs Combinatorial model for small space programs
Input incorporated into the program, which reads its random bits as input
Model: layered graph
Layer = all memory configurations of the algorithm; width = 2^memory
From each vertex, two edges (labeled 0/1) go to the next layer
[diagram: layered branching program reading bits x1,…,x4, with a start vertex and accept/reject states]
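
The model is easy to make concrete. This Python sketch (names mine) evaluates a layered branching program given as per-layer transition tables, with parity as a width-2 example:

```python
from itertools import product

def eval_bp(layers, bits):
    # A layered branching program: layers[i][state] = (next0, next1)
    # is the transition of layer i on reading bit i; the width is the
    # number of states per layer (= 2^memory for the algorithm's memory).
    state = 0
    for layer, b in zip(layers, bits):
        state = layer[state][b]
    return state

# Width-2 program computing parity: the state is the parity so far.
n = 8
parity_bp = [[(0, 1), (1, 0)] for _ in range(n)]

assert all(eval_bp(parity_bp, bits) == sum(bits) % 2
           for bits in product([0, 1], repeat=n))
print("width-2 branching program computes parity of", n, "bits")
```

A width-w program retains only log w bits about the prefix it has read, which is exactly the leverage the next slides use to recycle randomness.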

37 Pseudo-randomness for branching programs
Branching programs “remember” only a small number of bits in each step Intuitively, this allows randomness to be recycled This can be made formal – main ingredient is expander graphs!

38 Pseudo-randomness for branching programs
Generator for recycling randomness for branching programs [Nisan’92, Impagliazzo-Nisan-Wigderson’94]
Main idea: build the generator recursively
G_0: generator that outputs 1 bit
G_1: generator that outputs 2 bits
...
G_i: generator that outputs 2^i bits
G_(log n): generator that outputs n bits
Main ingredient: efficient composition to get G_(i+1) from G_i

39 Composition
Assume G_i: {0,1}^(s_i) → {0,1}^(2^i) fools width-w branching programs
Want G_(i+1): {0,1}^(s_(i+1)) → {0,1}^(2^(i+1))
Trivial composition: G_(i+1)(x,y) = G_i(x) ∘ G_i(y), with x, y independent seeds
Yields: s_(i+1) = 2·s_i, i.e. seed length n: trivial…
Expander based composition [INW]: let H be a d-regular expander with vertex set {0,1}^(s_i); take (x,y) = a random edge in H and set G_(i+1)(x,y) = G_i(x) ∘ G_i(y)
Yields: s_(i+1) = s_i + log d
How good should the expander be?
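
The recursion can be sketched as follows. The base generator (one output bit), the toy 3-regular graph standing in for the expander, and all names are illustrative only:

```python
def inw(x, choices, neighbor):
    # INW-style recursion: with no choices left, output the single bit
    # of the base generator G_0; otherwise pick an expander neighbor y
    # of x and output G_i(x) || G_i(y), reusing the SAME lower-level
    # choices for both halves: this is the randomness recycling.
    if not choices:
        return [x & 1]
    y = neighbor(x, choices[-1])
    rest = choices[:-1]
    return inw(x, rest, neighbor) + inw(y, rest, neighbor)

m = 257  # toy seed space Z_m

def neighbor(x, j):
    # j-th neighbor of x in a toy 3-regular graph on Z_m; a stand-in
    # for a real expander (any d-regular expander slots in here)
    return (x + (1, -1, m // 2)[j]) % m

# seed: one base vertex plus one neighbor index per level
out = inw(5, [0, 2, 1, 0, 1], neighbor)
print(len(out))  # → 32
```

Each level adds only one neighbor index (log d bits) to the seed while doubling the output, which is exactly the s_(i+1) = s_i + log d accounting on the slide.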

40 Composition
Expander-based generator: based on d-regular expanders with second-eigenvalue ratio λ/d = ε
Seed length = O(log n · log d)
INW: to fool width poly(n), enough to take d = poly(n) (i.e. ε = 1/poly(n))
Seed length = O(log n)^2
Main challenge: get the seed length down to O(log n)
Would allow enumeration of all seeds in poly(n) time

41 Limited models Can get optimal seed length, i.e. O(log n), for various limited models, e.g. Random walks on undirected graphs […,Reingold’05] Constant space reversible computations […,Koucky-Nimbhorkar-Pudlak’10] Symmetric functions […,Gopalan-Meka-Reingold-Zuckerman’11]

42 Future research Goal: seed length O(log n) for width poly(n) (i.e. log-space computations) Interesting limited models, open: Constant space computations (non-reversible) Reversible log-space Combinatorial rectangles (derandomize “independent random variables”)

43 Summary
Pseudo-randomness: a framework
Context of “random objects”
Define a notion of pseudo-randomness via tests
This talk: local independence, expander graphs
Applications:
Derandomization / compression of randomness
This talk: small space algorithms
Extremal combinatorics

44 THANK YOU

