
1 Variance Reduction via Lattice Rules
By Pierre L’Ecuyer and Christiane Lemieux
Presented by Yanzhi Li

2 Outline
Motivation
Lattice rules
Functional ANOVA decomposition
Lattice selection criterion
Random shifts
Examples
Conclusions

3 Motivation - MC
μ = E[f(U)] = ∫_[0,1)^t f(u) du, where U is a t-dimensional vector of i.i.d. Unif(0,1) random variables.
Monte Carlo method (MC): sample n points u_0, …, u_{n-1} uniformly in [0,1)^t and estimate μ by Q_n = (1/n) Σ_{i=0}^{n-1} f(u_i).
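The plain MC estimator on this slide can be sketched in a few lines of NumPy. The integrand f(u) = Π_k 2u_k is an illustrative choice (not from the talk), picked because its exact integral over [0,1)^t is 1:

```python
import numpy as np

def mc_estimate(f, t, n, rng=None):
    """Plain Monte Carlo: average f over n i.i.d. uniform points in [0,1)^t."""
    rng = np.random.default_rng() if rng is None else rng
    points = rng.random((n, t))   # n i.i.d. Unif[0,1)^t vectors
    return f(points).mean()       # Q_n = (1/n) * sum of f(u_i)

# Illustrative integrand: f(u) = prod_k 2*u_k, exact integral = 1.
f = lambda u: np.prod(2.0 * u, axis=1)
est = mc_estimate(f, t=5, n=100_000, rng=np.random.default_rng(0))
```

With n = 100,000 the estimate lands close to the true value 1, with the O(n^{-1/2}) error discussed on the next slide.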

4 Motivation - MC
MC gives a convergence rate of O(n^{-1/2}) for the error.
MC performs worse for larger t: a huge number of points is required to sample [0,1)^t evenly.
Can we do better?
Quasi-Monte Carlo (QMC): construct the point set P_n more evenly and use a relatively small number of points.

5 Given a fixed number of points, which one is better?
[Figure: an MC point set vs. a QMC point set in [0,1)^2.]

6 Motivation - QMC
D(P_n): a measure of the non-evenness of P_n, the discrepancy between P_n and the uniform distribution.
Koksma-Hlawka bound: |Q_n - μ| ≤ V(f) D*(P_n) = V(f) O(n^{-1} (ln n)^t), where V(f) is the total variation of f and D*(P_n) is the rectangular star discrepancy
D*(P_n) = sup_J |card(P_n ∩ J)/n - vol(J)|, the supremum taken over rectangular boxes J anchored at the origin.
QMC performs better than MC asymptotically.

7 Motivation - QMC
Drawbacks:
For larger t, the convergence rate beats that of MC only for impractically large values of n.
D*(P_n) is difficult to compute.
The bound is very loose for typical functions.
Good news:
Low-discrepancy point sets seem to effectively reduce the integration error, even for larger t.

8 Motivation - Questions
Why does QMC perform better than MC empirically?
Fourier expansion
Traditional error bound
Variance reduction viewpoint
How to select quasi-random point sets P_n? In particular, how to select an integration lattice?

9 Lattice Rules
(Integration) lattice L_t: a discrete subset of the real space R^t, closed under addition and subtraction, that contains Z^t.
Dual lattice: L_t* = {h ∈ R^t : h·u ∈ Z for all u ∈ L_t}.
Lattice rule: an integration method that approximates μ by Q_n using the node set P_n = L_t ∩ [0,1)^t.

10 A lattice point set is not necessarily well distributed.

11 Lattice Rules - Integration error
Fourier expansion of f: f(u) = Σ_{h ∈ Z^t} f̂(h) e^{2πi h·u}, with coefficients f̂(h) = ∫_[0,1)^t f(u) e^{-2πi h·u} du.
For a lattice point set, the error is Q_n - μ = Σ_{0 ≠ h ∈ L_t*} f̂(h): only the Fourier coefficients over the nonzero dual-lattice vectors contribute.

12 Functional ANOVA Decomposition
Writes f(u) as a sum of orthogonal functions: f(u) = Σ_{I ⊆ {1,…,t}} f_I(u), where f_I depends only on the coordinates in I.
The variance σ² decomposes accordingly: σ² = Σ_{∅ ≠ I ⊆ {1,…,t}} σ_I², where σ_I² = Var[f_I(U)].

13 Functional ANOVA Decomposition
The best mean-square approximation of f(·) by a sum of functions of at most d coordinates is Σ_{|I| ≤ d} f_I(·).
f has a low effective dimension in the superposition sense when this approximation is good for small d, which is frequently the case.
This suggests that the point set P_n should be chosen on the basis of the quality of the distribution of the points over the subsets I deemed important.
When |I| is small it is possible to make sure the projection P_n(I) covers the subspace very well.

14 Lattice Selection Criterion
It is desirable that P_n be (for a rank-1 lattice):
Fully projection-regular, i.e., for any non-empty I ⊆ {1,…,t}, P_n(I) contains as many distinct points as P_n;
Or dimension-stationary, i.e., P_n({i_1,…,i_d}) = P_n({i_1+j,…,i_d+j}) for all i_1,…,i_d and j.
One fully projection-regular example (a Korobov rule) is P_n = {(j/n) v mod 1 : 0 ≤ j < n} for v = (1, a, a², …, a^{t-1}), where a is an integer with 0 < a < n and gcd(a, n) = 1.
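The Korobov construction on this slide is easy to generate directly. A minimal sketch (the parameters n = 101, a = 12 are illustrative, not recommended values from the paper):

```python
import math
import numpy as np

def korobov_lattice(n, t, a):
    """Rank-1 Korobov point set P_n = {(j/n)*(1, a, ..., a^(t-1)) mod 1 : 0 <= j < n}."""
    assert 0 < a < n and math.gcd(a, n) == 1
    v = np.array([pow(a, k, n) for k in range(t)])  # generating vector, reduced mod n
    j = np.arange(n).reshape(-1, 1)
    return (j * v % n) / n                          # same as (j/n)*v mod 1

P = korobov_lattice(n=101, t=4, a=12)
# Because gcd(a, n) = 1, each 1-D projection is a permutation of
# {0, 1/n, ..., (n-1)/n}, so every projection keeps n distinct points.
```

This is why the condition gcd(a, n) = 1 gives full projection-regularity: multiplication by a^k mod n is a bijection on {0, …, n-1}, so no two points collapse in any coordinate.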

15 Lattice Selection Criterion
L_t(I) is a lattice ⇒ its points are contained in families of equidistant parallel hyperplanes.
Choose the family whose hyperplanes are farthest apart and let d_t(I) be the distance between hyperplanes.
d_t(I) = 1/ℓ_I, where ℓ_I is the Euclidean length of the shortest nonzero vector in the dual lattice L_t*(I); ℓ_I has a tight upper bound ℓ_d*(n) = c_d n^{1/d}, where d = |I|.
Define the figure of merit ℓ_I / ℓ_d*(n) so that we can compare the quality of projections of different dimensions.

16 Lattice Selection Criterion
Minimizing d_t(I) ⇔ maximizing ℓ_I / ℓ_d*(n).
Worst-case figure of merit: for arbitrary d ≥ 1 and t_1 ≥ … ≥ t_d ≥ d, define M_{t_1,…,t_d} as the minimum of the normalized figures of merit ℓ_I / ℓ_{|I|}*(n) over the selected index sets I.
The criterion takes into account the projections over s successive dimensions for all s ≤ t_1, and over sets of up to d non-successive dimensions that are not too far apart.

17 [Figure slide; no transcript text.]

18 Random Shifts
When P_n is deterministic, the integration error is also deterministic and hard to estimate.
To estimate the error, we use independent random shifts:
Generate a r.v. U ~ Unif[0,1)^t and replace each u_i by u_i' = (u_i + U) mod 1.
Let P_n' = {u_0', …, u_{n-1}'} and Q_n' = (1/n) Σ_{i=0}^{n-1} f(u_i').
Repeat this m times, independently, with the same P_n, thus obtaining m i.i.d. copies of Q_n', denoted X_1, …, X_m.
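The randomized procedure on this slide can be sketched as follows; the Korobov parameters (n = 101, a = 12) and the test integrand are illustrative assumptions, not choices from the paper:

```python
import numpy as np

def shifted_lattice_estimates(f, P, m, rng=None):
    """Apply m independent random shifts to the same point set P (an n-by-t array)."""
    rng = np.random.default_rng() if rng is None else rng
    n, t = P.shape
    X = np.empty(m)
    for i in range(m):
        U = rng.random(t)                # one shift U ~ Unif[0,1)^t per replicate
        X[i] = f((P + U) % 1.0).mean()   # Q_n' for this shifted copy of P_n
    return X

# Small Korobov lattice (illustrative parameters).
n, t, a = 101, 4, 12
v = np.array([pow(a, k, n) for k in range(t)])
P = (np.arange(n)[:, None] * v % n) / n

f = lambda u: np.prod(2.0 * u, axis=1)   # exact integral over [0,1)^4 is 1
X = shifted_lattice_estimates(f, P, m=100, rng=np.random.default_rng(1))
est = X.mean()                           # unbiased estimate of mu
se = X.std(ddof=1) / np.sqrt(len(X))     # standard error from the m i.i.d. copies
```

The m copies X_1, …, X_m are i.i.d., so their sample mean and sample variance give both the estimate and an error estimate, which deterministic QMC alone cannot provide.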

19 Random Shifts
Let X̄ = (1/m) Σ_{j=1}^m X_j and s_X² = (1/(m-1)) Σ_{j=1}^m (X_j - X̄)².
We have E[X̄] = μ and E[s_X²] = Var[Q_n'] = m Var[X̄], so the estimator is unbiased and its variance can be estimated from the replicates.
If σ² < ∞, with the MC method, Var[Q_n] = σ²/n.
For a randomly shifted lattice rule, Var[Q_n'] = Σ_{0 ≠ h ∈ L_t*} |f̂(h)|².

20 Random Shifts
Since L_t* contains exactly 1/n of the points of Z^t, the randomly shifted lattice rule reduces the variance compared with MC ⇔ the “average” squared Fourier coefficients are smaller over L_t* than over Z^t, which is true for typical well-behaved functions.
The previous selection criterion also aims to avoid having small vectors h in the dual lattice L_t* for the sets I deemed important, since |f̂(h)| is typically largest for small h.

21 Example: Stochastic activity network
Each arc k, 1 ≤ k ≤ N(A), is an activity with random duration ~ F_k(·).
Estimate μ = P[T ≤ x], where T is the project completion time (the length of the longest path).
Generate N(A) Unif[0,1) r.v.s (one per arc, by inversion); N(P) is the number of paths.
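A minimal sketch of the MC version of this estimator, using a tiny hypothetical network rather than the one in the paper: three arcs 1→2, 2→3, 1→3 (so the two paths are {1,2} and {3}), exponential durations with assumed rates, and one uniform per arc converted by inversion:

```python
import numpy as np

def san_indicator(u, x=3.0):
    """u: (n, 3) uniforms, one per arc; durations via inversion D_k = F_k^{-1}(u_k)."""
    rates = np.array([1.0, 1.5, 0.5])           # assumed rates, one per arc
    d = -np.log1p(-u) / rates                   # exponential durations by inversion
    T = np.maximum(d[:, 0] + d[:, 1], d[:, 2])  # completion time = longest path
    return (T <= x).astype(float)               # indicator of {T <= x}

rng = np.random.default_rng(2)
u = rng.random((200_000, 3))    # one Unif[0,1) r.v. per arc, per replication
mu_hat = san_indicator(u).mean()
```

Replacing the i.i.d. uniforms `u` by a randomly shifted lattice point set (as on slides 18-19) turns this into the LR estimator compared in the next slide's table.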

22 Estimated Variance Reduction Factors w.r.t. MC
MC: Monte Carlo; LR: randomly shifted Lattice Rule; CMC: Conditional Monte Carlo.
t: dimension of the integration; n: number of points in P_n; m = 100.
[Table of variance reduction factors not included in the transcript.]

23 Conclusions
Explains the success of QMC via variance reduction instead of the traditional discrepancy measure.
Proposes a new way of generating lattices and choosing parameters, paying more attention to the important subspaces.
Things we don’t cover:
Rules of higher rank
Polynomial lattice rules
“Massaging” the problem

24 Variance Reduction via Lattice Rules
Thank you! Q&A

