Recent Progress On Sampling Problem


1 Recent Progress On Sampling Problem
Yin Tat Lee (MSR/UW), Santosh Vempala (Gatech)

2 My Dream
Tell the complexity of a convex problem by looking at the formula.
Example, Minimum Cost Flow Problem: this is a linear program in which each row has two non-zeros. It can be solved in $\tilde{O}(m\sqrt{n})$ time [LS14]. (Previous: $\tilde{O}(m\sqrt{m})$, for a graph with $m$ edges and $n$ vertices.)

3 My Dream
Tell the complexity of a convex problem by looking at the formula.
Example, Submodular Minimization: minimize $f(S)$ over $S \subseteq \{1,\dots,n\}$, where $f$ satisfies diminishing returns, i.e. $f(S \cup \{e\}) - f(S) \le f(T \cup \{e\}) - f(T)$ for all $T \subset S$, $e \notin S$.
$f$ can be extended to a convex function on $[0,1]^n$, and a subgradient of $f$ can be computed in $n^2$ time.
It can be solved in $\tilde{O}(n^3)$ [LSW15]. (Previous: $\tilde{O}(n^5)$.)
Fundamental in combinatorial optimization; worth $\ge 2$ Fulkerson prizes.

4 Algorithmic Convex Geometry
To describe a formula, we need some operations. Given a convex set $K$, we have the following operations:
- Membership($x$): check if $x \in K$.
- Separation($x$): assert $x \in K$, or find a hyperplane separating $x$ from $K$.
- Width($c$): compute $\min_{x \in K} c^T x$.
- Optimize($c$): compute $\arg\min_{x \in K} c^T x$.
- Sample($g$): sample according to $g(x) \cdot 1_K$ (assume $g$ is logconcave).
- Integrate($g$): compute $\int_K g(x)\,dx$ (assume $g$ is logconcave).
Theorem: they are all equivalent via polynomial-time algorithms.
One of the major sources of polynomial-time algorithms!

5 Algorithmic Convex Geometry
Traditionally viewed as impractical; now we have an efficient version of the ellipsoid method.
Why these operations? For any convex $f$, define the dual $f^*(c) = \max_x c^T x - f(x)$, and let $\ell_K$ be the convex indicator of $K$: $\ell_K(x) = 0$ if $x \in K$ and $+\infty$ otherwise. Then:
- Membership: evaluate $\ell_K(x)$; Width: evaluate $\ell_K^*(c)$.
- Separation: compute $\partial \ell_K(x)$; Optimization: compute $\partial \ell_K^*(c)$.
- Integration: compute $\int_K g(x)\,dx$; Sample: sample $\sim e^{-g} \cdot 1_K$.
The first four are convex optimization. Progress: we are getting the tight polynomial equivalence between them. Today's focus: sampling and integration.

6 Problem: Sampling
Input: a convex set $K$.
Output: sample a point from the uniform distribution on $K$.
Generalized problem: input a logconcave distribution $f$; output a point sampled according to $f$.
Why? Useful for optimization, integration/counting, learning, rounding.
It is the best known way to minimize a convex function with a noisy value oracle, and the only known way to compute the volume of a convex set.

7 Non-trivial application: Convex Bandit
Game: for each round $t = 1, 2, \dots, T$:
- The adversary selects a convex loss function $\ell_t$.
- The player chooses (possibly randomly) $x_t$ from the unit ball in $n$ dimensions, based on past observations.
- The player receives the loss/observation $\ell_t(x_t) \in [0,1]$. Nothing else about $\ell_t$ is revealed!
Measure performance by regret: $R_T = \sum_{t=1}^T \ell_t(x_t) - \min_x \sum_{t=1}^T \ell_t(x)$.
There is a good fixed action, but we only learn one point each iteration, and the adversary can give confusing information!
(Sébastien Bubeck, Ronen Eldan)
The gold standard is getting $\tilde{O}(\sqrt{T})$ regret; namely, $\sqrt{T}$ is better than $T^{2/3}$.

8 Non-trivial application: Convex Bandit
Game: for each round $t = 1, 2, \dots, T$:
- The adversary selects a convex loss function $\ell_t$.
- The player chooses (possibly randomly) $x_t$ from the unit ball in $n$ dimensions, based on past observations.
- The player receives the loss/observation $\ell_t(x_t) \in [0,1]$. Nothing else about $\ell_t$ is revealed!
Measure performance by regret: $R_T = \sum_{t=1}^T \ell_t(x_t) - \min_x \sum_{t=1}^T \ell_t(x)$.
After a decade of research, we have $R_T = \mathrm{poly}(n)\sqrt{T}$: the first polynomial-time, $\sqrt{T}$-regret algorithm.
(Sébastien Bubeck, Ronen Eldan)
The gold standard is getting $\tilde{O}(\sqrt{T})$ regret; namely, $\sqrt{T}$ is better than $T^{2/3}$.

9 How to Input the Set
Oracle Setting:
- A membership oracle: answers YES/NO to "$x \in K$?".
- A ball $x_0 + rB$ such that $x_0 + rB \subset K \subset x_0 + \mathrm{poly}(n)\, rB$.
Explicit Setting:
- The set is given explicitly, such as polytopes, spectrahedra, ...
In this talk, we focus on the polytope $\{Ax \ge b\}$ ($m$ = number of constraints).
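The oracle setting above can be sketched in code. A minimal sketch (names are illustrative, not from the talk): wrap an explicit polytope $\{Ax \ge b\}$ behind a YES/NO membership oracle.

```python
import numpy as np

def make_membership_oracle(A, b):
    """Return a YES/NO membership oracle for the polytope {x : Ax >= b}."""
    def member(x):
        return bool(np.all(A @ x >= b))
    return member

# The unit square [0,1]^2 written as Ax >= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([0.0, -1.0, 0.0, -1.0])
member = make_membership_oracle(A, b)
print(member(np.array([0.5, 0.5])))  # True: inside the square
print(member(np.array([1.5, 0.5])))  # False: violates x_1 <= 1
```

An oracle algorithm only ever calls `member`; an explicit-setting algorithm may inspect `A` and `b` directly.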

10 Outline
Oracle Setting:
- Introduce the ball walk
- KLS conjecture and its related conjectures
- Main result
Explicit Setting (originally promised talk):
- Introduce the geodesic walk
- Bound the number of iterations
- Bound the cost per iteration

11 Sampling Problem
Input: a convex set $K$ with a membership oracle.
Output: sample a point from the uniform distribution on $K$.
Conjectured lower bound: $n^2$.
Generalized problem: given a logconcave distribution $p$, sample $x$ from $p$.

12 Conjectured Optimal Algorithm: Ball Walk
At π‘₯, pick random 𝑦 from π‘₯+𝛿 𝐡 𝑛 , if 𝑦 is in 𝐾, go to 𝑦. otherwise, sample again This walk may get trapped on one side if the set is not convex.

13 Isoperimetric constant
For any set $K$, we define the isoperimetric constant $\phi_K$ by
$\phi_K = \min_S \frac{\mathrm{Area}(\partial S)}{\min(\mathrm{vol}(S), \mathrm{vol}(S^c))}$.
Theorem: given a random point in $K$, we can generate another in $O\!\left(\frac{n}{\delta^2 \phi_K^2} \log(1/\varepsilon)\right)$ iterations of the ball walk, where $\delta$ is the step size.
The larger $\phi_K$ or $\delta$, the better the mixing; but $\delta$ cannot be too large, otherwise the failure probability is $\approx 1$.
($\phi$ large: hard to cut the set. $\phi$ small: easy to cut the set.)

14 Isoperimetric constant of Convex Set
Note that πœ™ 𝐾 is not affine invariant and can be arbitrary small. However, you can renormalize 𝐾 such that Cov 𝐾 =𝐼. Definition: 𝐾 is isotropic, if it is mean 0 and Cov 𝐾 =𝐼. Theorem: If 𝛿< 𝑛 , ball walk stays inside the set with constant probability. Theorem: Given a random point in isotropic 𝐾, we can generate another in 𝑂( 𝑛 2 πœ™ 𝐾 2 log(1/πœ€)) To make body isotropic, we can sample the body to compute covariance. L 1 πœ™ 𝐾 =1/𝐿.

15 KLS Conjecture
Kannan-Lovász-Simonovits Conjecture: for any isotropic convex $K$, $\phi_K = \Omega(1)$.
If this is true, the ball walk takes $O(n^2)$ iterations for isotropic $K$ (matching the believed information-theoretic lower bound).
To get the "tight" reduction from membership to sampling, it suffices to prove the KLS conjecture.

16 KLS conjecture and its related conjectures
Slicing Conjecture: any unit-volume convex set $K$ has a slice with volume $\Omega(1)$.
Thin-Shell Conjecture: for isotropic convex $K$, $\mathbb{E}(\|x\| - \sqrt{n})^2 = O(1)$.
Generalized Lévy concentration: for a logconcave distribution $p$ and 1-Lipschitz $f$ with $\mathbb{E}f = 0$, $\mathbb{P}(|f(x) - \mathbb{E}f| > t) = \exp(-\Omega(t))$.
Essentially, it is asking whether all convex sets look like ellipsoids.

17 Main Result
What if we cut the body by spheres only? Define the thin-shell constant $\sigma_K := \sqrt{n/\mathrm{Var}(\|X\|^2)}$; then $\sigma_K \ge \phi_K$.
- [Lovász-Simonovits 93] $\phi = \Omega(1)\, n^{-1/2}$.
- [Klartag 2006] $\sigma = \Omega(1)\, n^{-1/2} \log^{1/2} n$.
- [Fleury, Guédon, Paouris 2006] $\sigma = \Omega(1)\, n^{-1/2} \log^{1/6} n\, \log^{-2}\log n$.
- [Klartag 2006] $\sigma = \Omega(1)\, n^{-0.4}$.
- [Fleury 2010] $\sigma = \Omega(1)\, n^{-3/8}$.
- [Guédon, Milman 2010] $\sigma = \Omega(1)\, n^{-1/3}$.
- [Eldan 2012] $\phi = \tilde{\Omega}(1)\, \sigma$, hence $\phi = \tilde{\Omega}(1)\, n^{-1/3}$.
- [Lee, Vempala 2016] $\phi = \Omega(1)\, n^{-1/4}$.
In particular, we have $\tilde{O}(n^{2.5})$ mixing for the ball walk.
Do you know a better way to bound the mixing time of the ball walk?

18 Outline
Oracle Setting:
- Introduce the ball walk
- KLS conjecture and its related conjectures
- Main result
Explicit Setting:
- Introduce the geodesic walk
- Bound the number of iterations
- Bound the cost per iteration

19 Problem: Sampling
Input: a polytope $\{Ax \ge b\}$ with $m$ constraints and $n$ variables.
Output: sample a point from the uniform distribution on $K$.

Walk (polytopes) | Iterations | Time/Iter
KN09 Dikin walk | $mn$ | $m n^{1.38}$
LV16 Ball walk | $n^{2.5}$ | $mn$
LV16 Geodesic walk | $m n^{0.75}$ | $m n^{1.38}$

First sub-quadratic algorithm. ($m n^{1.38}$ is the cost of matrix inversion.)

20 How does nature mix particles?
Brownian motion! It works for sampling on $\mathbb{R}^n$; however, a convex set has a boundary.
Option 1: reflect when you hit the boundary. However, it needs tiny steps for discretization.
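Option 1 can be sketched directly. A minimal demo of discretized, reflected Brownian motion on the cube $[0,1]^n$ (the cube and the parameters are demo assumptions); note the step size must be tiny for the reflection to track the continuous dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)

def reflect(x):
    # fold coordinates back into [0,1] by mirror reflection at each wall
    x = np.abs(x)                # reflect at 0
    x = 1.0 - np.abs(1.0 - x)    # reflect at 1
    return x

def reflected_bm(n, h, steps):
    x = np.full(n, 0.5)
    for _ in range(steps):
        # Gaussian increment of variance h, then fold back into the cube
        x = reflect(x + np.sqrt(h) * rng.standard_normal(n))
    return x

x = reflected_bm(n=4, h=1e-3, steps=2000)
print(np.all((0.0 <= x) & (x <= 1.0)))  # True: the walk stays in the cube
```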

21 How does nature mix particles?
Brownian motion! It works for sampling on $\mathbb{R}^n$; however, a convex set has a boundary.
Option 2: remove the boundary by blowing up. However, this requires explicit polytopes.

22 Blowing Up?
Original polytope: the uniform distribution on $[0,1]$.
After blow-up: a non-uniform distribution on the real line.
The distortion makes the hard constraint become "soft".

23 Enter Riemannian manifolds
An $n$-dimensional manifold $M$ is an $n$-dimensional surface. Each point $p$ has a tangent space $T_p M$ of dimension $n$, the local linear approximation of $M$ at $p$; tangents of curves in $M$ lie in $T_p M$. The inner product in $T_p M$ depends on $p$: $\langle u, v \rangle_p$.
Informally, you can think of it as assigning a unit ball to every point.

24 Enter Riemannian manifolds
Each point $p$ has a linear tangent space $T_p M$. The inner product in $T_p M$ depends on $p$: $\langle u, v \rangle_p$.
The length of a curve $c: [0,1] \to M$ is $L(c) = \int_0^1 \left\| \tfrac{d}{dt} c(t) \right\|_{c(t)} dt$.
The distance $d(x,y)$ is the infimum of lengths over all paths in $M$ between $x$ and $y$.

25 "Generalized" Ball Walk
At $x$, pick a random $y$ from $D_x$, where $D_x = \{y : d(x,y) \le 1\}$.

26 Hessian Manifold
Hessian manifold: a subset of $\mathbb{R}^n$ with inner product defined by $\langle u, v \rangle_p = u^T \nabla^2 \phi(p)\, v$.
For a polytope $\{a_i^T x \ge b_i\ \forall i\}$, we use the log barrier function $\phi(x) = \sum_{i=1}^m \log\!\left(\frac{1}{s_i(x)}\right)$, where $s_i(x) = a_i^T x - b_i$ is the distance from $x$ to constraint $i$.
$\langle \cdot, \cdot \rangle_p$ blows up when $x$ is close to the boundary, so our walk is slower near the boundary.
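The log-barrier metric is easy to compute explicitly: $\nabla^2 \phi(x) = \sum_i a_i a_i^T / s_i(x)^2$. A small numerical check of the blow-up near the boundary, on the interval $[0,1]$ viewed as a 1-D polytope (a demo choice):

```python
import numpy as np

def barrier_hessian(A, b, x):
    """Hessian of phi(x) = sum_i log(1/s_i(x)) for the polytope {Ax >= b}."""
    s = A @ x - b                            # slacks s_i(x) = a_i^T x - b_i
    return A.T @ np.diag(1.0 / s**2) @ A     # sum_i a_i a_i^T / s_i(x)^2

# The interval [0,1]: constraints x >= 0 and -x >= -1.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])

H_mid = barrier_hessian(A, b, np.array([0.5]))    # 1/0.25 + 1/0.25 = 8
H_edge = barrier_hessian(A, b, np.array([0.01]))  # ~10001: metric blows up
print(H_mid[0, 0], H_edge[0, 0])
```

The local norm $\|u\|_x^2 = u^T H(x) u$ is what makes unit-distance steps shrink near the boundary.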

27 Suggested Algorithm
At $x$, pick a random $y$ from $D_x$, where $D_x = \{y : d(x,y) \le 1\}$ is induced by the log barrier. ($D_x$ is called the Dikin ellipsoid.)
Doesn't work! The walk converges to the boundary, since the volume of the "boundary" is $+\infty$ in this metric.
(Figures: the original polytope, the corresponding Hessian manifold, and a random walk on the real line.)

28 Getting Uniform Distribution
Lemma: if $p(x \to y) = p(y \to x)$, then the stationary distribution is uniform.
To make a Markov chain $p$ symmetric, we use $p'(x \to y) = \min(p(x \to y), p(y \to x))$ for $x \ne y$, with the leftover probability kept at $x$.
To implement it, sample $y$ according to $p(x \to \cdot)$: if $p(x \to y) < p(y \to x)$, go to $y$; else, go to $y$ with probability $p(y \to x)/p(x \to y)$, and stay at $x$ otherwise.
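The symmetrization above is the Metropolis filter. A toy sketch on a 3-state chain (the non-symmetric proposal kernel `P` is invented for illustration): after filtering, the stationary distribution is uniform.

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary non-symmetric proposal kernel on 3 states (rows sum to 1).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.6, 0.3]])

def step(x):
    y = rng.choice(3, p=P[x])
    # accept with probability min(1, P[y,x]/P[x,y]); otherwise stay at x
    if P[y, x] >= P[x, y] or rng.random() < P[y, x] / P[x, y]:
        return y
    return x

counts = np.zeros(3)
x = 0
for _ in range(60000):
    x = step(x)
    counts[x] += 1
freqs = counts / counts.sum()
print(freqs)  # each entry approaches 1/3
```

The effective kernel for $x \ne y$ is $\min(P[x,y], P[y,x])$, which is symmetric, so the lemma applies.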

29 Dikin Walk
At $x$, pick a random $y$ from $D_x$:
- if $x \notin D_y$, reject $y$;
- else, accept $y$ with probability $\min\!\left(1, \frac{\mathrm{vol}(D_x)}{\mathrm{vol}(D_y)}\right)$.
[KN09] proved it takes $\tilde{O}(mn)$ steps, better than the previous best $\tilde{O}(n^{2.5})$ for the oracle setting.
(Figure from KN09.)
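The Dikin walk is concrete enough to sketch end to end. A minimal version on the unit square (demo assumptions: ellipsoid radius $r = 0.5$ and the square as the polytope); it uses $\mathrm{vol}(D_x) \propto \det H(x)^{-1/2}$, so the acceptance ratio is $\sqrt{\det H(y)/\det H(x)}$:

```python
import numpy as np

rng = np.random.default_rng(4)

def hessian(A, b, x):
    s = A @ x - b
    return A.T @ ((1.0 / s**2)[:, None] * A)

def in_ellipsoid(A, b, c, y, r):
    d = y - c
    return d @ hessian(A, b, c) @ d <= r * r

def dikin_step(A, b, x, r=0.5):
    H = hessian(A, b, x)
    n = len(x)
    u = rng.standard_normal(n)
    u = u / np.linalg.norm(u) * rng.random() ** (1.0 / n) * r  # uniform in r-ball
    y = x + np.linalg.solve(np.linalg.cholesky(H).T, u)        # map ball -> D_x
    if not in_ellipsoid(A, b, y, x, r):                        # x not in D_y: reject
        return x
    ratio = np.sqrt(np.linalg.det(hessian(A, b, y)) / np.linalg.det(H))
    return y if rng.random() < min(1.0, ratio) else x

# Unit square.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([0.0, -1.0, 0.0, -1.0])
x = np.array([0.5, 0.5])
for _ in range(500):
    x = dikin_step(A, b, x)
print(np.all(A @ x > b))  # True: the Dikin ellipsoid is contained in K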

30 Dikin Walk and its Limitation
At $x$, pick a random $y$ from $D_x$: if $x \notin D_y$, reject $y$; else, accept $y$ with probability $\min\!\left(1, \frac{\mathrm{vol}(D_x)}{\mathrm{vol}(D_y)}\right)$.
The Dikin ellipsoid is fully contained in $K$.
Idea: pick the next step $y$ from a blown-up Dikin ellipsoid. We can afford to blow up by $\sim \sqrt{n/\log m}$, and w.h.p. $y \in K$.
But in high dimension, the volume of $D_x$ is not that smooth (worst case: $[0,1]^n$), and any larger step makes the success probability exponentially small!
$[0,1]^n$ is the worst case for the ball walk, hit-and-run, and the Dikin walk.

31 Going back to Brownian Motion
The walk is not symmetric in the "space": there is a tendency of going to the center.
Taking the step size to 0, the Dikin walk becomes a stochastic differential equation
$dx_t = \mu(x_t)\,dt + \sigma(x_t)\,dW_t$,
where $\sigma(x_t) = (\phi''(x_t))^{-1/2}$ and $\mu(x_t)$ is the drift towards the center.
(Figures: the original polytope and the corresponding Hessian manifold.)

32 What is the drift? Fokker-Planck equation
The probability distribution of the SDE $dx_t = \mu(x_t)\,dt + \sigma(x_t)\,dW_t$ is given by
$\frac{\partial p}{\partial t}(x,t) = -\frac{\partial}{\partial x}\left(\mu(x)\, p(x,t)\right) + \frac{1}{2}\frac{\partial^2}{\partial x^2}\left(\sigma^2(x)\, p(x,t)\right)$.
To make the stationary distribution constant, we need
$-\frac{\partial}{\partial x}\mu(x) + \frac{1}{2}\frac{\partial^2}{\partial x^2}\sigma^2(x) = 0$.
Hence, we have $\mu(x) = \sigma(x)\sigma'(x)$.
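The identity $\mu = \sigma\sigma'$ makes the stationarity condition hold pointwise, since $\frac{1}{2}(\sigma^2)' = \sigma\sigma'$. A quick finite-difference check of $-\mu' + \frac{1}{2}(\sigma^2)'' = 0$ for an arbitrary smooth test $\sigma$ (the sine below is an invented example, not from the talk):

```python
import numpy as np

def sigma(x):
    return 1.0 + 0.5 * np.sin(x)   # arbitrary smooth positive test function

def mu(x, eps=1e-6):
    ds = (sigma(x + eps) - sigma(x - eps)) / (2 * eps)   # sigma'
    return sigma(x) * ds                                 # mu = sigma * sigma'

def residual(x, eps=1e-4):
    # stationarity condition with p = const: -(mu)' + (1/2)(sigma^2)''
    dmu = (mu(x + eps) - mu(x - eps)) / (2 * eps)
    s2 = lambda t: sigma(t) ** 2
    d2s2 = (s2(x + eps) - 2 * s2(x) + s2(x - eps)) / eps**2
    return -dmu + 0.5 * d2s2

xs = np.linspace(-2.0, 2.0, 9)
print(np.max(np.abs(residual(xs))))  # ~0 up to finite-difference error
```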

33 A New Walk
A new walk: $x_{t+h} = x_t + h \cdot \mu(x_t) + \sigma(x_t) W$ with $W \sim N(0, hI)$.
It doesn't make sense: the step is a straight line in the ambient coordinates, ignoring the manifold.

34 Exponential Map
The exponential map $\exp_p : T_p M \to M$ is defined as $\exp_p(v) = \gamma_v(1)$, where $\gamma_v$ is the unique geodesic (shortest path) from $p$ with initial velocity $v$.

35 Geodesic Walk
A new walk: $x_{t+h} = \exp_{x_t}\!\left(\frac{h}{2}\mu(x_t) + \sigma(x_t) W\right)$ with $W \sim N(0, hI)$.
However, this walk has discretization error, so we apply a Metropolis filter afterwards. Since our walk is complicated, the filter is super complicated.
Any way to avoid using the filter?

36 Outline
Oracle Setting:
- Introduce the ball walk
- KLS conjecture and its related conjectures
- Main result
Explicit Setting (originally promised talk):
- Introduce the geodesic walk
- Bound the number of iterations
- Bound the cost per iteration

37 Geodesic Walk
A new walk: $x_{t+h} = \exp_{x_t}\!\left(\frac{h}{2}\mu(x_t) + W\right)$ with $W \sim N(0, hI)$.
A geodesic is better than a "straight line": it extends infinitely, and it gives a massive cancellation.

38 Key Lemma 1: Provable Long Geodesic
A straight line is defined only until it hits the boundary; a geodesic is defined for all time.
Thm [LV16]: for the manifold induced by the log barrier, a random geodesic $\gamma$ starting from $x$ satisfies
$|a_i^T \gamma'(t)| \le O(n^{-1/4}) (a_i^T x - b_i)$ for $0 \le t \le \tilde{O}(n^{1/4})$.
Namely, the geodesic is well behaved for a long time.
Remark: if the central path in IPM had this property, we would have an $m^{5/4}$-time algorithm for MaxFlow!

39 Key Lemma 2: Massive Cancellation
Consider an SDE on the 1-dimensional real line (NOT a manifold): $dx_t = \mu(x_t)\,dt + \sigma(x_t)\,dW_t$.
How good is the "Euler method", namely $x_0 + h\mu(x_0) + \sqrt{h}\,\sigma(x_0) W$?
By "Taylor" expansion, we have
$x_h = x_0 + h\mu(x_0) + \sqrt{h}\,\sigma(x_0)W + \frac{h}{2}\sigma'(x_0)\sigma(x_0)(W^2 - 1) + O(h^{1.5})$.
If $\sigma'(x_0) \ne 0$, the error is $O(h)$. If $\sigma'(x_0) = 0$, the error is $O(h^{1.5})$.
For the geodesic walk, $\sigma'(x_0) = 0$ (Christoffel symbols vanish in normal coordinates).
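The correction term above is exactly the Milstein scheme. A sketch comparing the strong error of Euler vs. Milstein on geometric Brownian motion $dx = a x\,dt + s x\,dW$ (a demo SDE chosen because its exact solution is known; here $\sigma(x) = sx$, so $\frac{1}{2}\sigma'\sigma(\Delta W^2 - h) = \frac{1}{2}s^2 x(\Delta W^2 - h)$):

```python
import numpy as np

rng = np.random.default_rng(5)
a, s, T, h = 0.1, 0.5, 1.0, 1.0 / 256
steps, paths = int(T / h), 2000

dW = np.sqrt(h) * rng.standard_normal((paths, steps))
W = dW.sum(axis=1)
exact = np.exp((a - 0.5 * s * s) * T + s * W)   # exact GBM solution, x_0 = 1

euler = np.ones(paths)
milstein = np.ones(paths)
for k in range(steps):
    euler = euler + a * euler * h + s * euler * dW[:, k]
    milstein = (milstein + a * milstein * h + s * milstein * dW[:, k]
                + 0.5 * s * s * milstein * (dW[:, k] ** 2 - h))  # correction

e_err = np.mean(np.abs(euler - exact))
m_err = np.mean(np.abs(milstein - exact))
print(e_err > m_err)  # True: the correction term buys an order of accuracy
```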

40 Convergence Theorem
Thm [LV16]: for the log barrier, the geodesic walk mixes in $\tilde{O}(m n^{0.75})$ steps.
Thm [LV16]: for the log barrier on $[0,1]^n$, it mixes in $\tilde{O}(n^{1/3})$ steps.
(The best bound for the ball walk, hit-and-run, and the Dikin walk is $O(n^2)$ steps for $[0,1]^n$.)
Our walk is similar to the Milstein method. Are higher-order methods for SDEs used in MCMC?

41 Outline
Oracle Setting:
- Introduce the ball walk
- KLS conjecture and its related conjectures
- Main result
Explicit Setting (originally promised talk):
- Introduce the geodesic walk
- Bound the number of iterations
- Bound the cost per iteration

42 How to Implement the Algorithm
Can we simply do a Taylor expansion? In high dimension, it may take $n^k$ time to compute the $k$-th derivatives.
The algorithm:
- In the tangent plane at $x$, pick $w \sim N_x(0, I)$, i.e. a standard Gaussian in $\|\cdot\|_x$.
- Compute $y = \exp_x\!\left(\frac{h}{2}\mu(x) + \sqrt{h}\,w\right)$.
- Accept with probability $\min\!\left(1, \frac{p(y \to x)}{p(x \to y)}\right)$.
How do we compute the geodesic and the rejection probability? We need high accuracy for the rejection probability due to "directedness".
The geodesic is given by the geodesic equation; the probability is given by a Jacobi field.

43 Collocation Method for ODEs
A weakly polynomial time algorithm for some ODEs.
Consider the ODE $y'(t) = f(t, y(t))$ with $y(0) = y_0$. Given a degree-$d$ polynomial $q$ and distinct points $t_1, t_2, \dots, t_d$, let $T(q)$ be the unique degree-$d$ polynomial $p$ such that
- $p'(t) = f(t, q(t))$ at $t = t_1, t_2, \dots, t_d$;
- $p(0) = q(0)$.
Lemma [LV16]: $T$ is well defined. If the $t_i$ are Chebyshev points on $[0,1]$, then $\mathrm{Lip}(T) = O(\mathrm{Lip}(f))$.
Thm [LV16]: if $\mathrm{Lip}(f) \le 0.001$, we can find a fixed point of $T$ efficiently.
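A toy version of this fixed-point iteration: represent the candidate solution by its values at Chebyshev points, and repeatedly apply $T$ (fit a polynomial to $f(t_i, q(t_i))$, integrate it, re-evaluate at the nodes). The test problem $y' = y$, $y(0) = 1$ is a demo choice; its Lipschitz constant exceeds the theorem's $0.001$, but the iteration still contracts here.

```python
import numpy as np

d = 12
# Chebyshev points mapped to [0,1]
t = 0.5 * (1 - np.cos(np.pi * (np.arange(d) + 0.5) / d))

def f(t, y):
    return y                       # test ODE: y' = y, so y(t) = e^t

def apply_T(q_vals):
    # fit p' as a degree d-1 polynomial matching f(t_i, q(t_i)), then
    # integrate with constant of integration chosen so that p(0) = 1
    deriv = np.polyfit(t, f(t, q_vals), d - 1)
    p = np.polyint(deriv, k=1.0)
    return np.polyval(p, t)

q = np.ones(d)
for _ in range(50):                # iterate to (numerical) fixed point
    q = apply_T(q)

p = np.polyint(np.polyfit(t, f(t, q), d - 1), k=1.0)
err = abs(np.polyval(p, 1.0) - np.e)
print(err)  # tiny: the fixed point approximates y(1) = e
```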

44 Collocation Method for ODEs
A weakly polynomial time algorithm for some ODEs.
Consider the ODE $y'(t) = f(t, y(t))$ with $y(0) = y_0$.
Thm [LV16]: suppose that $\mathrm{Lip}(f) \le 0.001$ and there is a degree-$d$ polynomial $p$ such that $\|p' - y'\| \le \varepsilon$. Then we can find $\tilde{y}$ with $|\tilde{y} - y(1)| = O(\varepsilon)$ in time $O(d \log^2(d/\varepsilon))$, with $O(d \log(d/\varepsilon))$ evaluations of $f$.
Remark: no need to compute $f'$! In general, the runtime is $\tilde{O}(n d\, \mathrm{Lip}^{O(1)}(f))$ instead.

45 How can I bound the 170th derivatives?
For a 1-variable function, we can estimate the $k$-th derivatives easily.
Idea: reduce estimating derivatives of general functions to the 1-variable case.
In general, we write $F \preceq_x f$ if $\|D^k F(x)\| \le f^{(k)}(0)$ for all $k$.
Calculus rule: if $F \preceq_x f$ and $G \preceq_{F(x)} g$, then $G \circ F \preceq_x g \circ (f - f(0))$.

46 Implementation Theorem
Using the trick above, we show the geodesic can be approximated by an $\tilde{O}(1)$-degree polynomial. Hence, the collocation method finds it in $\tilde{O}(1)$ steps.
Thm [LV16]: if $h \le n^{-1/2}$, one step of the geodesic walk can be implemented in matrix multiplication time. For the hypercube, $h \le \tilde{O}(1)$ suffices.

47 Questions
- We have no background in numerical ODE/SDE or Riemannian geometry, so the running time should be easy to improve.
- How can we avoid the filtering step?
- Is there a way to tell whether a walk has mixed or not? (I.e., even if we cannot prove KLS, the algorithm could still stop early.)
- Are higher-order methods for SDEs useful in MCMC?
- Any other suggestions/heuristics for sampling on a convex set?

