1 Probabilistic Roadmaps CS 326A: Motion Planning

2 The complexity of the robot’s free space is overwhelming

3
- The cost of computing an exact representation of the configuration space of a multi-joint articulated object is often prohibitive
- But very fast algorithms exist that can check if an articulated object at a given configuration collides with obstacles
- Basic idea of Probabilistic Roadmaps (PRMs): compute a very simplified representation of the free space by sampling configurations at random using some probability measure

4 Initial idea: Potential Field + Random Walk
- Attract some points toward their goal
- Repel other points from obstacles
- Use collision checking to test for collisions
- Escape local minima by performing random walks
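As a rough illustration of the idea above, here is a minimal Python sketch (not from the original slides) of potential-field descent with random-walk escape from local minima. The potential gradient grad_U, the collision checker is_free, and all step-size parameters are assumptions of this sketch, supplied by the caller.

```python
import numpy as np

def randomized_potential_descent(q_start, q_goal, grad_U, is_free,
                                 step=0.05, tol=0.1, max_iters=10_000,
                                 walk_len=50, rng=None):
    """Follow -grad(U); when the descent step is blocked or the gradient
    vanishes (a local minimum), try to escape with a short random walk."""
    rng = rng or np.random.default_rng()
    q = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    for _ in range(max_iters):
        if np.linalg.norm(q - q_goal) < tol:
            return q                       # reached the goal region
        g = grad_U(q)
        gnorm = np.linalg.norm(g)
        if gnorm > 1e-9:
            q_new = q - step * g / gnorm   # one gradient-descent step
            if is_free(q_new):
                q = q_new
                continue
        for _ in range(walk_len):          # random-walk escape attempt
            q_try = q + step * rng.standard_normal(q.shape)
            if is_free(q_try):
                q = q_try
    return None                            # gave up: no path found
```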

5 But many pathological cases …

6 Illustration of a Bad Potential “Landscape” [figure: potential U plotted over configurations q, with the global minimum marked]

7 Probabilistic Roadmap (PRM) [figure: a configuration space in ℝⁿ with its free/feasible space and forbidden space]

8 Probabilistic Roadmap (PRM) Configurations are sampled by picking coordinates at random

9 Probabilistic Roadmap (PRM) Configurations are sampled by picking coordinates at random

10 Probabilistic Roadmap (PRM) Sampled configurations are tested for collision

11 Probabilistic Roadmap (PRM) The collision-free configurations are retained as milestones

12 Probabilistic Roadmap (PRM) Each milestone is linked by straight paths to its nearest neighbors

13 Probabilistic Roadmap (PRM) Each milestone is linked by straight paths to its nearest neighbors

14 Probabilistic Roadmap (PRM) The collision-free links are retained as local paths to form the PRM
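A common way to carry out this link test in practice is to sample the straight segment at a fixed resolution and run the collision checker on each intermediate configuration. This is only a sketch under that assumption; is_free and the resolution eps are placeholders, and real planners often use smarter orderings or exact continuous checks.

```python
import numpy as np

def link_is_free(q1, q2, is_free, eps=0.01):
    """Test a straight-line local path by collision-checking it at resolution eps."""
    q1, q2 = np.asarray(q1, dtype=float), np.asarray(q2, dtype=float)
    n_steps = max(1, int(np.ceil(np.linalg.norm(q2 - q1) / eps)))
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        if not is_free((1.0 - t) * q1 + t * q2):
            return False
    return True
```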

15 Probabilistic Roadmap (PRM) The start and goal configurations s and g are included as milestones

16 Probabilistic Roadmap (PRM) The PRM is searched for a path from s to g

17 Multi- vs. Single-Query PRMs
- Multi-query roadmaps
  - Pre-compute the roadmap
  - Re-use the roadmap for answering queries
- Single-query roadmaps
  - Compute a roadmap from scratch for each new query

18 Procedure BasicPRM(s, g, N)
1. Initialize the roadmap R with two nodes, s and g
2. Repeat:
   a. Sample a configuration q from C with probability measure π   [the sampling strategy]
   b. If q ∈ F, then add q as a new node of R
   c. For some nodes v in R such that v ≠ q do: if path(q, v) ⊂ F, then add (q, v) as a new edge of R
   until s and g are in the same connected component of R, or R contains N+2 nodes
3. If s and g are in the same connected component of R, then return a path between them
4. Else return NoPath   [this answer may occasionally be incorrect]
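The procedure maps almost line for line onto code. Below is a minimal sketch, not an official implementation: the sampler sample() (encoding the measure π), the collision checker is_free(), the neighbor radius, and the local-path test link_is_free() (e.g. the interpolation check sketched earlier) are all assumptions of this sketch, and networkx is used only for the connected-component queries.

```python
import numpy as np
import networkx as nx

def basic_prm(s, g, N, sample, is_free, link_is_free, radius=1.0):
    """Sketch of BasicPRM(s, g, N): grow a roadmap R until s and g are
    connected or R contains N+2 nodes, then search it for a path."""
    s, g = tuple(s), tuple(g)
    R = nx.Graph()
    R.add_nodes_from([s, g])                          # step 1
    while not nx.has_path(R, s, g) and R.number_of_nodes() < N + 2:
        q = sample()                                  # step 2a: sample C with measure pi
        if not is_free(q):                            # step 2b: keep free configurations
            continue
        q = tuple(q)
        R.add_node(q)
        for v in list(R.nodes):                       # step 2c: link q to nearby milestones
            if v != q and np.linalg.norm(np.subtract(v, q)) <= radius:
                if link_is_free(q, v):
                    R.add_edge(q, v)
    if nx.has_path(R, s, g):                          # step 3
        return nx.shortest_path(R, s, g)
    return None                                       # step 4: NoPath (may be incorrect)
```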

19 Requirements of PRM Planning
1. Checking sampled configurations and connections between samples for collision can be done efficiently. → Hierarchical collision detection
2. A relatively small number of milestones and local paths are sufficient to capture the connectivity of the free space. → Non-uniform sampling strategies

20 PRM planners work well in practice. Why?

21 PRM planners work well in practice. Why?
- Why are they probabilistic?
- What does their success tell us?
- How important is the probabilistic sampling measure π?
- How important is the randomness of the sampling source?

22 Why is PRM planning probabilistic?
- A PRM planner ignores the exact shape of F. So it acts like a robot building a map of an unknown environment with limited sensors
- At any moment there exists an implicit distribution (H, σ), where H is the set of all consistent hypotheses about the shape of F, and σ(x), for every x ∈ H, is the probability that x is correct
- The probabilistic sampling measure π reflects this uncertainty. [Its goal is to minimize the expected number of remaining iterations needed to connect s and g, whenever they lie in the same component of F.]

23 So...
- PRM planning trades the cost of computing F exactly against the cost of dealing with uncertainty
- This choice is beneficial only if a small roadmap has a high probability of representing F well enough to answer planning queries correctly [note the analogy with PAC learning]
- Under which conditions is this the case?

24 Relation to Monte Carlo Integration [figure: estimating the integral of f(x) by sampling points (xᵢ, yᵢ) uniformly in an enclosing rectangle with sides a and b, i.e., area A = a × b]

25 Relation to Monte Carlo Integration [same figure as above]
- But a PRM planner must construct a path
- The connectivity of F may depend on small regions
- Insufficient sampling of such regions may lead the planner to failure
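For contrast, here is the textbook hit-or-miss Monte Carlo estimate that the figure alludes to: sample points uniformly in the enclosing a × b rectangle and scale the fraction that lands under f. This average-quantity estimate converges no matter where the individual samples fall, which is precisely the property that path connectivity lacks. (A minimal sketch; f, a, and b are placeholders.)

```python
import numpy as np

def mc_integral(f, a, b, n, rng=None):
    """Estimate the integral of f over [0, a], assuming 0 <= f <= b,
    from n points sampled uniformly in the a-by-b rectangle (A = a*b)."""
    rng = rng or np.random.default_rng()
    xs = rng.uniform(0.0, a, n)
    ys = rng.uniform(0.0, b, n)
    hit_fraction = np.mean(ys <= f(xs))   # fraction of samples under the curve
    return hit_fraction * (a * b)

# Example: the integral of x^2 over [0, 1] is 1/3
print(mc_integral(lambda x: x ** 2, a=1.0, b=1.0, n=100_000))
```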

26 Visibility in F
- Two configurations q and q’ see each other if path(q, q’) ⊂ F
- The visibility set of q is V(q) = { q’ | path(q, q’) ⊂ F }

27 ε-Goodness of F
- Let μ(X) stand for the volume of X ⊆ F
- Given ε ∈ (0,1], q ∈ F is ε-good if it sees at least an ε-fraction of F, i.e., if μ(V(q)) ≥ ε · μ(F)
- F is ε-good if every q in F is ε-good
- Intuition: if F is ε-good, then with high probability a small set of configurations sampled at random will see most of F
[figure: a configuration q in F with its visibility set V(q); here ε ≈ 0.18]
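The ratio μ(V(q))/μ(F) in this definition can itself be approximated by sampling, which makes the intuition concrete. A minimal sketch, assuming a user-supplied uniform sampler over F, sample_free(), and a visibility test sees(q, q') such as the straight-line check sketched earlier:

```python
def estimate_epsilon(q, sample_free, sees, n=1000):
    """Monte Carlo estimate of mu(V(q)) / mu(F): the fraction of randomly
    sampled free configurations that q can see."""
    visible = sum(1 for _ in range(n) if sees(q, sample_free()))
    return visible / n
```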

28 Connectivity Issue [figure: free space divided into two subsets F1 and F2]

29 Lookout of F1 [figure: the subsets F1 and F2, with the lookout region of F1 highlighted]

30 Connectivity Issue
The β-lookout of a subset F1 of F is the set of all configurations in F1 that see a β-fraction of F2 = F \ F1:
β-lookout(F1) = { q ∈ F1 | μ(V(q) ∩ F2) ≥ β · μ(F2) }
[figure: the subsets F1 and F2, with the lookout of F1 highlighted]

31 Connectivity Issue
β-lookout(F1) = { q ∈ F1 | μ(V(q) ∩ F2) ≥ β · μ(F2) }
F is (ε,α,β)-expansive if it is ε-good and each one of its subsets X has a β-lookout whose volume is at least α · μ(X)
Intuition: if F is favorably expansive, it should be relatively easy to capture its connectivity by a small network of sampled configurations
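Gathering the definitions from the last few slides in one place (this is only a restatement of the slide formulas in display math, with μ denoting volume and V(q) the visibility set):

```latex
% \epsilon-goodness: every free configuration sees an \epsilon-fraction of F
\forall q \in F: \quad \mu\bigl(V(q)\bigr) \ge \epsilon\,\mu(F)

% \beta-lookout of F_1 \subseteq F, where F_2 = F \setminus F_1
\beta\text{-lookout}(F_1) = \bigl\{\, q \in F_1 \;\big|\; \mu\bigl(V(q)\cap F_2\bigr) \ge \beta\,\mu(F_2) \,\bigr\}

% (\epsilon,\alpha,\beta)-expansiveness: F is \epsilon-good and every subset X \subseteq F satisfies
\mu\bigl(\beta\text{-lookout}(X)\bigr) \ge \alpha\,\mu(X)
```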

32 Comments
- Expansiveness only depends on volumetric ratios
- It is not directly related to the dimensionality of the configuration space. E.g., in 2-D the expansiveness of the free space can be made arbitrarily poor

33 [captions of example free spaces:]
- Thanks to the wide passage at the bottom, this space is favorably expansive
- Many narrow passages might be better than a single one
- This space’s expansiveness is worse than if the passage were straight
- A convex set is maximally expansive, i.e., ε = α = β = 1

34 Theoretical Convergence of PRM Planning
Theorem 1: Let F be (ε,α,β)-expansive, and let s and g be two configurations in the same component of F. BasicPRM(s,g,N) with uniform sampling returns a path between s and g with probability converging to 1 at an exponential rate as N increases.
[figure: experimental convergence of Pr(Failure)]

35 Linking sequence

36 Theoretical Convergence of PRM Planning
Theorem 1: Let F be (ε,α,β)-expansive, and let s and g be two configurations in the same component of F. BasicPRM(s,g,N) with uniform sampling returns a path between s and g with probability converging to 1 at an exponential rate as N increases.
Theorem 2: For any ε > 0, any N > 0, and any γ in (0,1], there exist α₀ and β₀ such that if F is not (ε,α,β)-expansive for α > α₀ and β > β₀, then there exist s and g in the same component of F such that BasicPRM(s,g,N) fails to return a path with probability greater than γ.

37 What does the empirical success of PRM planning tell us? It tells us that F is often favorably expansive despite its overwhelming algebraic/geometric complexity

38 In retrospect, is this property surprising?
- Not really! Narrow passages are unstable features under small random perturbations of the robot/workspace geometry
- Poorly expansive spaces are unlikely to occur by accident

39 Most narrow passages in F are intentional… but it is not easy to intentionally create complex narrow passages in F [example: the alpha puzzle]

40 PRM planners work well in practice. Why?
- Why are they probabilistic?
- What does their success tell us?
- How important is the probabilistic sampling measure π?
- How important is the randomness of the sampling source?

41 How important is the probabilistic sampling measure π?
- Visibility is usually not uniformly favorable across F
- Regions with poorer visibility should be sampled more densely (more connectivity information can be gained there)
[figure: regions of good visibility vs. poor visibility, i.e., small visibility sets and small lookout sets]

42 Impact [figures: roadmaps between s and g produced with Gaussian sampling (Boor, Overmars, van der Stappen, 1999) and with connectivity expansion (Kavraki, 1994)]
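As an example of such a measure, here is a minimal sketch in the spirit of the Gaussian sampling strategy of Boor, Overmars, and van der Stappen (one common variant): draw a uniform configuration and a Gaussian neighbor, and keep a sample only when the pair straddles an obstacle boundary, which concentrates milestones near obstacles and hence near narrow passages. The helpers sample_uniform() and is_free() and the spread sigma are assumptions of this sketch.

```python
import numpy as np

def gaussian_sample(sample_uniform, is_free, sigma=0.1, rng=None, max_tries=10_000):
    """Return one milestone biased toward obstacle boundaries:
    keep q1 only if q1 is free while its Gaussian neighbor q2 is not."""
    rng = rng or np.random.default_rng()
    for _ in range(max_tries):
        q1 = np.asarray(sample_uniform(), dtype=float)
        q2 = q1 + sigma * rng.standard_normal(q1.shape)
        if is_free(q1) and not is_free(q2):
            return q1
    return None  # give up (e.g., if almost no obstacle boundary exists)
```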

43 But how to identify poor-visibility regions?
- What is the source of information? → Robot and workspace geometry
- How to exploit it?
  - Workspace-guided strategies
  - Filtering strategies
  - Adaptive strategies
  - Deformation strategies

44 How important is the randomness of the sampling source?
Sampler = uniform source S + measure π
- Random
- Pseudo-random
- Deterministic [LaValle, Branicky, and Lindemann, 2004]
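One deterministic source studied in that line of work is a low-discrepancy sequence such as the Halton sequence, which covers the unit cube evenly without randomness. A minimal sketch (the choice of coprime prime bases per dimension is the standard one; mapping the unit cube to the configuration space C is left to the caller):

```python
def halton(index, base):
    """Radical inverse of `index` in the given base: one 1-D Halton value."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_point(index, bases=(2, 3, 5)):
    """One point of the multi-dimensional Halton sequence in [0, 1)^d."""
    return tuple(halton(index, b) for b in bases)

# First few 3-D Halton points (indices conventionally start at 1)
print([halton_point(i) for i in range(1, 5)])
```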

45 Choice of the Source S
- Adversary argument in the theoretical proof
- Efficiency
- Practical convenience
[figure: example query from s to g]

46 Conclusion
- In PRM, the word probabilistic matters
- The success of PRM planning depends mainly and critically on favorable visibility in F
- The probability measure used for sampling F derives from the uncertainty about the shape of F
- By exploiting the fact that visibility is not uniformly favorable across F, sampling measures have a major impact on the efficiency of PRM planning
- In contrast, the impact of the sampling source (random or deterministic) is small

