Path Planning vs. Obstacle Avoidance

1 Path Planning vs. Obstacle Avoidance
No clear distinction, but usually: global vs. local.
Path planning: a low-frequency, time-intensive search method for globally finding an (optimal) path to a goal. Examples: road maps, cell decomposition.
Obstacle avoidance (aka "local navigation"): a fast, reactive method with a local time and space horizon. Examples: vector field histogram, dynamic window algorithm.
Gray area: fast methods for finding a path to the goal that can fail if the environment contains local minima. Example: potential field method.
Many slides courtesy of Brian Gerkey

2 Local minima
Local navigation can easily get stuck: the current sensor data is insufficient for general navigation. Shallow minima may only cause a delay.

3 Deep minima
Maintaining history can avoid oscillations. Look-ahead and randomness can avoid some minima. But eventually, local navigation can get irretrievably stuck.

4 Our goal
Given: a map, the robot pose, and a goal pose.
Compute: a feasible (preferably efficient) path to the goal.
Also: execute the path, which may require replanning.

5 Configuration space
Workspace (W): the ambient environment in which the robot operates; usually R^2 or R^3.
Configuration (q): a complete specification of the position of all points in the robot's body. The set of points in the robot's body is given by R(q).
Configuration space, or C-space (Q): the space of all possible configurations. C-space dimensionality = number of degrees of freedom.

6 Example 1: two-link arm
Two joint angles fully specify the configuration, so the configuration space is 2-dimensional: Q = (θ1, θ2).
Is the position of the end effector a valid configuration specification? Q =? (X, Y)

7 Example 2: Sony AIBO
4 legs, 3 dof each (2 in the shoulder, 1 in the knee); 1 head, 3 dof.
15 dof => 15-d C-space: Q = (θl1, θl2, θl3, θl4, θh), plus 6 dof for the real-world pose.

8 Example 3: planar mobile robot
The workspace is R^2; Q = (X, Y, θ).
How about a circular robot? The body position doesn't depend on θ: ∀θ1, θ2: R(x, y, θ1) = R(x, y, θ2). Thus the C-space is 2-d: Q = (X, Y).

9 C-space obstacles
Given a workspace obstacle WOi, the C-space obstacle QOi is the set of configurations in which the robot intersects WOi: QOi = {q ∈ Q | R(q) ∩ WOi ≠ ∅}.
The free C-space, Qfree, is the set of configurations in which the robot intersects no obstacle: Qfree = Q \ (∪i QOi).

10 Computing C-space obstacles for a circular robot
Slide the robot around the obstacle. This effectively "grows" the obstacle by the radius of the robot.
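Growing obstacles by the robot's radius can be sketched as a disc dilation over an occupancy grid. This is a minimal illustration, not the course's code; the grid is assumed boolean (True = obstacle) and the radius is given in cells.

```python
import math

def grow_obstacles(grid, radius_cells):
    """Dilate occupied cells by the robot's radius (in cells) to build a
    C-space map for a circular robot: planning for a point in the grown
    map is equivalent to planning for the disc in the original map."""
    rows, cols = len(grid), len(grid[0])
    grown = [[False] * cols for _ in range(rows)]
    # Precompute the disc of offsets within the robot's radius.
    offsets = [(di, dj)
               for di in range(-radius_cells, radius_cells + 1)
               for dj in range(-radius_cells, radius_cells + 1)
               if math.hypot(di, dj) <= radius_cells]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:                     # workspace obstacle cell
                for di, dj in offsets:         # mark every cell within radius
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        grown[ni][nj] = True
    return grown
```

A larger radius grows the obstacles further, matching the radius=1.0m vs. radius=0.25m comparison on the next slide.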

11 C-space maps for circular robots
(Figure: the workspace, the C-space map for radius=1.0m, and the C-space map for radius=0.25m.)

12 10/31/06 CS225B Brian Gerkey

13 Representation: geometric vs. sampled
Geometric representation: exact model of the environment; possibility of an optimal solution; such a map is difficult to obtain; continuous solution space; combinatorial techniques.
Sampled representation: approximate model of the environment (usually, though not necessarily, a uniform grid); optimality limited by resolution; map readily obtained from data; discrete solution space; navigation function techniques.

14 Geometric: cell decomposition
How can we search a continuous space of configurations? Decompose it into non-overlapping cells (a cell is a connected region of C-space; the decomposition can be exact or approximate). Construct a connectivity graph from the cells and the transitions between them, search the graph for a path to the goal, then convert the graph solution to a real-world path.
Examples: triangulation, trapezoidal decomposition. Images from LaValle.

15 Geometric: roadmap / retraction
How can we search a continuous space of configurations? Retract the free space into a set of curves; the curves are your roadmap. Plan like you would for a road trip: get on the freeway, traverse the network of freeways, get off the freeway. You may need special procedures for getting on/off the roadmap.
Examples: generalized Voronoi graph, visibility graph.


17 Sampled: potential / navigation functions
For each sampled state, estimate the cost to the goal (similar to a policy in machine learning). Grid maps: for each cell, compute the cost (number of cells, distance, etc.) to the goal. Potential functions are local; navigation functions are global. Images from LaValle.

18 Wavefront planner
Given a grid map M, robot position p, and goal position g:
From M, create the C-space map MQ.
Run the brushfire algorithm on MQ: label grid cell g with a "0"; label free cells adjacent to g with a "1"; label free cells adjacent to those with a "2"; repeat until you reach p.
The labels encode the length of the shortest (Manhattan-distance) obstacle-free path from each cell to g.
Starting from p, follow the gradient of the labels.
The planner is complete.
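The brushfire labeling and gradient-following steps can be sketched in Python. This is a minimal illustration assuming a boolean free-space grid (True = free) and 4-connectivity; function names are mine, not the course's.

```python
from collections import deque

def wavefront(free, goal):
    """Brushfire/wavefront labeling: BFS outward from the goal over free
    cells. Each label is the length (in 4-connected steps) of the shortest
    obstacle-free path to the goal; unreachable cells stay at None."""
    rows, cols = len(free), len(free[0])
    dist = [[None] * cols for _ in range(rows)]
    gi, gj = goal
    dist[gi][gj] = 0
    q = deque([goal])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < rows and 0 <= nj < cols
                    and free[ni][nj] and dist[ni][nj] is None):
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

def follow_gradient(dist, start):
    """Descend the labels from the start cell until the goal (label 0)."""
    path = [start]
    i, j = start
    while dist[i][j] != 0:
        # Step to the labeled neighbor with the smallest label.
        i, j = min(((i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < len(dist) and 0 <= j + dj < len(dist[0])
                    and dist[i + di][j + dj] is not None),
                   key=lambda c: dist[c[0]][c[1]])
        path.append((i, j))
    return path
```

Because BFS labels adjacent free cells differ by exactly one, the descent always makes progress, which is why the planner is complete (on the grid).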

19 Wavefront in action Gets very close to obstacles…

20 Inefficient paths
A naïve single-neighbor cost update gives Manhattan distance (maybe with diagonal transitions); the gradient unnecessarily tracks the grid. Graphic from Ferguson and Stentz, Field D*.

21 True-distance alternatives
Theta* [Nash et al. 2007]
Field D* [Ferguson and Stentz 2005]
E* / gradient [Philippsen 2006 / Konolige unpub.]
Hybrid A* [Dolgov et al. 2008]

22 Computing true distance [gradient method]
A naïve single-neighbor cost update gives Manhattan distance (maybe with diagonal transitions); the gradient unnecessarily tracks the grid. In reality, the grid potentials are discrete samples of a wave that propagates from the goal. A better distance can be had by approximating this wave: Sethian's Fast Marching Method, Huygens' principle, or Konolige's (unpublished) 2-neighbor plane-wave approximation.

23 Interpolating a potential function
c = fn(a, b): planar-wave approximation. If a << b, then c = a + F, where F is the intrinsic cost of motion. (Figure: neighbor cells a and b, unknown cell c, and the wave isopotentials.)

24 Interpolating a potential function
c = fn(a, b): planar-wave approximation. If a = b, then c = a + (1/sqrt(2))*F, where F is the intrinsic cost of motion. (Figure: neighbor cells a and b, unknown cell c, and the wave isopotentials.)

25 Interpolating a potential function
c = fn(a, b): planar-wave approximation. If a < b, then c = ??? (left as a question), where F is the intrinsic cost of motion. (Figure: neighbor cells a and b, unknown cell c, and the wave isopotentials.)
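One standard way to fill in the general a < b case is the plane-wave (Fast Marching style) update, which solves (c-a)^2 + (c-b)^2 = F^2 for a unit grid cell; this is a sketch of that update, not necessarily the exact formula used in the course. Note that it reproduces both slide cases: a << b gives c = a + F, and a = b gives c = a + F/sqrt(2).

```python
import math

def plane_wave_update(a, b, F):
    """One-cell update for a wave of speed 1/F crossing a unit grid cell.
    a and b are the potentials of the two axis neighbors.
    Solves (c-a)^2 + (c-b)^2 = F^2 when the wavefront cuts the corner;
    otherwise falls back to the one-neighbor update c = a + F."""
    if a > b:
        a, b = b, a                 # ensure a <= b
    if b - a >= F:                  # b is too far behind to influence c
        return a + F
    # Quadratic: 2c^2 - 2(a+b)c + (a^2 + b^2 - F^2) = 0, take the larger root.
    s = a + b
    disc = s * s - 2.0 * (a * a + b * b - F * F)
    return 0.5 * (s + math.sqrt(disc))
```

For a = b the discriminant reduces to 2F^2, giving exactly c = a + F/sqrt(2), the diagonal-wavefront case on the previous slide.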

26 Dynamic programming update algorithm
∀q, Cost(q) = ∞
Cost(g) = 0
ActiveList = {g}  # ActiveList is a priority queue
while p has not been popped (or ActiveList ≠ ∅):
    q = ActiveList.pop()
    for each n ∈ 4neighbors(q):
        newcost = Cost(q) + F(q→n)
        if newcost < Cost(n):
            Cost(n) = newcost
            ActiveList.push(n)  # replace if already there
Will any cell be put back on ActiveList after it is popped?
Can you use A* with this algorithm?
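The update loop above can be implemented with a binary heap as the priority queue. This is a sketch with the per-step cost F simplified to a constant; names and simplifications are mine.

```python
import heapq

def dp_potentials(free, goal, F=1.0):
    """Priority-queue dynamic programming (Dijkstra's algorithm) over a
    boolean free-space grid. Because cells are popped in order of
    increasing cost and F is nonnegative, no cell is ever put back on
    the active list after it is popped -- the slide's question."""
    rows, cols = len(free), len(free[0])
    cost = {goal: 0.0}
    active = [(0.0, goal)]
    done = set()
    while active:
        c, (i, j) = heapq.heappop(active)
        if (i, j) in done:
            continue                      # stale duplicate heap entry
        done.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (i + di, j + dj)
            if 0 <= n[0] < rows and 0 <= n[1] < cols and free[n[0]][n[1]]:
                newcost = c + F           # Cost(q) + F(q->n)
                if newcost < cost.get(n, float('inf')):
                    cost[n] = newcost
                    heapq.heappush(active, (newcost, n))
    return cost
```

Adding an admissible heuristic to the heap key turns this into A*, answering the slide's second question.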

27 Gradient in action
Running time: approximately O(n) for n grid cells. Only practical for modestly sized 2-d grids.

28 Obstacle costs
To set intrinsic costs, first run the DP with: goal = the obstacle points, and F(p) = ||pj - pi|| (the Euclidean step length).
This gives, for each cell, d(p): the minimum distance from p to the closest obstacle.
The intrinsic cost is then F(p) = Q(d(p)), where Q() is a decreasing function of distance.
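The obstacle-distance pass can be sketched by seeding the wavefront with all obstacle cells at once and mapping the resulting d(p) through a decreasing Q. This is an illustration under my own assumptions: grid distance instead of true Euclidean steps, and an arbitrary decreasing shape for Q; only the lethal threshold (robot_radius/2 + safety, as stated later in the slides) comes from the source.

```python
from collections import deque

def obstacle_costs(free, robot_radius, safety, max_cost=1e6):
    """Multi-source brushfire from every obstacle cell gives d(p), the
    grid distance to the nearest obstacle; then the intrinsic cost is
    Q(d(p)): essentially infinite inside robot_radius/2 + safety,
    decreasing smoothly toward 1.0 beyond it (shape of Q is illustrative)."""
    rows, cols = len(free), len(free[0])
    d = [[None] * cols for _ in range(rows)]
    q = deque()
    for i in range(rows):
        for j in range(cols):
            if not free[i][j]:            # every obstacle cell is a source
                d[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and d[ni][nj] is None:
                d[ni][nj] = d[i][j] + 1
                q.append((ni, nj))
    lethal = robot_radius / 2.0 + safety
    def Q(dist):
        if dist is None or dist <= lethal:
            return max_cost               # essentially infinite near obstacles
        return 1.0 + max(0.0, 10.0 - (dist - lethal))  # decreasing, >= 1
    return [[Q(d[i][j]) for j in range(cols)] for i in range(rows)]
```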

29 Finding the gradient
Gradient at a cell s from its neighbors b and d: g(x) = avg((s-b), (d-s)), the average of the two one-sided differences. Corner cases: b >> s, etc.
Bilinear interpolation between the four surrounding cells a, b, c, d at fractional offsets α, β: avg_q(x, y) = (1-q)*x + q*y, and g(x) = avg_β(avg_α(a(x), b(x)), avg_α(c(x), d(x))).
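The slide's avg_q and the nested bilinear formula translate directly into code; a minimal sketch (names follow the slide, everything else is mine):

```python
def avg(q, x, y):
    """The slide's avg_q(x, y) = (1-q)*x + q*y: linear interpolation."""
    return (1.0 - q) * x + q * y

def bilinear(a, b, c, d, alpha, beta):
    """Bilinearly interpolate a value (e.g. one gradient component) from
    the four surrounding cell values a, b (one row) and c, d (the next),
    at fractional offsets alpha and beta inside the cell:
    g = avg_beta(avg_alpha(a, b), avg_alpha(c, d))."""
    return avg(beta, avg(alpha, a, b), avg(alpha, c, d))
```

At alpha = beta = 0 this returns the corner value a, and it varies smoothly across the cell, which is what keeps the descended path off the grid lines.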

30 Gradient in LAGR Test 26A

31 Gradient with A*

32 Practical concerns
Goals and robot positions will rarely fall exactly at the center of a grid cell: pick the closest cell center, or interpolate cost information between cells.
When computing obstacle costs, stop the DP when costs drop below a threshold (i.e., far enough from any obstacle).
Q(d(p)) should be very large (essentially infinite) when d(p) < (1/2 * robot_radius) + safety_distance; costs should decrease smoothly and quickly after that.
Choose the step size for the gradient carefully: larger step sizes cause trouble in narrow pinch points; smaller step sizes are computationally inefficient.
For non-circular robots (e.g., Erratic), use the longer half-width as the radius. No longer complete, but usually works.

33 Dealing with local obstacles
The map isn't perfect; neither is localization. How do you deal with locally-sensed obstacles? => a LOCAL CONTROLLER.
Option 1: VFH or another local controller. Pro: easy to implement; can take safety, the global path, etc. into account. Con: the planner / navigator interaction may be suboptimal.
Option 2: a local planner. Pro: potentially optimal behavior for unforeseen obstacles. Con: the planner has to run FAST.

34 Another way: trajectory rollout

35 Gradient + trajectory rollout in exploration: LAGR
(Videos: LAGR-Eucalyptus run; montage; map.) Average speed > 1 m/s.

36 Local planner
Pick some small area to do local planning in. Look ahead on the global plan for the local goal point. Add in local obstacles from sensors.
Things to look out for: how do you track the local path? When do you replan the global plan?

37 Computing velocities directly
Given cost-to-goal values for each cell, how do you compute velocities for the robot?
Pick a point p some distance (e.g., 0.5m) ahead of the robot on the best path.
error = (bearing to p) - (current heading)
d_error = distance to p
if abs(error) < min_angle_threshold: dθ = 0
else: dθ = kθ * error
if abs(error) > max_angle_threshold: dX = 0
else: dX = kx * d_error
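The carrot-point controller above can be sketched as a small proportional controller. The gains and thresholds here are illustrative placeholders, not the course's values.

```python
import math

def velocities(robot_x, robot_y, robot_th, px, py,
               k_th=1.0, k_x=0.5,
               min_angle=0.05, max_angle=0.8):
    """Turn toward a carrot point p on the best path; drive forward only
    when roughly facing it. Returns (forward velocity, turn rate)."""
    error = math.atan2(py - robot_y, px - robot_x) - robot_th
    error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
    d_error = math.hypot(px - robot_x, py - robot_y)      # distance to p
    d_th = 0.0 if abs(error) < min_angle else k_th * error
    d_x = 0.0 if abs(error) > max_angle else k_x * d_error
    return d_x, d_th
```

Zeroing the forward velocity when the heading error is large makes the robot turn in place first, which is the usual reason for the max_angle_threshold test.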

