
1
Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates
Edwin Olson (eolson@mit.edu), John Leonard (jleonard@mit.edu), Seth Teller (teller@csail.mit.edu)
http://rvsn.csail.mit.edu/graphoptim

2
Where are we going?

3
Problem summary
► The robot moves around the environment. Successive poses are connected by constraints (odometry).
► Constraint = rigid-body transformation + covariance matrix.
(Figure: features, poses, and constraints)

4
Problem summary
► Re-observing features allows constraints between non-successive poses.
(Figure: features, poses, and constraints)

5
Problem summary
► Goal (SLAM): find the arrangement of poses and features that best satisfies the constraints (i.e., maximizes the probability).
(Figure: an improbable initial configuration with poorly satisfied constraints vs. the maximum-likelihood configuration)
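In symbols, the objective can be sketched as follows (our notation, not the slides'; this assumes independent Gaussian constraints, which is the usual pose-graph formulation):

```latex
x^{*} = \arg\max_{x} \prod_{i} p(z_i \mid x)
      = \arg\min_{x} \sum_{i} r_i(x)^{T} \Sigma_i^{-1} r_i(x)
```

where z_i is the i-th constraint, r_i(x) its residual at state x, and Σ_i its covariance.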

6
Why is this hard?
► Huge number of unknowns: |x| = 10³ is a small problem.
► Non-linear constraints: a function of robot orientation.
► Lousy initial estimates: odometry!
► (Today, we ignore SLAM's other difficult sub-problems: data association, exploration.)
(Figure: cost surface from a laser-scan matching problem, with many local maxima/minima)

7
Our Work
► We present an algorithm for optimizing pose graphs:
- Very fast.
- Estimates the non-linear maximum likelihood (unlike the EKF, EIF, and friends).
- Works well even with poor initial estimates.
- Under 100 lines of code.

8
A peek at results
► Noisy (simulated) input: 3500 poses, 3499 temporal constraints, 2100 spatial constraints.
(Figure: ground truth vs. Gauss-Seidel (60 sec.), Multi-Level Relaxation (8.6 sec.), and our method (2.8 sec.))

9
Our Method: Overview
► Marginalize out features.
► Use an "incremental" state space.
► Enforce constraints serially:
- Linearize each constraint into a global incremental constraint.
- Take smaller steps as t → ∞.

10
Marginalizing Features
► Marginalize out the features, leaving only the trajectory; the problem now involves only inter-pose constraints.
► Then compute the features afterwards (trivial, as in FastSLAM).
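As a 1-D illustration of how marginalization turns feature observations into a pose-pose constraint (a toy model of ours, not the paper's math):

```python
def lump(obs_a, var_a, obs_b, var_b):
    """Feature f seen from poses a and b: f = P_a + obs_a and f = P_b + obs_b.
    Eliminating f yields the pose-pose constraint P_b - P_a = obs_a - obs_b;
    for independent Gaussian observation noise, the variances add."""
    return obs_a - obs_b, var_a + var_b
```

Each pair of observations of the same feature produces one such edge, which is why a feature with N observations yields O(N²) pose-pose edges after marginalization.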

11
Marginalization: Cons
► More edges in the graph: a feature with N observations leads to O(N²) edges.
► Slower/harder to solve (the information matrix is less sparse).

12
Marginalization: Not so bad?
► Pro: pose-pose edges better model data inter-dependence ("this cloud of points matches this cloud of points").
► Individual point associations are not independent; it is more correct to use a single "lumped" constraint.
► Bonus: makes it easier to undo bad data associations later.
(Figure: observations at t=5 and t=500, the data association, and the resulting lumped constraint)

13
Iterative Optimization
► Do forever:
- Pick a constraint.
- Descend in the direction of the constraint's gradient:
► Scale the gradient magnitude by alpha/iteration.
► Clamp the step size.
- iteration++
► alpha/iteration → 0 as t → ∞.
► Robustness to local concavities: hop around the state space and "stick" in the best one.
► A good solution very fast; a "perfect" solution only as t → ∞.
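The loop above can be sketched on a toy problem (1-D scalar poses, no covariances, a plain global state space; all names and parameters here are ours, not the paper's):

```python
import random

def optimize(x, constraints, alpha=1.0, max_step=1.0, iters=2000, seed=0):
    """Toy version of the iterative loop: poses are scalars x[i], and each
    constraint (a, b, z) says x[b] - x[a] should equal z."""
    rng = random.Random(seed)
    for iteration in range(1, iters + 1):
        a, b, z = rng.choice(constraints)           # pick a constraint
        r = (x[b] - x[a]) - z                       # residual (gradient direction)
        step = (alpha / iteration) * r              # scale magnitude by alpha/iteration
        step = max(-max_step, min(max_step, step))  # clamp the step size
        x[a] += step / 2                            # descend: split the correction
        x[b] -= step / 2                            # between the two poses
    return x
```

The learning rate alpha/iteration → 0, so early (large) steps hop around the state space while later steps only refine.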

14
Importance of state space
► The choice of state representation affects the gradient.
► Pick a state space that:
- Is computationally efficient.
- Interacts well with the optimization algorithm.
(Figure: a chain of poses with a constraint error)

15
Previous work: global state space
► Global state: x = [x_0 y_0 θ_0 x_1 y_1 θ_1 x_2 y_2 θ_2 …]^T
► A constraint between poses a and b is f(x_a, y_a, θ_a, x_b, y_b, θ_b); its gradient is 0 for all poses except a and b.
► Slow convergence.
► The error for some links can go up dramatically.
(Figure: the step direction moves only poses a and b along the chain)

16
State Space: Relative/Incremental
► Robot motion is cumulative: adjusting one pose affects its neighbors.
► Each constraint affects multiple nodes.
► Faster convergence.
► Error decreases more predictably.
(Figure: the step direction distributes the correction along the chain of poses)

17
Relative versus Incremental
► Relative (prior work):
- State vector: rigid-body transformations, e.g. P_3 = P_0 T_1 T_2 T_3.
- More realistic, but projective: small adjustments can have large effects (instability).
- O(N) time to compute an update.
► Incremental (contribution):
- State vector: global-relative motions, e.g. P_3 = P_0 + D_1 + D_2 + D_3.
- Less realistic (inexact), but behaves linearly: small adjustments have small effects (good convergence properties).
- O(log N) time to compute an update.
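A 1-D sketch of the incremental parameterization (our toy, not the paper's code): each pose is a prefix sum of increments, so imposing one constraint touches a whole range of increments rather than just two poses:

```python
def poses(P0, D):
    """Pose i is the running sum P_i = P_0 + D_1 + ... + D_i."""
    out = [P0]
    for d in D:
        out.append(out[-1] + d)
    return out

def apply_constraint(D, a, b, z, lr=0.5):
    """Nudge increments D_{a+1}..D_b so that P_b - P_a moves toward z.
    (D is 0-indexed: D[i] holds increment D_{i+1}.)"""
    P = poses(0.0, D)
    r = (P[b] - P[a]) - z             # residual of the constraint
    for i in range(a, b):             # spread the correction over the span
        D[i] -= lr * r / (b - a)
    return D
```

Because every increment in the span moves by the same small amount, small adjustments have small effects on every pose, which is the linearity the slide describes.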

18
Algorithm Complexity
► For N poses and M constraints:
► Memory: O(N+M). 1M poses and 2M constraints → 176 MB (vs. O(N²) for the EKF/EIF).
► Time:
- Imposing one constraint: O(log N) (using the incremental state space and a tree-based data structure).
- One full iteration: O(M log N).
► Convergence: hard to make guarantees (nonlinear problem). Δx is bounded at each time step, but ∫ Δx dt might not be.
► In practice, SLAM problems are fairly well behaved after a few iterations.
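One standard way to get those bounds on a 1-D toy is a Fenwick (binary indexed) tree over the increments: range-add for imposing a constraint and prefix-sum for reading a pose, each O(log N). This is our illustrative stand-in, not necessarily the paper's exact structure, which handles 2D poses with rotations:

```python
class IncrementTree:
    """Two Fenwick trees giving O(log N) range-add over increments D_1..D_N
    and O(log N) prefix-sum queries, i.e. pose(i) = D_1 + ... + D_i."""
    def __init__(self, n):
        self.n = n
        self.b1 = [0.0] * (n + 1)
        self.b2 = [0.0] * (n + 1)

    def _add(self, tree, i, v):
        while i <= self.n:
            tree[i] += v
            i += i & (-i)

    def _sum(self, tree, i):
        s = 0.0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    def add_range(self, l, r, v):
        """Add v to increments l..r (1-indexed): impose a constraint's step."""
        self._add(self.b1, l, v)
        self._add(self.b2, l, v * (l - 1))
        if r + 1 <= self.n:             # updates past the last increment are no-ops
            self._add(self.b1, r + 1, -v)
            self._add(self.b2, r + 1, -v * r)

    def pose(self, i):
        """Pose i = sum of increments 1..i."""
        return self._sum(self.b1, i) * i - self._sum(self.b2, i)
```

With N poses this costs O(N) memory for the trees plus O(M) for the constraints, matching the O(N+M) memory bound.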

19
Gauss-Seidel Relaxation (video; exponential time scale)

20
Our algorithm (video; slowed down by ~4x)

21
Multi-Level Relaxation
► Optimization result after 8.6 seconds.
► Converges to a (very) good result in 30 minutes.
► Thanks to Udo Frese for running this data set.

22
Killian Court (Real Data)
(Figure: the laser-scan-derived open-loop estimate)

23
Continuing work…
► Several 3D implementations under way, both with and without full-rank constraints.
► An incremental (not batch) version.
► An adaptive learning rate.
► Reports of success with our algorithm from other universities…

24
Questions?
► Updated paper, movies, and source code: http://rvsn.csail.mit.edu/graphoptim
