
1 Image Alignment and Mosaicing / Feature Tracking and the Kalman Filter

2 Image Alignment Applications
Local alignment:
– Tracking
– Stereo
Global alignment:
– Camera jitter elimination
– Image enhancement
– Panoramic mosaicing

3 Image Enhancement [figure: original vs. enhanced image pair; credit: Anandan]

4 Panoramic Mosaicing [figure credit: Anandan]

5 Correspondence Approaches
– Optical flow
– Correlation
– Correlation + optical flow
– Any of the above, iterated (e.g. Lucas-Kanade)
– Any of the above, coarse-to-fine

6 Correspondence Approaches (outline repeated; next up: optical flow)

7 Optical Flow for Image Registration
– Compute local matches
– Least-squares fit to motion model (see the sketch below)
– Problem: outliers
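A minimal sketch of the least-squares step, assuming matched point positions (e.g. from optical flow) are already in hand; the name `fit_affine` and the choice of an affine motion model are illustrative, not from the slides:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of an affine motion model mapping src points
    to dst points. src, dst: (N, 2) arrays of matched positions,
    N >= 3. Returns the 2x3 matrix A minimizing
    sum ||A [x y 1]^T - dst||^2."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # (N, 3) design matrix
    # One least-squares solve per output coordinate: X @ A.T ~= dst.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T                                  # 2x3 affine matrix
```

A few gross mismatches can pull this fit arbitrarily far off, which is exactly the outlier problem the next slides address.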

8 Outlier Rejection
– Least squares is very sensitive to outliers

9 Outlier Rejection
– Robust estimation: tolerant of outliers
– In general, methods based on absolute value rather than square: minimize $\sum_i |x_i - f|$, not $\sum_i (x_i - f)^2$

10 Robust Fitting
– In general, hard to compute
– Analytic solutions in a few cases:
– Mean → median
– Solution for fitting lines

11 Approximation to Robust Fitting
– Compute least squares
– Assign weights based on average deviation:
– Simplest: weight = 1 if deviation < 2.5σ, 0 otherwise
– Smoother version: weight = $G_\sigma(\text{deviation})$
– Weighted least squares
– Iterate (a sketch of this loop follows below)
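A sketch of this iterated reweighting for the simplest case, estimating a scalar location `f`; the function name and iteration count are arbitrary choices:

```python
import numpy as np

def irls_mean(x, sigma_cut=2.5, iters=10):
    """Approximation to robust fitting for a scalar location estimate:
    least squares, reweight by deviation, iterate. Uses the hard 0/1
    weights from the slide; np.exp(-d**2 / (2 * sigma**2)) in place of
    the threshold gives the smoother Gaussian-weight variant."""
    x = np.asarray(x, float)
    w = np.ones_like(x)
    for _ in range(iters):
        f = np.sum(w * x) / np.sum(w)             # weighted least squares
        sigma = np.sqrt(np.sum(w * (x - f) ** 2) / np.sum(w))
        w_new = (np.abs(x - f) < sigma_cut * sigma).astype(float)
        if w_new.sum() == 0:                      # degenerate: keep last fit
            break
        w = w_new
    return f
```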

12 Alternative Approximation to Robust Fitting
Least median of squares:
– Select n random subsets of data points
– Perform a least-squares fit on each
– Compute the average deviation of each fit over all the data
– Select the fit whose deviation is the median of the n (a sketch follows below)
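A sketch of the procedure above for a scalar location estimate; the subset size and count are arbitrary, and the median pick works because outlier-contaminated fits land in the tail of the scores, not the middle:

```python
import numpy as np

def random_subset_fit(x, n_subsets=50, subset_size=3, rng=None):
    """Randomized robust fit: least squares (the mean) on random
    subsets, each scored by average absolute deviation over all data;
    return the fit with the median score."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, float)
    fits = []
    for _ in range(n_subsets):
        subset = rng.choice(x, size=subset_size, replace=False)
        f = subset.mean()                     # least-squares fit on subset
        dev = np.mean(np.abs(x - f))          # average deviation on all data
        fits.append((dev, f))
    fits.sort(key=lambda t: t[0])
    return fits[len(fits) // 2][1]            # fit with median deviation
```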

13 Correspondence Approaches (outline repeated; next up: correlation)

14 Correlation / Search Methods
– Assume translation only
– Given images $I_1$, $I_2$, for each translation $(t_x, t_y)$ compute $c(t_x, t_y) = \sum_{x,y} I_1(x, y)\, I_2(x + t_x, y + t_y)$
– Select the translation that maximizes c (a brute-force version is sketched below)
– Depending on window size, local or global
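A brute-force version of this search, assuming grayscale numpy arrays; the search radius and the raw product-sum score are illustrative (a fair comparison would also normalize by the overlap area):

```python
import numpy as np

def best_translation(I1, I2, max_shift=8):
    """Exhaustive correlation search over integer translations:
    score each (tx, ty) by the product-sum over the overlapping
    region and return the maximizer."""
    best, best_t = -np.inf, (0, 0)
    h, w = I1.shape
    for ty in range(-max_shift, max_shift + 1):
        for tx in range(-max_shift, max_shift + 1):
            # Overlapping region of I1 and I2 shifted by (tx, ty).
            a = I1[max(0, ty):h + min(0, ty), max(0, tx):w + min(0, tx)]
            b = I2[max(0, -ty):h + min(0, -ty), max(0, -tx):w + min(0, -tx)]
            c = np.sum(a * b)
            if c > best:
                best, best_t = c, (tx, ty)
    return best_t
```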

15 Cross-Correlation
– Statistical definition of correlation: $c(I_1, I_2) = \sum_{x,y} I_1(x, y)\, I_2(x, y)$
– Disadvantage: sensitive to local variations in image brightness

16 Normalized Cross-Correlation
– Normalize to eliminate brightness sensitivity (see the sketch below): $c(I_1, I_2) = \frac{\sum_{x,y} (I_1 - \bar I_1)(I_2 - \bar I_2)}{\sigma_{I_1}\, \sigma_{I_2}}$, where $\bar I$ is the window mean and $\sigma_I$ the window standard deviation
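A direct implementation of this measure for a pair of equal-size windows; the flat-window guard is an implementation detail, not from the slides:

```python
import numpy as np

def ncc(patch1, patch2):
    """Normalized cross-correlation of two equal-size windows:
    subtract each window's mean and divide by the product of the
    standard deviations, giving a score in [-1, 1] that is invariant
    to affine brightness changes within the window."""
    a = patch1 - patch1.mean()
    b = patch2 - patch2.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    if denom == 0:                 # flat window: correlation undefined
        return 0.0
    return np.sum(a * b) / denom
```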

17 Sum of Squared Differences
– More intuitive measure: $c(I_1, I_2) = -\sum_{x,y} \big(I_1(x, y) - I_2(x, y)\big)^2$
– Negative sign so that higher values mean greater similarity
– Expand: $c = -\sum I_1^2 - \sum I_2^2 + 2 \sum I_1 I_2$, i.e. two constant energy terms plus twice the cross-correlation

18 Local vs. Global
– Correlation with local windows not too expensive
– High cost if window size = whole image
– But the computation looks like convolution: FFT to the rescue!

19 Correlation
– By the correlation theorem, global correlation is a pointwise product in the frequency domain: $I_1 \star I_2 = F^{-1}(F_1 \cdot F_2^*)$, so two forward FFTs, a multiply, and an inverse FFT suffice

20 Fourier Transform with Translation
– If $I_2(x, y) = I_1(x - \Delta x, y - \Delta y)$, then by the Fourier shift theorem $F_2(\omega_x, \omega_y) = F_1(\omega_x, \omega_y)\, e^{-i(\omega_x \Delta x + \omega_y \Delta y)}$

21 Fourier Transform with Translation (continued)
– Therefore, if $I_1$ and $I_2$ differ by a translation, $F_1 / F_2 = e^{i(\omega_x \Delta x + \omega_y \Delta y)}$ is a pure phase term
– So $F^{-1}(F_1 / F_2)$ will have a peak at $(\Delta x, \Delta y)$

22 Phase Correlation
– In practice, use the cross power spectrum: $\frac{F_1 F_2^*}{|F_1 F_2^*|}$
– Compute the inverse FFT, look for peaks (see the sketch below)
– [Kuglin & Hines 1975]
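A sketch of the whole pipeline with numpy's FFT; the small epsilon guarding the division and the wraparound handling are implementation details, not from the slides:

```python
import numpy as np

def phase_correlation(I1, I2):
    """Phase correlation [Kuglin & Hines 1975]: whiten the cross power
    spectrum so only phase (i.e. translation) information remains,
    then locate the peak of its inverse FFT. Returns (dx, dy) such
    that I1(x, y) ~ I2(x - dx, y - dy)."""
    F1 = np.fft.fft2(I1)
    F2 = np.fft.fft2(I2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12           # keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wraparound).
    if dy > I1.shape[0] // 2:
        dy -= I1.shape[0]
    if dx > I1.shape[1] // 2:
        dx -= I1.shape[1]
    return dx, dy
```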

23 Phase Correlation
Advantages:
– Fast computation
– Low sensitivity to global brightness changes (since equally sensitive to all frequencies)

24 Phase Correlation
Disadvantages:
– Sensitive to white noise
– No robust version
– Translation only: extensions to rotation and scale exist, but not to local motion
– Not too bad in practice with small local motions

25 Correspondence Approaches (outline repeated; next up: correlation + optical flow)

26 Correlation plus Optical Flow
– Use e.g. phase correlation to find the average translation (may be large)
– Use optical flow to find local motions

27 Correspondence Approaches (outline repeated; next up: iterated approaches, e.g. Lucas-Kanade)

28 Correspondence Approaches (outline repeated; next up: coarse-to-fine)

29 Image Pyramids
– Pre-filter images to collect information at different scales
– More efficient computation, allows larger motions

30 Image Pyramids [figure credit: Szeliski]

31 Pyramid Creation
– “Gaussian” pyramid
– “Laplacian” pyramid: created from the Gaussian pyramid by subtraction, $L_i = G_i - \mathrm{expand}(G_{i+1})$
(a sketch of both constructions follows below) [Szeliski]
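A sketch of both constructions, using scipy for the lowpass filtering; the filter width and level count are arbitrary choices:

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, levels=4, sigma=1.0):
    """Gaussian pyramid: repeatedly lowpass-filter and subsample by 2."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyr[-1], sigma)
        pyr.append(blurred[::2, ::2])
    return pyr

def laplacian_pyramid(img, levels=4, sigma=1.0):
    """Laplacian pyramid: L_i = G_i - expand(G_{i+1}); the coarsest
    Gaussian level is kept as the residual so the image can be
    reconstructed exactly by expand-and-add."""
    G = gaussian_pyramid(img, levels, sigma)
    L = []
    for i in range(levels - 1):
        up = ndimage.zoom(G[i + 1], 2, order=1)   # expand by 2
        L.append(G[i] - up[:G[i].shape[0], :G[i].shape[1]])
    L.append(G[-1])
    return L
```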

32 Octaves in the Spatial Domain [figure: lowpass (Gaussian) and bandpass (Laplacian) pyramid levels; credit: Szeliski]

33 Blending
– Blend over too small a region: seams
– Blend over too large a region: ghosting

34 Multiresolution Blending
– Different blending regions for different levels in a pyramid [Burt & Adelson]
– Blend low frequencies over large regions (minimize seams due to brightness variations)
– Blend high frequencies over small regions (minimize ghosting)
(a sketch follows below)
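A sketch of Burt-Adelson blending built on the pyramid helpers from the previous sketch; `mask` is assumed to be 1.0 where the first image should show through:

```python
import numpy as np
from scipy import ndimage

def pyramid_blend(img1, img2, mask, levels=4):
    """Multiresolution blending [Burt & Adelson]: blend the Laplacian
    pyramids of the two images with a Gaussian pyramid of the mask, so
    low frequencies mix over wide regions and high frequencies over
    narrow ones, then collapse the blended pyramid."""
    L1 = laplacian_pyramid(img1, levels)
    L2 = laplacian_pyramid(img2, levels)
    M = gaussian_pyramid(mask.astype(float), levels)
    blended = [m * a + (1 - m) * b for a, b, m in zip(L1, L2, M)]
    # Collapse: start at the coarsest level and expand-add upward.
    out = blended[-1]
    for lvl in reversed(blended[:-1]):
        up = ndimage.zoom(out, 2, order=1)
        out = up[:lvl.shape[0], :lvl.shape[1]] + lvl
    return out
```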

35 Pyramid Blending [figure credit: Szeliski]

36 Minimum-Cost Cuts
– Instead of blending high frequencies along a straight line, blend along the line of minimum differences in image intensities

37 Minimum-Cost Cuts: moving object, simple blending → blur [Davis 98]

38 Minimum-Cost Cuts: minimum-cost cut → no blur [Davis 98]

39 Feature Tracking
– Local region
– Take advantage of many frames:
– Prediction, uncertainty estimation
– Noise filtering
– Handle short occlusions

40 Kalman Filtering
– Assume that the results of an experiment (i.e., optical flow) are noisy measurements of the system state
– Model of how the system evolves
– Prediction / correction framework
– Optimal combination of system model and observations
[photo: Rudolf Emil Kalman]
Acknowledgment: much of the following material is based on the SIGGRAPH 2001 course by Greg Welch and Gary Bishop (UNC)

41 Simple Example
– Measurement of a single point: $z_1$, variance $\sigma_1^2$ (uncertainty $\sigma_1$)
– Best estimate of true position: $\hat x = z_1$
– Uncertainty in best estimate: $\hat\sigma = \sigma_1$

42 Simple Example
– Second measurement $z_2$, variance $\sigma_2^2$
– Best estimate of true position: $\hat x = \frac{z_1/\sigma_1^2 + z_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}$
– Uncertainty in best estimate: $\frac{1}{\hat\sigma^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$

43 Online Weighted Average
– Combine successive measurements into a constantly-improving estimate
– Uncertainty decreases over time
– Only need to keep the current measurement, plus the last estimate of state and uncertainty
(a sketch follows below)
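A sketch of this recursion for a scalar state; the class and method names are illustrative. It is exactly the incremental form of the weighted average on the previous slide, and the gain `k` foreshadows the Kalman gain a few slides ahead:

```python
class OnlineEstimate:
    """Online fusion of noisy scalar measurements: keep only the
    current estimate x and its variance p; each new measurement z
    with variance r pulls the estimate by a gain that weighs the two
    uncertainties. This is a 1-D Kalman filter whose process model
    is 'nothing changes'."""
    def __init__(self, z0, r0):
        self.x, self.p = z0, r0

    def update(self, z, r):
        k = self.p / (self.p + r)           # weight on the new measurement
        self.x = self.x + k * (z - self.x)  # weighted average, incremental form
        self.p = (1 - k) * self.p           # uncertainty only decreases
        return self.x
```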

44 Terminology
– In this example, position is the state (in general, any vector)
– State can be assumed to evolve over time according to a system model or process model (in this example, “nothing changes”)
– Measurements arrive (possibly incomplete, possibly noisy) according to a measurement model
– Best estimate of state $\hat x$, with covariance P

45 Linear Models
– For “standard” Kalman filtering, everything must be linear
– System model: $x_k = \Phi_{k-1} x_{k-1} + \xi_{k-1}$
– The matrix $\Phi_k$ is the state transition matrix
– The vector $\xi_k$ represents additive noise, assumed to have covariance Q

46 Linear Models
– Measurement model: $z_k = H x_k + \mu_k$
– The matrix H is the measurement matrix
– The vector $\mu_k$ is measurement noise, assumed to have covariance R

47 PV Model
– Suppose we wish to incorporate velocity: the state becomes $x = \begin{pmatrix} \text{position} \\ \text{velocity} \end{pmatrix}$, with state transition $\Phi = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}$

48 Prediction/Correction
– Predict new state: $\hat x_k^- = \Phi_{k-1} \hat x_{k-1}$, $\ P_k^- = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + Q_{k-1}$
– Correct to take new measurements into account: $\hat x_k = \hat x_k^- + K_k (z_k - H \hat x_k^-)$, $\ P_k = (I - K_k H) P_k^-$
(a sketch of both steps follows below)
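A sketch of these two steps for the linear model of the previous slides, with the PV model as the worked example; all numeric values in the usage snippet are made up for illustration:

```python
import numpy as np

class KalmanFilter:
    """Linear Kalman filter in the slides' notation: state transition
    Phi, process noise covariance Q, measurement matrix H, measurement
    noise covariance R."""
    def __init__(self, Phi, Q, H, R, x0, P0):
        self.Phi, self.Q, self.H, self.R = Phi, Q, H, R
        self.x, self.P = x0, P0

    def predict(self):
        self.x = self.Phi @ self.x
        self.P = self.Phi @ self.P @ self.Phi.T + self.Q

    def correct(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

# PV model: state (position, velocity), position-only measurements.
dt = 1.0
kf = KalmanFilter(
    Phi=np.array([[1.0, dt], [0.0, 1.0]]),
    Q=0.01 * np.eye(2), H=np.array([[1.0, 0.0]]), R=np.array([[0.5]]),
    x0=np.zeros(2), P0=np.eye(2))
for z in [0.9, 2.1, 3.0, 4.2]:           # noisy position readings
    kf.predict()
    kf.correct(np.array([z]))
```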

49 Kalman Gain
– Weighting of process model vs. measurements: $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$
– Compare to what we saw earlier: the weight $\sigma_1^2 / (\sigma_1^2 + \sigma_2^2)$ on the new measurement in the two-measurement example

50 Results: Position-Only Model [figure: tracking while moving vs. still; Welch & Bishop]

51 Results: Position-Velocity Model [figure: tracking while moving vs. still; Welch & Bishop]

52 Extension: Multiple Models
– Simultaneously run many KFs with different system models
– Estimate the probability that each KF is correct
– Final estimate: weighted average

53 Results: Multiple Models [Welch & Bishop]

54 Results: Multiple Models [Welch & Bishop]

55 Results: Multiple Models [Welch & Bishop]

56 Extension: SCAAT
– H can be different at different times:
– Different sensors, types of measurements
– Sometimes measure only part of the state
– Single Constraint At A Time (SCAAT): incorporate results from one sensor at a time
– Alternative: wait until you have measurements from enough sensors to know the complete state (MCAAT)
– MCAAT equations often more complex, but sometimes necessary for initialization

57 UNC HiBall
– 6 cameras, looking at LEDs on the ceiling
– LEDs flash over time
[Welch & Bishop]

58 Extension: Nonlinearity (EKF)
– HiBall state model has nonlinear degrees of freedom (rotations)
– The Extended Kalman Filter allows nonlinearities by:
– Using general functions instead of matrices
– Linearizing the functions to project forward, like a 1st-order Taylor series expansion
– Only have to evaluate Jacobians (partial derivatives), not invert the process/measurement functions
(a sketch of one EKF step follows below)
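A sketch of one EKF step under these assumptions; `f`, `h`, and their Jacobian callbacks are placeholders for an application's actual process and measurement functions, not anything from the slides:

```python
import numpy as np

def ekf_step(x, P, f, F_jac, h, H_jac, Q, R, z):
    """One extended-Kalman-filter step: the nonlinear process f and
    measurement h replace the matrices, and their Jacobians (F_jac,
    H_jac, evaluated at the current estimate) play the roles of Phi
    and H in the covariance updates: a first-order linearization."""
    # Predict through the nonlinear process model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Correct with the linearized measurement model.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```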

59 Other Extensions
– On-line noise estimation
– Using known system input (e.g. actuators)
– Using information from both past and future
– Non-Gaussian noise and particle filtering

