1 Feature Extraction and Matching / Feature Tracking
Sudipta N. Sinha, Sep 19, 2006
2 Outline
Feature Extraction and Matching (for larger motion)
- What are features?
- Tasks:
  - Detection: finding the feature locations
  - Representation: computing a compact descriptor
  - Matching: finding distances in feature space
- Algorithms: Harris corner detector, SIFT
- More complex: wide-baseline correspondence
Tracking (for small motion)
- Track geometric primitives (points, lines, patches, objects, ...) from frame to frame in video
- High temporal coherence
- Typically required in a real-time system
3 Matching comes up in all kinds of problems in computer vision
- Panoramas, mosaics
- Structure from motion ( F, T , ... )
- Object recognition
- More: detecting objects in clutter, motion segmentation, image-based retrieval, video mining (check the papers in the References)
4 The Correspondence Problem and Invariance
Invariance: features must be detected repeatedly at the same locations, and the computed descriptors must be similar, despite the following types of changes observed in two images of the same scene.
5 Point Features (Interest Points)
Goal: detect the same point in each image independently.
Challenges:
- Repeatability is needed in the presence of scale, rotation, affine distortion, and illumination change
- Not all pixels are good candidates: texture-less regions and edges are ambiguous
- Noise affects feature extraction
Examples: Harris corner detector, SIFT
6 Harris Corner Detector
Idea: detect a patch that looks locally unique, i.e. shifting the patch in any direction gives a large change in intensity.
- Texture-less region: no change in any direction
- Edge: no change along one direction (the edge direction)
- Corner: large changes in all directions
8 A symmetric matrix represents an ellipse
The second-moment matrix M is symmetric and positive semi-definite, so its level sets are ellipses whose axes are given by the eigenvectors of M.
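The matrix behind the slide above is the windowed second-moment matrix of image gradients; this is the standard textbook form of the Harris derivation (w is a weighting window, e.g. Gaussian):

```latex
% Intensity change for a small shift (u, v), summed over window W:
E(u, v) \approx \begin{pmatrix} u & v \end{pmatrix} M \begin{pmatrix} u \\ v \end{pmatrix},
\qquad
M = \sum_{(x, y) \in W} w(x, y)
\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}
```

Since M is symmetric positive semi-definite, the level set E(u, v) = const is an ellipse whose axis lengths are determined by the eigenvalues of M.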
9 Harris Corner Detector
Eigenvalue analysis of the 2x2 matrix M: two large eigenvalues indicate a corner, one large eigenvalue an edge, and two small eigenvalues a flat region. The Harris response R = det(M) - k * trace(M)^2 (k around 0.04) scores this without an explicit eigen-decomposition.
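The eigenvalue analysis above can be sketched in NumPy. This is a minimal sketch, not the slides' implementation: it uses a plain box window instead of a Gaussian weighting, and `harris_response` is an illustrative name (requires NumPy >= 1.20 for `sliding_window_view`):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per window.

    M is the 2x2 second-moment matrix of image gradients summed over a
    win x win window; large positive R indicates a corner, negative R an
    edge, and R near zero a texture-less region.
    """
    Iy, Ix = np.gradient(np.asarray(img, dtype=float))  # axis 0 = y, axis 1 = x
    # Sum the gradient products over every win x win window.
    Sxx = sliding_window_view(Ix * Ix, (win, win)).sum(axis=(-1, -2))
    Syy = sliding_window_view(Iy * Iy, (win, win)).sum(axis=(-1, -2))
    Sxy = sliding_window_view(Ix * Iy, (win, win)).sum(axis=(-1, -2))
    det = Sxx * Syy - Sxy ** 2   # product of the eigenvalues of M
    trace = Sxx + Syy            # sum of the eigenvalues of M
    return det - k * trace ** 2
```

In practice corners are then taken as local maxima of R above a threshold, with non-maximum suppression.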
10 Corners: Feature Descriptors and Matching
Simple descriptor: convert a patch of n x n pixels centered at the feature into a vector.
Matching: SAD, SSD, ZMNCC.
Invariance:
- Translation? Yes.
- Rotation? No, but the image patch can be re-sampled using the eigenvector pair as the local coordinate frame.
- Scale and affine? No.
- Brightness change? Yes, by normalizing image intensity (ZMNCC).
Each feature point is then a vector in a high-dimensional feature space.
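The simple descriptor and the ZMNCC score above can be sketched in NumPy (a sketch only; the helper names are illustrative, not from the slides):

```python
import numpy as np

def patch_descriptor(img, y, x, n=7):
    """Flatten the n x n patch centered at (y, x) into a vector.
    Assumes the patch lies fully inside the image."""
    r = n // 2
    return np.asarray(img, dtype=float)[y - r:y + r + 1, x - r:x + r + 1].ravel()

def zmncc(a, b):
    """Zero-mean normalized cross-correlation of two descriptors.

    Subtracting the mean and dividing by the norms makes the score
    invariant to an affine brightness change b = gain * a + offset;
    +1 is a perfect match, -1 a perfect anti-match.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

Matching then amounts to picking, for each descriptor in one image, the candidate in the other image with the highest ZMNCC (or lowest SSD/SAD).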
19 Wide Baseline Matching: elliptical and parallelogram features (Tuytelaars and Van Gool, IJCV 2004)
Anchor point: traditional corners
20 Wide Baseline Matching: elliptical and parallelogram features (Tuytelaars and Van Gool, IJCV 2004)
Anchor point: local intensity maxima
21 Tracking Corners: The KLT Algorithm
Main idea: assuming brightness constancy, find the new positions of 'salient' image points in the second image, where the motion is small.
Steps:
1. Detect salient points to track in the current frame
2. Track those features into the next frame
Tracking could be done by search (template matching), but the KLT algorithm solves for the displacement analytically, hence it is faster.
22 KLT Equations: Assumption of Brightness Constancy
Find a displacement d such that the error over a tracking window W is minimized:

  e(d) = sum over x in W of [ J(x + d) - I(x) ]^2

where I is the first image and J the second.
24 KLT Equations
A symmetric form was later proposed by Tomasi:

  e(d) = sum over x in W of [ J(x + d/2) - I(x - d/2) ]^2

To estimate d, differentiate e with respect to d.
25 KLT Equations
Substituting first-order Taylor-series expansions for J(.) and I(.), setting the derivative to zero at the minimum, and re-arranging, we get a linear system of equations for d:

  Z d = e,  with  Z = sum of g g^T  and  e = sum of [ I(x) - J(x) ] g

where g is the average of the spatial gradients of I and J over the window.
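The linear system above can be sketched as a single Gauss-Newton step in NumPy. This is a sketch under the slides' assumptions (small motion, window inside both images); `klt_step` is an illustrative name:

```python
import numpy as np

def klt_step(I, J, y, x, win=7):
    """One KLT update for the window centered at (y, x).

    Solves Z d = e with Z = sum g g^T and e = sum (I - J) g, where g is
    the average of the spatial gradients of I and J (symmetric form).
    Returns the displacement d = (dx, dy); assumes Z is well conditioned,
    i.e. the window sits on a good corner.
    """
    r = win // 2
    Iw = np.asarray(I, dtype=float)[y - r:y + r + 1, x - r:x + r + 1]
    Jw = np.asarray(J, dtype=float)[y - r:y + r + 1, x - r:x + r + 1]
    gIy, gIx = np.gradient(Iw)
    gJy, gJx = np.gradient(Jw)
    g = 0.5 * np.stack([(gIx + gJx).ravel(), (gIy + gJy).ravel()])  # 2 x N
    Z = g @ g.T                    # 2x2 normal matrix (the 'Z' above)
    e = g @ (Iw - Jw).ravel()      # right-hand side
    return np.linalg.solve(Z, e)   # displacement (dx, dy)
```

In practice this step is iterated (warping J by the current estimate of d) until d converges, and features whose Z has a small eigenvalue are rejected as untrackable.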
27 Multiscale and Iterative KLT
- Build an image pyramid and track coarse to fine. This increases the effective spatial range within which features can be tracked.
- View-dependent effects: if the surface patch is small, large perspective distortions can be approximated by an affine transformation (affine KLT).
- Modeling brightness change as gain + offset (2 more parameters) gives invariance to illumination.
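The pyramid construction above can be sketched as follows (a sketch only: 2x2 block averaging stands in for the usual Gaussian blur + subsample, and `build_pyramid` is an illustrative name):

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Image pyramid, finest level first.

    Each level halves the resolution by 2x2 block averaging. A feature
    that moves d pixels at level 0 moves only d / 2**L pixels at level L,
    which is why coarse-to-fine tracking extends the effective spatial
    range of KLT.
    """
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2  # trim odd edges
        pyr.append(a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr
```

Coarse-to-fine tracking then runs a KLT step at the coarsest level, doubles the resulting displacement, and uses it as the initial guess at the next finer level.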
28 Acknowledgments
Slides and figures were taken from:
- SIFT MATLAB tutorial [Thomas F. El-Maraghi, May 2004]
- Lecture notes by Bill Freeman
- Lecture notes on tracking: UWA Computer Science, CITS 4240
- David Lowe's SIFT papers
- Stan Birchfield's article on the symmetric version of the KLT equations
29 References and Papers
- Stan Birchfield. KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker.
- Bruce D. Lucas and Takeo Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence, 1981.
- David Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal of Computer Vision, 60(2):91-110, 2004.
- J. Matas, O. Chum, U. Martin, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In Proc. British Machine Vision Conference, volume 1, pages 384-393, Sep 2002.
- K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. Int. Journal of Computer Vision, 1(60):63-86, 2004.
- T. Tuytelaars and L. Van Gool. Matching widely separated views based on affine invariant regions. Int. Journal of Computer Vision, 1(59):61-85, 2004.
- K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. Technical report, accepted to IJCV, 2005.
- KLT src code:
- SIFT Matlab code: see Link at