Final Exam Review CS485/685 Computer Vision Prof. Bebis.

Final Exam
The final exam will be comprehensive. It covers:
–Midterm exam material
–SIFT
–Object recognition
–Face recognition using eigenfaces
–Camera parameters
–Camera calibration
–Stereo

SIFT Feature Computation
Steps:
–Scale-space extrema detection (how is it different from Harris-Laplace? different parameters)
–Keypoint localization (need to know the main ideas, no equations; two thresholds, which ones?)
–Orientation assignment (how are the histograms built? multiple peaks?)
–Keypoint descriptor (how are the histograms built? partial voting, main parameters, invariance to illumination changes)
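The extrema-detection step above can be sketched in a few lines: in a difference-of-Gaussians stack, a pixel is a keypoint candidate only if it is strictly larger or strictly smaller than all 26 neighbors in its 3x3x3 scale-space cube. This is a minimal illustrative sketch (the function name and toy data are assumptions, not from the lecture):

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """Return True if dog[s, y, x] is strictly larger or strictly smaller
    than all 26 neighbors in the 3x3x3 scale-space cube around it."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
    center = cube[1, 1, 1]
    neighbors = np.delete(cube.ravel(), 13)   # drop the center itself
    return center > neighbors.max() or center < neighbors.min()

# Toy 3-level difference-of-Gaussians stack with one bright blob.
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 10.0
print(is_extremum(dog, 1, 2, 2))   # True: extremum across scale and space
print(is_extremum(dog, 1, 2, 3))   # False: a flat neighbor
```

In the full algorithm this test runs over every pixel of every DoG level; the surviving candidates are then refined and thresholded in the keypoint-localization step.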

SIFT Features
Properties:
–Scale and rotation invariant
–Highly distinctive
–Partially invariant to 3D viewpoint and illumination changes
–Fast and efficient computation
–Main parameters?
Matching:
–How do we match SIFT features?
–How do we evaluate the performance of a feature matcher?
Applications
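The standard way to match SIFT descriptors is nearest-neighbor search with Lowe's ratio test: accept a match only when the nearest descriptor is clearly closer than the second nearest. A minimal sketch (the 0.8 threshold is the commonly cited value; the toy descriptors are made up):

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 against desc2, keeping a match only
    if the nearest neighbor is much closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

# Toy 2D "descriptors" (real SIFT descriptors are 128-dimensional).
desc1 = np.array([[1.0, 0.0], [0.0, 1.0]])
desc2 = np.array([[1.0, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(match_ratio_test(desc1, desc2))   # [(0, 0), (1, 1)]
```

Evaluating a matcher typically means sweeping this threshold and plotting correct versus false matches (an ROC-style curve).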

SIFT Variations
–PCA-SIFT
–SURF
–GLOH
Need to know the key ideas and steps (no need to remember exact parameter values):
–Similarities/differences with SIFT
–Strengths/weaknesses

Object Recognition
Model-based vs. category-specific recognition:
–Preprocessing and recognition
Challenges?
–Photometric effects, scene clutter, changes in shape (e.g., non-rigid objects), viewpoint changes
Requirements?
–Invariance, robustness
Performance criteria?
–Efficiency (time and memory), accuracy

Object Recognition (cont’d)
Representation schemes (advantages/disadvantages):
–Object-centered (3D/3D or 3D/2D matching)
–Viewer-centered (2D/2D matching)
Matching schemes (advantages/disadvantages):
–Geometry-based
–Appearance-based

Object Recognition (cont’d)
Main steps in matching:
–Hypothesis generation
–Hypothesis verification
Efficient hypothesis generation:
–Which scene features to choose?
–How to organize and search the model database?

Object Recognition Methods
–Alignment
–Pose clustering
–Geometric hashing
Need to know the main ideas and steps.

Object Recognition using SIFT
Main ideas and steps:
–Perform nearest-neighbor search
–Find clusters of features (pose clustering)
–Perform verification
Practical issues:
–Approximate nearest neighbors

Bag of Features
Origins of the bag-of-features method
Computing bag of features:
–Feature extraction
–Learn a “visual vocabulary” (e.g., k-means clustering)
–Quantize features using the “visual vocabulary”
–Represent images by frequencies of “visual words” (bags of features)
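The last two steps above, quantization and histogramming, can be sketched as follows. Given a vocabulary of cluster centers (e.g., from k-means), each local feature is assigned to its nearest visual word and the image becomes a normalized word-frequency histogram. The toy vocabulary and features below are illustrative:

```python
import numpy as np

def bag_of_features(features, vocabulary):
    """Quantize local features against a visual vocabulary and return a
    normalized histogram of visual-word frequencies."""
    # Distance from every feature to every cluster center (visual word).
    dists = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)                  # nearest word per feature
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy vocabulary of 3 visual words and 4 features from one image.
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = np.array([[0.1, 0.0], [0.9, 0.1], [1.1, 0.0], [0.0, 0.9]])
print(bag_of_features(feats, vocab))   # [0.25 0.5  0.25]
```

These histograms are what gets fed to the classifier in the categorization slide that follows.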

Bag of Features (cont’d)
Object categorization using bags of features:
–Represent objects using bags of features
–Classification (NN, kNN, SVM)

PCA
Need to know the steps and equations:
–What criterion does PCA minimize?
–How is the “best” low-dimensional space determined using PCA?
–What is the geometric interpretation of PCA?
–Practical issues (e.g., choosing K, computing the error, standardization)
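The PCA steps (center the data, form the covariance matrix, keep the top-K eigenvectors, project) can be sketched directly in NumPy. This is a generic sketch, not the course's notation; the synthetic data varies mostly along the direction (1, 1), so the first principal component should recover roughly that direction:

```python
import numpy as np

def pca(X, k):
    """Project centered data onto the top-k eigenvectors of the sample
    covariance matrix (the directions of maximum variance)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]     # top-k by variance
    W = eigvecs[:, order]
    return Xc @ W, W, mean

rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, t]) + 0.01 * rng.normal(size=(100, 2))
proj, W, mean = pca(X, 1)
print(np.abs(W[:, 0]))   # approximately [0.707, 0.707]
```

Choosing K in practice means keeping enough eigenvectors to retain a target fraction of the total eigenvalue sum (the preserved variance).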

Using PCA for Face Recognition
Represent faces using PCA; need to know the steps and practical issues (e.g., AA^T vs. A^T A)
Face recognition using PCA (i.e., eigenfaces):
–DIFS (distance in face space)
Face detection using PCA:
–DFFS (distance from face space)
Limitations
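The AA^T vs. A^T A issue above is the key practical trick in eigenfaces: with M training images of N pixels (M much smaller than N), the N x N matrix AA^T is too large to eigendecompose, but A^T A is only M x M and shares its nonzero eigenvalues. If v is an eigenvector of A^T A with eigenvalue lambda, then Av is an eigenvector of AA^T with the same lambda. A small numerical check (the sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 5                  # 1000 "pixels", 5 "face images"
A = rng.normal(size=(N, M))     # columns = mean-subtracted face images

vals, V = np.linalg.eigh(A.T @ A)   # cheap: only M x M
U = A @ V                           # lift eigenvectors to pixel space
U /= np.linalg.norm(U, axis=0)      # normalized "eigenfaces"

# Verify: each column of U is an eigenvector of A A^T with eigenvalue vals[i].
big = A @ A.T
for i in range(M):
    assert np.allclose(big @ U[:, i], vals[i] * U[:, i])
print("eigenvector trick verified")
```

Recognition then projects a probe face onto U and compares coefficients (DIFS); detection checks how much of the probe lies outside the span of U (DFFS).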

Camera Parameters
Reference frames; what are they?
–World
–Camera
–Image plane
–Pixel plane
Perspective projection:
–Should know how to derive the equations
–Matrix notation
–Properties of perspective projection
–Vanishing points, vanishing lines
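The perspective-projection equations x = fX/Z, y = fY/Z have the following matrix form in homogeneous coordinates; the sketch below is generic (numbers are illustrative):

```python
import numpy as np

def perspective_project(X_cam, f):
    """Project a 3D point in the camera frame onto the image plane:
    x = f X / Z, y = f Y / Z, written as a homogeneous matrix product."""
    M = np.array([[f, 0, 0, 0],
                  [0, f, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    Xh = np.append(X_cam, 1.0)     # homogeneous 3D point
    xh = M @ Xh
    return xh[:2] / xh[2]          # perspective division by depth Z

print(perspective_project(np.array([2.0, 1.0, 4.0]), f=1.0))   # [0.5  0.25]
```

The division by Z is what makes perspective projection nonlinear in Euclidean coordinates and is why distant objects appear smaller and parallel lines meet at vanishing points.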

Camera Parameters
Orthographic projection:
–How is it related to perspective projection?
–Study the equations
–Matrix notation
–Properties
Weak perspective projection:
–How is it related to perspective projection?
–Study the equations
–Matrix notation
–Properties

Camera Parameters (cont’d)
Extrinsic camera parameters:
–What are they and what is their meaning?
–Study the equations
Intrinsic camera parameters:
–What are they and what is their meaning?
–Study the equations
Projection matrix:
–What does it represent?
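The projection matrix ties the two parameter sets together: P = K [R | t], where K holds the intrinsics (focal length, principal point) and R, t the extrinsics (world-to-camera rotation and translation). A small sketch with made-up numbers:

```python
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0        # intrinsics: focal length, principal point
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0,  1]])
R = np.eye(3)                          # extrinsics: camera aligned with world
t = np.array([0.0, 0.0, 5.0])          # world origin 5 units in front

P = K @ np.hstack([R, t[:, None]])     # 3x4 projection matrix

Xw = np.array([0.0, 0.0, 0.0, 1.0])    # world origin in homogeneous coords
x = P @ Xw
x = x[:2] / x[2]
print(x)   # the world origin projects to the principal point: [320. 240.]
```

Camera calibration (next slide) is essentially the inverse problem: recover K, R, and t from known 3D-to-2D correspondences.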

Camera Calibration
What is the goal of camera calibration and how is it performed?
Camera calibration using the projection matrix (study the equations for step 1 only; you should remember how this method works in general)
Direct parameter calibration (do not memorize the equations but remember how they work); how is the orthogonality constraint on the rotation matrix enforced?

Stereo
What is the goal of stereo vision?
Triangulation principle
Familiarity with terminology (e.g., baseline, epipolar plane, epipolar lines, epipoles, disparity)
Two main problems of stereo (i.e., correspondence and reconstruction)
Recovering depth from disparity; study the proof
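The depth-from-disparity result for a rectified pair with parallel optical axes is Z = fB/d, where f is the focal length (in pixels), B the baseline, and d = x_left - x_right the disparity. A one-line sketch (the numbers are illustrative, not from the course):

```python
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Depth of a point from its disparity in a rectified stereo pair:
    Z = f * B / d (similar triangles across the two cameras)."""
    return f_pixels * baseline_m / disparity_pixels

# Assumed example: f = 700 px, baseline = 0.1 m, disparity = 35 px.
print(depth_from_disparity(700.0, 0.1, 35.0))   # 2.0 (meters)
```

The inverse relationship is worth remembering: large disparity means a near point, and depth resolution degrades quadratically with distance.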

Correspondence Problem
What is the correspondence problem and why is it difficult?
Main methods: intensity-based, feature-based
–How do intensity-based methods work?
–Main parameters of intensity-based methods; how can we choose them?
–How do feature-based methods work?
–Comparison between intensity-based and feature-based methods
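A classic intensity-based method is window-based block matching: slide a window along the epipolar line (the same row in rectified images) and pick the disparity minimizing the sum of squared differences (SSD). The main parameters mentioned above appear directly in the sketch (window size, disparity search range); the toy images are synthetic:

```python
import numpy as np

def ssd_disparity(left, right, y, x, win, max_disp):
    """Find the disparity at (y, x) by minimizing SSD between a window in
    the left image and candidate windows along the same row on the right."""
    h = win // 2
    patch = left[y-h:y+h+1, x-h:x+h+1]
    best_d, best_ssd = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - h < 0:
            break                       # candidate window off the image
        cand = right[y-h:y+h+1, x-d-h:x-d+h+1]
        ssd = float(((patch - cand) ** 2).sum())
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

# Toy rectified pair: the right image is the left shifted by 2 pixels.
rng = np.random.default_rng(2)
left = rng.normal(size=(9, 12))
right = np.roll(left, -2, axis=1)       # true disparity = 2 px everywhere
print(ssd_disparity(left, right, y=4, x=6, win=3, max_disp=4))   # 2
```

The usual trade-off: small windows give noisy matches, large windows smooth over depth discontinuities; feature-based methods avoid this by matching only distinctive points, at the cost of a sparse result.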

Epipolar Geometry
Stereo parameters: extrinsic and intrinsic
What is the epipolar constraint and why is it important?
How is epipolar geometry represented?
–Essential matrix
–Fundamental matrix

Essential Matrix
What is the essential matrix?
Properties of the essential matrix
Study the equations
Equation satisfied by corresponding points
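The key equations: E = [t]x R (the cross-product matrix of the translation times the rotation), and corresponding normalized image points satisfy p2^T E p1 = 0. A numerical check under an assumed relative pose (pure translation along x), using the convention X2 = R X1 + t:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

R = np.eye(3)                         # second camera: no rotation
t = np.array([1.0, 0.0, 0.0])         # baseline along the x axis
E = skew(t) @ R                       # essential matrix

# Project one 3D point into both cameras (normalized coordinates).
X1 = np.array([0.5, 0.2, 4.0])        # point in the first camera frame
p1 = X1 / X1[2]                       # (x, y, 1)
X2 = R @ X1 + t
p2 = X2 / X2[2]

print(p2 @ E @ p1)                    # ~0: the epipolar constraint holds
```

Properties to recall: E has rank 2, its two nonzero singular values are equal, and it encodes only the extrinsic parameters (calibrated cameras).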

Fundamental Matrix
What is the fundamental matrix?
Properties of the fundamental matrix
Study the equations
Equation satisfied by corresponding points

Eight-Point Algorithm
What is it useful for?
Study the steps
How is the rank-2 constraint enforced?
Normalized eight-point algorithm
How do we estimate the epipoles and epipolar lines using the fundamental matrix?
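The steps above can be sketched directly: stack one linear equation per correspondence (from p2^T F p1 = 0), take the null vector of the system via SVD, then enforce rank 2 by zeroing the smallest singular value of F. The test data below uses a synthetic rectified pair (corresponding points share the same row), whose true F is known up to scale; keeping coordinates in [-1, 1] plays the role of the normalization step:

```python
import numpy as np

def eight_point(pts1, pts2):
    """Basic eight-point algorithm: estimate F from >= 8 correspondences
    by solving A f = 0 via SVD, then enforce the rank-2 constraint."""
    A = np.array([[x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)           # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                         # zero smallest singular value: rank 2
    return U @ np.diag(S) @ Vt

# Synthetic rectified stereo: matches differ only by horizontal disparity,
# so the true F is [[0,0,0],[0,0,-1],[0,1,0]] up to scale.
rng = np.random.default_rng(3)
n = 12
pts1 = rng.uniform(-1, 1, size=(n, 2))
disp = rng.uniform(0.1, 0.5, size=n)
pts2 = pts1 + np.column_stack([disp, np.zeros(n)])

F = eight_point(pts1, pts2)
F = F / F[2, 1]                        # fix the arbitrary scale
print(np.round(F, 6))                  # close to [[0,0,0],[0,0,-1],[0,1,0]]
```

Given F, the epipoles are its left and right null vectors, and the epipolar line of a point p1 in the second image is simply l2 = F p1.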

Rectification
What is the purpose of rectification and why is it useful?
Study the steps

Stereo Reconstruction
Three cases:
–Known extrinsic and intrinsic parameters
–Known intrinsic parameters only
–Unknown extrinsic and intrinsic parameters
What information can be recovered in each case?
What are the main steps of the first two methods? (do not memorize the equations)