CMSC5711 Revision (1) (v.7.b) revised


CMSC5711 Revision (1) (v.7.b) revised 17.04.20

Q1: A 3D point X is at (in meters) in the world coordinate system. The parameters of the camera are as follows:
- The world coordinate system and the camera coordinate system are the same.
- The focal length is F = 6 mm.
- The horizontal pixel width is sx and the vertical pixel width is sy.
- The CCD sensor size is 10 mm x 10 mm.
- The resolution of the image captured by the camera is 500 x 500 pixels.
- The image centre is at (4.8 mm, 5.1 mm) from the bottom-right corner of the image plane.
- The origin (1,1) of the image is at the bottom-right corner; the x-coordinate increases from right to left and the y-coordinate increases from bottom to top.

(a) Find sx and sy in meters.
(b) Find the image centre (Ox, Oy) in pixels.
(c) Find the focal length in pixels.
(d) Find the 2D image position of the point X in pixels.

CMSC5711 revision 1 v.8b
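The arithmetic behind Q1 can be sketched as follows. This is a minimal pinhole-model sketch; the 3D point's coordinates are omitted in the transcript, so the point (X, Y, Z) below is a hypothetical stand-in, and the simple u = f*X/Z + Ox form ignores the question's reversed axis directions (which only flip signs).

```python
# Sketch of Q1's intrinsic-parameter arithmetic (pinhole camera model).
F = 6e-3          # focal length, metres
sensor = 10e-3    # CCD sensor side length, metres
res = 500         # image resolution, pixels per side

sx = sensor / res            # (a) horizontal pixel width, metres
sy = sensor / res            #     vertical pixel width, metres
Ox = 4.8e-3 / sx             # (b) image centre, pixels
Oy = 5.1e-3 / sy
f_pixels = F / sx            # (c) focal length, pixels

# hypothetical 3D point (its coordinates are omitted in the transcript)
X, Y, Z = 0.5, 0.6, 1.2
u = f_pixels * X / Z + Ox    # (d) image position, pixels
v = f_pixels * Y / Z + Oy    # axis directions/signs follow the question's
                             # right-to-left, bottom-to-top convention
print(sx, Ox, Oy, f_pixels, u, v)
```

With these numbers, one pixel is 10 mm / 500 = 0.02 mm wide, so the focal length of 6 mm corresponds to 300 pixels.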

Q2: A 3D point X is at [0.5, 0.6, 1.2]^T meters from the world centre.
(a) The 3D point X is rotated (by R1) around the world centre and then translated (by T = [Tx, Ty, Tz]^T) to a new position X'. The rotation angles are (θ1 = 1.2, θ2 = 0, θ3 = 0.57) in degrees, rotating around the X, Y and Z axes respectively, and the translation T is given in meters. Assume the world coordinate system and the camera coordinate system are the same. Calculate the position of the 3D point X' in meters. (Hint: the entries of R1 are in radians, so you need to convert the angles to radians before use.)
(b) If the rotation R1 is not around the world centre but around a 3D point Y instead (where Y is [0.45, 0.25, 1.05]^T meters from the world centre), and all other parameters remain unchanged, calculate the position of the 3D point X' in meters again.
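The two cases in Q2 can be sketched numerically. Two assumptions are made below: the combined rotation order is taken as R1 = Rz·Ry·Rx (the transcript does not fix the order), and T's numeric value is omitted in the transcript, so a hypothetical T is used.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# convert the angles from degrees to radians first (the hint)
t1, t2, t3 = map(math.radians, (1.2, 0.0, 0.57))
R1 = matmul(rot_z(t3), matmul(rot_y(t2), rot_x(t1)))  # assumed order

X = [0.5, 0.6, 1.2]
T = [0.1, 0.2, 0.3]   # hypothetical translation (values omitted in transcript)

# (a) rotation about the world centre:  X' = R1*X + T
Xa = [r + t for r, t in zip(matvec(R1, X), T)]

# (b) rotation about pivot Y instead:   X' = R1*(X - Y) + Y + T
Y = [0.45, 0.25, 1.05]
Xb = [r + y + t
      for r, y, t in zip(matvec(R1, [x - y for x, y in zip(X, Y)]), Y, T)]
print(Xa, Xb)
```

The pivot formula in (b) follows from shifting the origin to Y, rotating, and shifting back before applying T.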

Q3: An image X1 and a mask mask1 are shown below.
(a) Find the convolution result of X1 and mask1; the result should include the partial-overlap cases.
(b) Describe how to find the edge image of a gray-level input image using convolution with the Sobel kernel masks.
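A sketch of the "full" convolution that part (a) asks for (output positions include every partial overlap), plus the Sobel masks for part (b). X1 and mask1 are not reproduced in the transcript, so small made-up arrays are used here.

```python
def conv2d_full(img, k):
    """Full 2D convolution: the output is (H+kh-1) x (W+kw-1), covering
    every position where the (flipped) kernel overlaps the image at all."""
    H, W = len(img), len(img[0])
    kh, kw = len(k), len(k[0])
    out = [[0] * (W + kw - 1) for _ in range(H + kh - 1)]
    for i in range(len(out)):
        for j in range(len(out[0])):
            s = 0
            for m in range(H):
                for n in range(W):
                    if 0 <= i - m < kh and 0 <= j - n < kw:
                        s += img[m][n] * k[i - m][j - n]   # kernel flip
            out[i][j] = s
    return out

X1 = [[1, 2], [3, 4]]            # hypothetical image
mask1 = [[0, 1], [1, 0]]         # hypothetical mask
print(conv2d_full(X1, mask1))

# (b) For the edge image: convolve the gray-level image with the Sobel
# masks Gx and Gy, then threshold the gradient magnitude sqrt(gx^2+gy^2).
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
sobel_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
```

A useful sanity check: convolving a single-pixel image of value 1 with any kernel reproduces the kernel itself.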

Q4: An original gray-level image has M = 100 rows and N = 100 columns. The gray-level resolution (L) of each pixel is 6. r(k) is the gray level of index k, N(k) is the number of pixels that have level r(k), and Pr(r(k)) is the probability of a pixel in the image having gray level r(k). After histogram normalization, S(k) is the normalized gray level of index k.
(a) Discuss why histogram normalization is important in image processing.
(b) Based on the following table (you may copy it to your answer book first), fill in the blanks.
(c) Discuss the term "histogram back projection" and its applications.
(d) Find the histogram back projection of a pixel with gray level 3 in the original image.

r(k)   N(k)   Pr(r(k))   S(k)   Round off (S(k))
r(0)   135
r(1)   278
r(2)   4521
r(3)   244
r(4)   3987
r(5)   1352
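The table in part (b) can be filled in with the usual histogram-normalization (histogram-equalization) formula, S(k) = (L-1) · Σ_{j≤k} Pr(r(j)). A sketch using the N(k) values from the table:

```python
# Histogram normalization/equalization arithmetic for Q4's table.
L = 6
Nk = [135, 278, 4521, 244, 3987, 1352]   # N(k) from the table
total = sum(Nk)                          # total pixel count
Pr = [n / total for n in Nk]             # Pr(r(k)) = N(k) / total

S, cum = [], 0.0
for p in Pr:
    cum += p                             # cumulative distribution
    S.append((L - 1) * cum)              # S(k)
rounded = [round(s) for s in S]          # Round off (S(k))

for k in range(L):
    print(f"r({k}): N={Nk[k]}  Pr={Pr[k]:.4f}  S={S[k]:.3f}  -> {rounded[k]}")
```

Because S(k) is (L-1) times a cumulative probability, it is non-decreasing and the last rounded value is always L-1 = 5.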

Q5: In a stereo system, a 3D point P1 = [X1, Y1, Z1]^T is in the left (reference) camera coordinate system and its projection in the left image is q1. The same point is P2 = [X2, Y2, Z2]^T in the right camera coordinate system, and its projection in the right image is q2. q1 and q2 are in homogeneous coordinates. P1 and P2 are related by P2 = R*P1 + T, where R and T are the rotation and translation of the right camera respectively.
(a) Draw a diagram depicting the two cameras, the 3D point P1, the image projections of P1 in both images, the epipoles, and the epipolar lines in the right image of two left-image feature points 'a' and 'b' (you are free to select any two image points).
(b) Write the essential matrix E in terms of R and T.
(c) Write the relation among q1, q2, T and R.
(d) Discuss the relation between the essential matrix E and the fundamental matrix F. Use formulas to illustrate your answer.
(e) Describe the algorithm to find F.
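A numerical sketch of the epipolar constraint behind parts (b) and (c): with P2 = R·P1 + T, the essential matrix is E = [T]×·R, and P2ᵀ·E·P1 = 0. The R, T and P1 values below are made up for illustration.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def skew(t):
    # cross-product (skew-symmetric) matrix [t]x, so skew(t)@v == t x v
    return [[0, -t[2], t[1]],
            [t[2], 0, -t[0]],
            [-t[1], t[0], 0]]

a = math.radians(5)                  # small rotation about Z (assumed)
R = [[math.cos(a), -math.sin(a), 0],
     [math.sin(a),  math.cos(a), 0],
     [0, 0, 1]]
T = [0.2, 0.0, 0.05]                 # hypothetical baseline

P1 = [0.5, 0.6, 1.2]
P2 = [r + t for r, t in zip(matvec(R, P1), T)]

E = matmul(skew(T), R)               # (b) essential matrix E = [T]x * R
constraint = sum(p2 * e for p2, e in zip(P2, matvec(E, P1)))
print(constraint)                    # epipolar constraint, should be ~0
# (d) With intrinsics K1, K2 the fundamental matrix is F = K2^-T * E * K1^-1,
#     so q2^T F q1 = 0 holds for pixel coordinates.
```

The constraint is zero because T × (R·P1) is perpendicular to both T and R·P1, hence to P2 = R·P1 + T.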


Q7: Find the edge image of image (I) if we use the Prewitt masks and the threshold is set to 0.5. Show your calculation steps. (Consider only the cases where the mask and the image fully overlap.)
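The Prewitt procedure can be sketched as follows, using only fully overlapped ("valid") positions as the question requires. Image (I) is not reproduced in the transcript, so a hypothetical vertical-step image is used, and the 0.5 threshold is assumed to apply to the gradient magnitude normalized by its maximum.

```python
import math

prewitt_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
prewitt_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def conv2d_valid(img, k):
    """Valid 2D convolution (kernel flipped, full overlap only)."""
    kh, kw = len(k), len(k[0])
    H, W = len(img), len(img[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = sum(img[i + m][j + n] * k[kh - 1 - m][kw - 1 - n]
                    for m in range(kh) for n in range(kw))
            row.append(s)
        out.append(row)
    return out

I = [[0, 0, 1, 1, 1] for _ in range(5)]   # hypothetical step image
gx = conv2d_valid(I, prewitt_x)
gy = conv2d_valid(I, prewitt_y)
mag = [[math.hypot(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(gx, gy)]
peak = max(max(r) for r in mag)
edges = [[1 if m / peak >= 0.5 else 0 for m in r] for r in mag]
print(edges)
```

For this step image gy is zero everywhere, so the edge map simply marks the columns where the horizontal gradient is strong.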

Q8: As shown above, an object has pixels of gray level 1 or above, and the empty cells have gray level 0. A mean-shift algorithm is applied to find the position of the object, which is of size around 5x5 with gray level 1 or above. That is, you need to find the centre of a 5x5 window covering the pixels of the object. The initial window is centred at (x,y) = (4,5) as shown above.
(a) Describe the procedure of finding the object by a mean-shift algorithm.
(b) Find the location of the centre of the 5x5 box at each step of the mean-shift algorithm. Round off numbers to integers during your calculations.

CMSC5711 revision 1 v.8b
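The mean-shift iteration in part (a) can be sketched as: move the 5x5 window to the intensity-weighted centroid of the pixels it covers, round to integers, and repeat until the centre stops moving. The grid in the original figure is not reproduced here, so a made-up binary blob is used.

```python
def mean_shift(img, cx, cy, half=2):
    """Iterate a (2*half+1)-square window to the intensity-weighted
    centroid of its contents; returns the converged integer centre."""
    while True:
        sw = sx = sy = 0
        for y in range(cy - half, cy + half + 1):
            for x in range(cx - half, cx + half + 1):
                if 0 <= y < len(img) and 0 <= x < len(img[0]):
                    w = img[y][x]
                    sw += w
                    sx += w * x
                    sy += w * y
        if sw == 0:                       # window saw no object pixels
            return cx, cy
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (cx, cy):          # converged
            return cx, cy
        cx, cy = nx, ny

# hypothetical 10x10 grid with a 3x3 blob of gray level 1 centred at (7, 7)
img = [[0] * 10 for _ in range(10)]
for y in range(6, 9):
    for x in range(6, 9):
        img[y][x] = 1

print(mean_shift(img, 4, 5))   # start at the question's initial centre (4,5)
```

Starting at (4,5), the window first catches the blob's edge, shifts toward it, and settles on the blob centre in a couple of steps.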