PA2920 SPACECRAFT IMAGING SYSTEMS GROUP S1 Presented by Leon Hicks

PROBLEMS:
Calibration of the cameras to calculate distances
Focal length
Actual pixel size

To find the factor (or focal length), the offset needs to be measured in pixels. The offset is horizontal: it is the measured displacement of a chosen point on the object from its position in the left camera image to its position in the right camera image (for our experiment the chosen point is the centre cross on a box).

tan θ₁ = offset / factor
θ₂ = π/2 − θ₁
tan θ₂ = distance / eye separation
→ factor = offset / tan(π/2 − tan⁻¹(distance / eye separation))
Thus
→ distance = tan(π/2 − tan⁻¹(offset / factor)) × eye separation
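As a sanity check on these relations, here is a minimal Python sketch (not part of the original slides; the function names and the example offset/distance values are illustrative assumptions) showing how the factor can be calibrated from one known distance and then re-used to recover distances from new offsets:

```python
import math

def calibrate_factor(offset_px, distance_m, eye_separation_m):
    """Solve factor = offset / tan(pi/2 - arctan(distance / separation))
    using one measurement at a known object distance."""
    theta2 = math.atan(distance_m / eye_separation_m)
    theta1 = math.pi / 2 - theta2
    return offset_px / math.tan(theta1)

def distance_from_offset(offset_px, factor, eye_separation_m):
    """Invert the relation: distance = tan(pi/2 - arctan(offset / factor)) * separation."""
    theta1 = math.atan(offset_px / factor)
    return math.tan(math.pi / 2 - theta1) * eye_separation_m

# Assumed example: a 127-pixel offset for an object measured at 2.0 m,
# with the 0.251 m camera separation quoted later in the slides.
f = calibrate_factor(127, 2.0, 0.251)
print(f)                                        # ~1012, close to the quoted 1013
print(distance_from_offset(127, f, 0.251))      # recovers ~2.0 m
```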

For the parallel eye separation of 0.251 m, an average factor value of 1013 was found. To confirm this value, the focal length was also calculated by a second method, similar in that it also required counting the pixel offset between the left and right images:
x'_l / f = (x + d/2) / z
x'_r / f = (x − d/2) / z
y'_l / f = y'_r / f = y / z

x = d(x'_l + x'_r) / 2(x'_l − x'_r)
y = d(y'_l + y'_r) / 2(x'_l − x'_r)
z = d × f / (x'_l − x'_r)
where z is the distance to the object and (x'_l − x'_r) is the offset in pixels. Using z = d × f / (x'_l − x'_r), the average focal length f came out at 1013, the same as calculated earlier. Note: the focal length is expressed in pixels rather than as a physical measurement.
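For reference, a short Python sketch of this parallel-camera triangulation (the coordinate values in the example are assumed, not the group's measurements; x_l, x_r, y_l, y_r are pixel coordinates relative to each image centre):

```python
def triangulate(xl, yl, xr, yr, d, f):
    """Recover (x, y, z) from matched left/right pixel coordinates.

    d : camera (eye) separation in metres
    f : focal length expressed in pixels
    """
    disparity = xl - xr            # the pixel offset between the two images
    x = d * (xl + xr) / (2 * disparity)
    y = d * (yl + yr) / (2 * disparity)
    z = d * f / disparity          # distance to the object
    return x, y, z

# Example: a 127-pixel disparity with d = 0.251 m and f = 1013 pixels
print(triangulate(80.0, 10.0, -47.0, 10.0, 0.251, 1013))  # z ≈ 2.0 m
```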

Graphs were also created to show the relation between the offset and the distance. The graph below shows a better trendline when the log of the offset is plotted against the log of the distance. The equation for this graph line is y = x
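A sketch of how such a log-log trendline can be fitted. The offset/distance pairs below are hypothetical placeholders; the group's measured values appear on the original graph and are not quoted here:

```python
import numpy as np

# Assumed example data: offsets (pixels) at known distances (metres)
offset_px = np.array([254.0, 127.0, 85.0, 64.0])
distance_m = np.array([1.0, 2.0, 3.0, 4.0])

# Fit log(distance) as a linear function of log(offset)
slope, intercept = np.polyfit(np.log10(offset_px), np.log10(distance_m), 1)
print(f"log10(distance) = {slope:.3f} * log10(offset) + {intercept:.3f}")

# Predict a distance from a new offset using the fitted trendline
new_offset = 100.0
predicted = 10 ** (slope * np.log10(new_offset) + intercept)
print(predicted)
```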

To put our calibration to the test, we obtained new images of the crossed box at varied distances (which were recorded for comparison) and calculated the distance from the offset value. The results are tabulated with columns: offset (pixels); distance from z = d × f / (x'_l − x'_r); distance obtained from the graph; and real measured distance.
It is not perfect:
Errors have not yet been estimated; including them might make these results acceptable.
Further measurements could give better results.
The factor/focal length used was an average value; in truth f decreased with decreasing offset. It was hoped that a pattern might exist so that f could be varied for each distance calculation, but unfortunately no such pattern has been found.

NEXT:
Errors
More results for calibration and testing
Consideration of other potential methods
Calculation of heights
Further work on producing a 3-D stereo effect