Electro-optical 3D Metrology
Gregory Avady, Ph.D.
Copyright © 2007-2008 Gregory Avady. All rights reserved.
Overview



3D Metrology
Purpose:
– Find the target's coordinates or orientation in 3D space
Resources:
– One or more optoelectronic sensors (cameras)
– Some knowledge of the target, such as: geometry (shape), color, brightness, approximate location

Typical Block-Diagram
[Diagram: a target observed by Sensor 1 … Sensor N, each with its own coordinate frame (O1 … ON), plus an optional laser designator; the sensor data pass through an acquisition device and a processor to a control device and the output.]

System Classifications
Target Profile:
– Cooperative
– Non-Cooperative
Illumination Type:
– Active systems (using a laser designator)
– Passive systems
Sensor Type:
– 2D sensors (standard cameras)
– 1D sensors (linear CCD cameras)
– Single-element sensors (photodiodes)
Metrology Objective:
– Range or angular position
– 3D measurement
– Orientation measurement

Cooperative Configuration
Known target geometry (including the distances between reference marks)
Some reference marks are under system control (each mark can be turned on or off at any time)
Typical procedure:
– For each mark:
   - Activate the mark
   - Acquire an image
   - Measure the mark's 3D coordinates
– Calculate the target's orientation
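The per-mark loop above can be sketched as a small orchestration routine. The functions `activate_mark`, `acquire_image`, and `measure_mark_3d` are hypothetical stand-ins for the hardware control and the 3D measurement described later in the deck; here they are simulated so the control flow can be seen end to end.

```python
# Sketch of the cooperative measurement loop. The hardware-facing functions
# are hypothetical stand-ins, simulated with a fixed set of mark positions.

TRUE_MARKS = {"A": (0.0, 0.0, 0.0), "B": (1.0, 0.0, 0.0), "C": (0.0, 1.0, 0.0)}

def activate_mark(name):
    """Turn exactly one controllable reference mark on (simulated)."""
    return name

def acquire_image(active):
    """Acquire an image in which only the active mark is visible (simulated)."""
    return {active: TRUE_MARKS[active]}

def measure_mark_3d(image, name):
    """Measure the active mark's 3D coordinates from the image (simulated)."""
    return image[name]

def measure_cooperative_target(mark_names):
    """One mark at a time: activate, acquire, measure. The resulting set of
    3D mark coordinates feeds the target-orientation calculation."""
    coords = {}
    for name in mark_names:
        active = activate_mark(name)
        image = acquire_image(active)
        coords[name] = measure_mark_3d(image, name)
    return coords

coords = measure_cooperative_target(["A", "B", "C"])
```

Because each mark is activated in isolation, the identification problem of the non-cooperative case disappears: the system always knows which mark it is measuring.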

Non-Cooperative Configuration
Known target geometry (including the distances between reference marks)
The most difficult steps are:
– Identifying the reference marks, using:
   - the mutual location of the marks
   - the previous target and/or mark locations
– Finding corresponding marks across all camera images

Illumination Types
Active system:
– External target illumination
– If the laser designator's orientation is known, one 2D sensor (or two 1D sensors) may be removed
Passive system:
– Only the target's own features are used
– The system is not detectable from the target

Sensor Types
2D scanning sensors (standard cameras):
– Most flexible
– Provide more information than any other sensor type
1D scanning sensors (linear CCD cameras):
– Highest resolution in one direction and, as a result, the highest measurement accuracy
– Fastest (smallest total number of pixels)
– Require special cylindrical optics
– High probability of "false parallax" when there are multiple reference marks
Single-element sensors (photodiodes):
– Least expensive
– Low accuracy

1D Scanning Sensors
[Diagram: cylindrical optics image the scene onto a linear array of individual pixels; the labels F and the X, Y, Z axes appear in the original figure.]

Single Element Sensor Concept
[Diagram: light from the object passes through optics to a half mirror; one beam continues through a variable-density filter to the active sensor, the other is reflected by a mirror to the reference sensor.]

Active System Concept
[Diagram: a laser designator (frame Od with axes Xd, Yd, Zd) projects a reference mark onto the target surface, which is observed by a 2D camera (frame Oc with axes Xc, Yc, Zc).]

Analytical Description
For sensor #i, eleven calibration coefficients are required: a_i1, a_i2, …, a_i11.
They define the dependence between a point's 2D coordinates on sensor #i and the object's 3D coordinates.
Here [X*_ai, Y*_ai] are point #a's 2D coordinates on sensor #i (N is the number of sensors).
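The equation itself was an image in the original slides and did not survive transcription. The standard eleven-coefficient direct linear transformation (DLT) has the following form, which matches the coefficient count stated above; the exact grouping of the coefficients into numerator and denominator is an assumption:

```latex
X^*_{ai} = \frac{a_{i1} X_a + a_{i2} Y_a + a_{i3} Z_a + a_{i4}}
                {a_{i9} X_a + a_{i10} Y_a + a_{i11} Z_a + 1},
\qquad
Y^*_{ai} = \frac{a_{i5} X_a + a_{i6} Y_a + a_{i7} Z_a + a_{i8}}
                {a_{i9} X_a + a_{i10} Y_a + a_{i11} Z_a + 1}
```

Here [X_a, Y_a, Z_a] are point #a's 3D coordinates. Multiplying both sides by the common denominator makes each relation linear either in (X_a, Y_a, Z_a) for known coefficients (the direct task) or in the coefficients for known points (the inverse task).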

Analytical Description (Direct Task)
The following system of linear equations is used to calculate the 3D coordinates of a selected point on the object (N is the number of sensors).
Here [X*_ai, Y*_ai] are point #a's 2D coordinates on sensor #i.
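A minimal numerical sketch of the direct task, assuming the standard eleven-coefficient DLT model (an assumption, since the original equations were images): each sensor contributes two equations that are linear in (X, Y, Z), so N ≥ 2 sensors give a 2N × 3 system solvable by least squares.

```python
import numpy as np

def project(a, P):
    """Map a 3D point P to 2D sensor coordinates using 11 DLT coefficients a."""
    X, Y, Z = P
    d = a[8] * X + a[9] * Y + a[10] * Z + 1.0
    return np.array([(a[0] * X + a[1] * Y + a[2] * Z + a[3]) / d,
                     (a[4] * X + a[5] * Y + a[6] * Z + a[7]) / d])

def triangulate(coeffs, observations):
    """Recover (X, Y, Z) from N >= 2 sensors by linear least squares.

    coeffs: one 11-coefficient vector per sensor.
    observations: the same point's (x*, y*) coordinates on each sensor.
    Multiplying the projection by its denominator gives, per sensor,
    two equations linear in (X, Y, Z).
    """
    A, b = [], []
    for a, (x, y) in zip(coeffs, observations):
        A.append([a[0] - a[8] * x, a[1] - a[9] * x, a[2] - a[10] * x])
        b.append(x - a[3])
        A.append([a[4] - a[8] * y, a[5] - a[9] * y, a[6] - a[10] * y])
        b.append(y - a[7])
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P

# Two synthetic sensors and a known point, for illustration only.
s1 = np.array([1, 0, 0, 0,  0, 1, 0, 0,  0.0, 0.0, 0.01])
s2 = np.array([0, 0, 1, 0,  0, 1, 0, 0,  0.01, 0.0, 0.0])
P_true = np.array([2.0, 3.0, 5.0])
P_rec = triangulate([s1, s2], [project(s1, P_true), project(s2, P_true)])
```

With noisy real observations the same least-squares solve simply returns the point minimizing the algebraic residual over all 2N equations.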

Analytical Description (Inverse Task)
The following equations are used for sensor #i calibration, i.e. for calculating the coefficients a_i1, a_i2, …, a_i11 (M is the number of calibration data points).
Here [X*_ai, Y*_ai] are point #a's 2D coordinates on sensor #i, and [X_a, Y_a, Z_a] are point #a's 3D coordinates.
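The inverse (calibration) task can likewise be sketched as a linear least-squares problem: each of the M calibration points contributes two equations that are linear in the eleven coefficients, so M ≥ 6 non-coplanar points determine them. The DLT coefficient layout is the same assumption as in the direct-task sketch.

```python
import numpy as np

def project(a, P):
    """Map a 3D point to 2D sensor coordinates using 11 DLT coefficients a."""
    X, Y, Z = P
    d = a[8] * X + a[9] * Y + a[10] * Z + 1.0
    return ((a[0] * X + a[1] * Y + a[2] * Z + a[3]) / d,
            (a[4] * X + a[5] * Y + a[6] * Z + a[7]) / d)

def calibrate(points_3d, points_2d):
    """Recover one sensor's 11 coefficients from M >= 6 calibration points.

    Multiplying the projection by its denominator gives, per point, two
    equations linear in the coefficients; the 2M x 11 system is solved
    by least squares.
    """
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.append(y)
    a, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return a

# Synthetic check: project known 3D points with known coefficients,
# then recover the coefficients from the projections alone.
a_true = np.array([800, 0, 0, 320,  0, 800, 0, 240,  0.001, 0.002, 0.003])
pts_3d = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)] + [(0.5, 0.3, 0.8)]
pts_2d = [project(a_true, P) for P in pts_3d]
a_rec = calibrate(pts_3d, pts_2d)
```

In practice the calibration points come from a target of surveyed marks; the only requirement is that they are not all coplanar, since a planar set leaves the system rank-deficient.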

Multi-Channel Processing Procedure
For each channel:
– Acquire an image
– Filter the acquired image
– Find the center of gravity of every mark in the image
– Identify each mark, using:
   - the mutual location of the reference marks
   - the previous target/mark locations (if known)
Find corresponding marks across all camera images
Calculate each mark's 3D coordinates
Calculate the target's orientation
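The center-of-gravity step can be sketched as an intensity-weighted centroid over thresholded pixels; the threshold and the weighting scheme are illustrative assumptions, as the slides do not specify them.

```python
import numpy as np

def center_of_gravity(image, threshold=0.0):
    """Intensity-weighted centroid (row, col) of pixels above a threshold."""
    weights = np.where(image > threshold, image, 0.0).astype(float)
    total = weights.sum()
    if total == 0:
        raise ValueError("no pixels above threshold")
    rows, cols = np.indices(image.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# A uniform 3x3 bright square centred at row 3, column 4.
img = np.zeros((10, 10))
img[2:5, 3:6] = 1.0
cog = center_of_gravity(img)
```

Because the centroid averages over many pixels, the mark position is recovered with sub-pixel precision, which is what makes the 3D coordinate calculation in the following steps accurate.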