Electro-optical 3D Metrology
Gregory Avady, Ph.D.
Copyright © 2007-2008 Gregory Avady. All rights reserved.

Overview
3D Metrology
Purpose:
–Find the target's coordinates or orientation in 3D space
Resources:
–One or more optoelectronic sensors (cameras)
–Some knowledge of the target, such as: geometry (shape), color, brightness, approximate location
Typical Block-Diagram
[Figure: sensors 1 through N, each with its own coordinate frame (O_i, X_i, Y_i, Z_i), view the target in the world frame (O, X, Y, Z); their outputs feed an acquisition device and a processor, which drives the output and a control device; an optional laser designator illuminates the target.]
System Classifications
Target Profile
–Cooperative
–Non-Cooperative
Illumination Type
–Active systems (using a laser designator)
–Passive systems
Sensor Type
–2D sensors (standard cameras)
–1D sensors (linear CCD cameras)
–Single-element sensors (photodiodes)
Metrology Objective
–Range or angular position
–3D measurement
–Orientation measurement
Cooperative Configuration
Known target geometry (including distances between reference marks)
Some reference marks are under system control (each mark can be turned on or off at any time)
Typical Procedure:
–For each mark:
  Activate the mark
  Acquire an image
  Measure the mark's 3D coordinates
–Calculate the target's orientation
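The cooperative procedure above can be sketched as a simple control loop. In this minimal Python sketch, `activate`, `acquire`, `locate_3d`, and `orientation_from` are hypothetical stand-ins (not from the slides) for the system's mark control, image acquisition, 3D measurement, and orientation solver:

```python
def measure_cooperative_target(marks, activate, acquire, locate_3d, orientation_from):
    """Cooperative-configuration procedure: activate each reference mark
    in turn, image it, measure its 3D coordinates, then derive the
    target's orientation from the full set of mark positions."""
    positions = []
    for mark in marks:
        activate(mark, on=True)        # each mark is under system control
        image = acquire()              # one mark active per acquisition
        positions.append(locate_3d(image))
        activate(mark, on=False)
    return orientation_from(positions)
```

Injecting the four operations as callables keeps the sequencing logic independent of any particular sensor hardware.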
Non-Cooperative Configuration
Known target geometry (including distances between reference marks)
The most difficult steps are:
–Identification of the reference marks, using:
  Mutual location of the marks
  Previous target and/or mark locations
–Finding corresponding marks across all camera images
Illumination Types
Active System
–External target illumination
–If the laser designator's orientation is known, one 2D sensor (or two 1D sensors) can be omitted
Passive System
–Only the target's own features are used
–The system is not detectable from the target
Sensor Types
2D scanning sensors (standard cameras)
–Most flexible
–Provide more information than any other sensor type
1D scanning sensors (linear CCD cameras)
–Highest resolution in one direction and, as a result, the highest measurement accuracy
–Fastest (fewer total pixels)
–Require special cylindrical optics
–High probability of "false parallax" when multiple reference marks are present
Single-element sensors (photodiodes)
–Least expensive
–Low accuracy
1D Scanning Sensors
[Figure: a linear array of individual pixels behind cylindrical optics with focal length F, shown in the sensor's (X, Y, Z) frame.]
Single Element Sensor Concept
[Figure: light from the object passes through the optics to a half mirror; one path reaches the active sensor through a variable-density filter, the other reaches the reference sensor via a mirror.]
Active System Concept
[Figure: a laser designator (frame O_d, X_d, Y_d, Z_d) projects a reference mark onto the target surface, which is imaged by a 2D camera (frame O_c, X_c, Y_c, Z_c).]
Analytical Description
For sensor #i, eleven calibration coefficients are required: a_i1, a_i2, …, a_i11
Dependence between the 2D coordinates on sensor #i and the object's 3D coordinates: [Equation on slide.]
Here: [X*_ai, Y*_ai, Z*_ai] are point #a's 2D coordinates on sensor #i (N is the number of sensors)
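The projection equations themselves did not survive extraction. A form consistent with the stated count of eleven coefficients is the standard DLT-style model; this is an assumption, not necessarily the author's exact convention:

```latex
X^{*}_{ai} = \frac{a_{i1}X_a + a_{i2}Y_a + a_{i3}Z_a + a_{i4}}
                  {a_{i9}X_a + a_{i10}Y_a + a_{i11}Z_a + 1},
\qquad
Y^{*}_{ai} = \frac{a_{i5}X_a + a_{i6}Y_a + a_{i7}Z_a + a_{i8}}
                  {a_{i9}X_a + a_{i10}Y_a + a_{i11}Z_a + 1}
```

where $(X_a, Y_a, Z_a)$ are point #a's 3D coordinates and $(X^{*}_{ai}, Y^{*}_{ai})$ its image coordinates on sensor #i.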
Analytical Description (Direct Task)
The following system of linear equations is used to calculate the 3D coordinates of a selected point on the object: [Equations on slide.]
Here: [X*_ai, Y*_ai, Z*_ai] are point #a's 2D coordinates on sensor #i (N is the number of sensors)
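Under an assumed 11-coefficient DLT-style projection model (the slide's own equations were not recoverable), the direct task reduces to a small linear least-squares problem: each sensor contributes two linear equations in the unknown (X, Y, Z). A minimal pure-Python sketch, with the coefficient layout a1..a11 as an assumed convention:

```python
def solve3(M, r):
    # Gaussian elimination with partial pivoting on a 3x3 system M x = r.
    # Assumes the sensors yield a full-rank (non-singular) system.
    n = 3
    M = [row[:] + [r[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for j in range(col, n + 1):
                M[k][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def triangulate(coeffs_per_sensor, pixels_per_sensor):
    """Direct task: recover (X, Y, Z) from each sensor's 11 calibration
    coefficients and the measured pixel coordinates (least squares over
    2N linear equations from N sensors)."""
    A, b = [], []
    for a, (u, v) in zip(coeffs_per_sensor, pixels_per_sensor):
        a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11 = a
        # Rearranged projection equations, linear in (X, Y, Z):
        A.append([a1 - a9 * u, a2 - a10 * u, a3 - a11 * u]); b.append(u - a4)
        A.append([a5 - a9 * v, a6 - a10 * v, a7 - a11 * v]); b.append(v - a8)
    # Normal equations: (A^T A) x = A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)] for i in range(3)]
    r = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(3)]
    return solve3(M, r)
```

With two or more sensors the 2N equations overdetermine the three unknowns, which is why the least-squares formulation is used.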
Analytical Description (Inverse Task)
The following equations are used for sensor #i calibration, i.e. for calculating the coefficients a_i1, a_i2, …, a_i11: [Equations on slide.]
(N is the number of sensors; M is the number of calibration data points)
Here: [X*_ai, Y*_ai, Z*_ai] are point #a's 2D coordinates on sensor #i; [X_aj, Y_aj, Z_aj] are point #a's 3D coordinates
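The calibration equations are likewise missing from the extracted text. Assuming the same DLT-style model, multiplying through by the denominator gives, for each calibration point $a = 1, \ldots, M$, two equations that are linear in the eleven unknowns:

```latex
a_{i1}X_a + a_{i2}Y_a + a_{i3}Z_a + a_{i4}
  - X^{*}_{ai}\,(a_{i9}X_a + a_{i10}Y_a + a_{i11}Z_a) = X^{*}_{ai}
\\
a_{i5}X_a + a_{i6}Y_a + a_{i7}Z_a + a_{i8}
  - Y^{*}_{ai}\,(a_{i9}X_a + a_{i10}Y_a + a_{i11}Z_a) = Y^{*}_{ai}
```

This yields $2M$ equations in 11 unknowns, solved in the least-squares sense; at least $M = 6$ non-degenerate calibration points are needed.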
Multi-Channel Processing Procedure
For each channel:
–Acquire an image
–Filter the acquired image
–Find the center of gravity of every mark on the image
–Identify each mark, using:
  Mutual location of the reference marks
  Previous target/mark locations (if known)
Find corresponding marks on each camera image
Calculate each mark's 3D coordinates
Calculate the target's orientation
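The center-of-gravity step above can be illustrated directly. This sketch labels above-threshold blobs in a grayscale image and returns each blob's intensity-weighted centroid; the thresholding and 4-connectivity choices are illustrative assumptions, not taken from the slides:

```python
from collections import deque

def mark_centroids(image, threshold):
    """Label above-threshold blobs (4-connectivity) in a 2D grayscale
    image (list of rows) and return each blob's intensity-weighted
    centroid as (x, y), in row-major discovery order."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y0 in range(h):
        for x0 in range(w):
            if image[y0][x0] <= threshold or seen[y0][x0]:
                continue
            # Flood-fill one blob, accumulating intensity-weighted sums.
            q = deque([(y0, x0)])
            seen[y0][x0] = True
            sx = sy = sw = 0.0
            while q:
                y, x = q.popleft()
                wgt = image[y][x]
                sx += wgt * x; sy += wgt * y; sw += wgt
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and image[ny][nx] > threshold:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            centroids.append((sx / sw, sy / sw))
    return centroids
```

Intensity weighting gives sub-pixel centroid estimates, which is what makes the subsequent 3D triangulation accurate.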