Slide 1: Distortion Correction
ECE 6276 Project Review
Team 5: Basit Memon, Foti Kacani, Jason Haedt, Jin Joo Lee, Peter Karasev

Slide 2: Outline
- Motivation
- Components
- Component Optimization
- Current Results
- Plans for Catapult C
- Schedule

Slide 3: Objective
Given a distorted image with known size and known lens distortion parameter, generate an undistorted image.
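The slides do not state the distortion model explicitly; a common single-coefficient radial (barrel) model, consistent with a single "known lens distortion parameter", maps an undistorted radius r_u to a distorted radius r_d = r_u * (1 + k * r_u^2). The C++ sketch below shows that forward mapping; the coefficient k and the distortion center (cx, cy) are illustrative placeholders, not project values.

```cpp
// Hypothetical single-coefficient radial (barrel) distortion model:
//   r_d = r_u * (1 + k * r_u^2)
// Maps an undistorted point (xu, yu) to its distorted location (xd, yd).
// k, cx, cy are placeholders, not values taken from the project.
void distort_point(double xu, double yu,
                   double k, double cx, double cy,
                   double& xd, double& yd)
{
    const double dx = xu - cx;            // offset from distortion center
    const double dy = yu - cy;
    const double r2 = dx * dx + dy * dy;  // squared undistorted radius
    const double scale = 1.0 + k * r2;    // radial scaling factor
    xd = cx + dx * scale;
    yd = cy + dy * scale;
}
```

Undistortion is then performed by inverse mapping: for every output (undistorted) pixel, this forward model gives the location in the distorted input image to sample.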

Slide 4: Motivation – Why?
- The formation of undistorted images can be described by a series of matrix multiplications.
- Distortion makes it very difficult to calibrate a camera to measure geometry (depth, size, orientation, etc.).
- Many applications in image processing and computer vision depend on this, such as structure estimation, image mosaicing, and ultimately vision-based control.

Slide 5: Motivation (contd.)
- Application: measure motion and geometry.
- Problem: known geometry in the scene is warped; the relationship between 3D and 2D points is nonlinear.
- Solution: undo the distortion, so that x_2D = A * X_3D.
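For reference, the linear relation the slide abbreviates as x_2D = A * X_3D is the standard pinhole projection in homogeneous coordinates (as in Hartley & Zisserman); the decomposition below is the textbook form, not something stated on the slides:

```latex
\lambda \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
= \underbrace{K \,[\, R \mid t \,]}_{A\ (3 \times 4)}
  \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
```

This relation only holds once the lens distortion has been removed, which is exactly what the project targets.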

Slide 6: Literature Review (I)
K.T. Gribbon, C.T. Johnston, and D.G. Bailey, "A Real-time FPGA Implementation of a Barrel Distortion Correction Algorithm with Bilinear Interpolation"
- Focuses on reducing hardware complexity.
- Uses LUTs to store the mapping data.
- No quantitative results provided.
- Logic resource utilization on the RC-100 is 51%.
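For context, bilinear interpolation in this class of designs samples the distorted input at a fractional source coordinate. The C++ sketch below is a generic software illustration, not the paper's hardware implementation; the row-major 8-bit image layout and the clamping policy are assumptions.

```cpp
#include <algorithm>
#include <cstdint>

// Bilinear interpolation of an 8-bit, row-major image at a fractional
// source coordinate (sx, sy); coordinates are clamped to the image.
// Generic illustration only, not the implementation from the paper.
uint8_t bilinear_sample(const uint8_t* img, int width, int height,
                        double sx, double sy)
{
    sx = std::max(0.0, std::min(static_cast<double>(width  - 1), sx));
    sy = std::max(0.0, std::min(static_cast<double>(height - 1), sy));
    const int x0 = std::min(width  - 2, static_cast<int>(sx));
    const int y0 = std::min(height - 2, static_cast<int>(sy));
    const double fx = sx - x0;   // horizontal fraction in [0, 1]
    const double fy = sy - y0;   // vertical fraction in [0, 1]

    const double p00 = img[ y0      * width + x0    ];
    const double p01 = img[ y0      * width + x0 + 1];
    const double p10 = img[(y0 + 1) * width + x0    ];
    const double p11 = img[(y0 + 1) * width + x0 + 1];

    const double top    = p00 + fx * (p01 - p00);
    const double bottom = p10 + fx * (p11 - p10);
    return static_cast<uint8_t>(top + fy * (bottom - top) + 0.5);
}
```

In hardware the fractions fx and fy are typically fixed-point values read from the same LUT that stores the integer coordinates.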

Slide 7: Literature Review (II)
Qiang, L.; Allinson, N.M., "FPGA Implementation of Pipelined Architecture for Optical Imaging Distortion Correction", Signal Processing Systems Design and Implementation (SIPS '06)
- Same algorithm as the previous paper.
- Implementation on a Xilinx FPGA (XCS) uses 75% of the hardware multipliers.
- Residual error of the undistorted image was 1.5% of the distorted image.

Slide 8: Literature Review (III)
Hany Farid & Alin C. Popescu, "Blind Removal of Lens Distortion", Journal of the Optical Society of America, 2001
- Removes distortion in the absence of any calibration data.
- Uses polyspectral analysis to detect higher-order correlations in the frequency domain that are proportional to the distortion.
- Computationally intensive; no quantitative results.
- Accuracy is not comparable to methods based on known distortion parameters.

Slide 9: Components to Achieve Objective
- Component 1: MATLAB forward distortion function
  – Verifies correctness of the undistortion algorithm
- Component 2: Data ordering test bench
  – Orders the C++ input stream from MATLAB-generated data
- Component 3: Undistortion lookup table or lookup function (see the sketch below)
  – Initial prototype in MATLAB
- Component 4: Least squares interpolation lookup function
  – Initial prototype in MATLAB
  – Compare different techniques such as LUT vs. nearest neighbor
- Component 5: Verification structure
  – Compares the original C++ result to the undistorted image in MATLAB
Flow: MATLAB → C++ → Catapult C → MATLAB
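One hypothetical way to build the Component 3 lookup table on the host, reusing the single-coefficient radial model assumed earlier: for every output (undistorted) pixel, store the distorted source coordinate to sample. The choice of centering the distortion at the image center, and the float storage (instead of the fixed-point packing used on the FPGA), are assumptions.

```cpp
#include <vector>

// One LUT entry per output pixel: the distorted source coordinate to sample.
struct SrcCoord {
    float x;
    float y;
};

// Build an undistortion LUT for a width x height image, assuming the
// single-coefficient radial model r_d = r_u * (1 + k * r_u^2) centered at
// the image center. k is a placeholder, not a measured parameter.
std::vector<SrcCoord> build_undistort_lut(int width, int height, double k)
{
    const double cx = 0.5 * (width  - 1);
    const double cy = 0.5 * (height - 1);
    std::vector<SrcCoord> lut(static_cast<size_t>(width) * height);

    for (int yu = 0; yu < height; ++yu) {
        for (int xu = 0; xu < width; ++xu) {
            const double dx = xu - cx;
            const double dy = yu - cy;
            const double scale = 1.0 + k * (dx * dx + dy * dy);
            // Forward-distort the output coordinate to find where to read
            // in the distorted input image.
            lut[static_cast<size_t>(yu) * width + xu] =
                { static_cast<float>(cx + dx * scale),
                  static_cast<float>(cy + dy * scale) };
        }
    }
    return lut;
}
```

At 640x480 this is 307,200 entries; at, say, two 18-bit fixed-point values per entry that is roughly 11 Mbit, which is why the LUT-versus-compute-on-the-fly question on the next slide matters.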

Slide 10: Component Optimization Questions
- What size buffers do we need to compare against previous frames?
- For the undistortion function, can we compute coordinates dynamically, or do we need a pre-defined LUT?
- For a fast least squares / linear system solver, compare speed cost vs. nearest neighbor and the effect on output error.
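The buffer-size question can be estimated offline from the mapping itself: the number of input lines that must be kept on hand is bounded by the largest vertical offset between an output row and the source row it reads. A sketch under the same assumed radial model (placeholder k, distortion centered on the image):

```cpp
#include <algorithm>
#include <cmath>

// Estimate how many input lines must be buffered: the largest vertical
// offset between an output row and the distorted source row it samples,
// assuming r_d = r_u * (1 + k * r_u^2) centered on the image.
// k is a placeholder, not a measured value.
int required_line_buffers(int width, int height, double k)
{
    const double cx = 0.5 * (width  - 1);
    const double cy = 0.5 * (height - 1);
    double max_dy = 0.0;

    for (int yu = 0; yu < height; ++yu) {
        for (int xu = 0; xu < width; ++xu) {
            const double dx = xu - cx;
            const double dy = yu - cy;
            const double scale = 1.0 + k * (dx * dx + dy * dy);
            const double y_src = cy + dy * scale;        // row actually read
            max_dy = std::max(max_dy, std::fabs(y_src - yu));
        }
    }
    return static_cast<int>(std::ceil(max_dy)) + 1;  // +1 for the bilinear pair
}
```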

Slide 11: MATLAB Demo of Algorithm – Original

Slide 12: MATLAB Demo of Algorithm – Distorted

Slide 13: MATLAB Demo of Algorithm – Recovered

Slide 14: Design Goals
- Limited buffers
- 8 bits per sub-pixel (24 bits per pixel total)
- Resolution up to 640x480
- Primarily concerned with geometric accuracy
- Area, throughput, latency
- Target a low-cost implementation that handles consumer video pixel clocks of 165 MHz (approx. 6 ns cycle time).

Slide 15: Test Vectors – The Line Test
(Panels: original image, distorted image, recovered image.)
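The line test works because straight lines must map to straight lines once the radial distortion is removed. Below is a hypothetical way to generate such a test pattern; the project's actual test vectors come from the MATLAB forward distortion function, and the grid spacing here is arbitrary.

```cpp
#include <cstdint>
#include <vector>

// Generate a simple "line test" pattern: a white grid on a black background.
// Image size and grid spacing are illustrative only.
std::vector<uint8_t> make_line_test(int width, int height, int spacing)
{
    std::vector<uint8_t> img(static_cast<size_t>(width) * height, 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (x % spacing == 0 || y % spacing == 0)
                img[static_cast<size_t>(y) * width + x] = 255;  // grid line
    return img;
}
```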

Slide 16: Plans for Catapult C Code
- Use C/C++ and Algorithmic C data types to describe synthesizable hardware (a sketch follows below)
- Architectures (type of hardware interface, e.g. streaming buffers)
- Constraints (throughput, area, latency)
- RTL generation and verification
- Optimizations:
  – Pipelining
  – Parallelism
  – Loop unrolling
  – Scheduling
  – Streaming buffers / read & write
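A minimal sketch of what the synthesizable core might look like with Algorithmic C data types. Everything here is an assumption for illustration: the bit widths, the nearest-neighbour sampling, the plain-array interfaces (which Catapult would map to streams or block RAM), and the labelled loops; pipelining and unrolling would be applied as Catapult directives rather than in the source.

```cpp
#include <ac_fixed.h>
#include <ac_int.h>

typedef ac_int<8, false>        pixel_t;   // one 8-bit colour channel
typedef ac_fixed<18, 10, false> fcoord_t;  // 10 integer + 8 fractional bits

// Nearest-neighbour undistortion core (illustrative sketch only).
// For each output pixel, read a precomputed fixed-point source coordinate
// from the LUT and copy the nearest input pixel.
void undistort_nn(const pixel_t  in   [480][640],
                  pixel_t        out  [480][640],
                  const fcoord_t lut_x[480][640],
                  const fcoord_t lut_y[480][640])
{
ROW:
    for (int y = 0; y < 480; ++y) {
COL:
        for (int x = 0; x < 640; ++x) {
            // Round the fixed-point source coordinates to the nearest pixel.
            unsigned sx = (lut_x[y][x] + fcoord_t(0.5)).to_uint();
            unsigned sy = (lut_y[y][x] + fcoord_t(0.5)).to_uint();
            out[y][x] = in[sy][sx];
        }
    }
}
```

Bilinear interpolation would replace the rounding with a blend of the four neighbouring pixels, at the cost of extra multipliers per output pixel.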

Slide 17: Project Timeline

Slide 18: Project Risks
- Risk 1: Slowness of interpolation. The key goal of undistortion is geometrical accuracy, which holds even if some noise is injected. Mitigation: a forward mapping algorithm that is fast compared to any interpolation method, at the cost of missing pixels near the edges.
- Risk 2: Not enough storage space to keep a lookup table of coordinates. Mitigation: compute coordinates on the fly at the cost of extra math operations.
- Risk 3: Cannot find a least squares method suitable for the FPGA. Mitigation: fall back to nearest neighbor interpolation.

Slide 19: Current Status
- Straight lines are recovered with minimal injection of missing points or noise by our algorithm.
- Next: add more quantitative results on the geometrical accuracy of recovered images than presented in previous results.
- Next: generalize the coordinate mapping to make the implementation more robust to varying distortion models, and hence support more cameras "on the fly."

Slide 20: References
- Yi Ma, Stefano Soatto, et al., "An Invitation to 3-D Vision"
- Richard Hartley, Andrew Zisserman, "Multiple View Geometry in Computer Vision"
- Edward M. Mikhail, James S. Bethel, J. Chris McGlone, "Introduction to Modern Photogrammetry"

Slide 21: Questions?