Computer Vision based Hole Filling
Chad Hantak, COMP256-001, December 9, 2003

Background
Brief recap of last class (range data):
–DeltaSphere acquires depth samples
–Algorithms turn these samples into a polygonal mesh

Holes
Again from last class: the scanner can only capture information it “sees”
–Occlusion
–Non-reflective objects

Vision Technique
Fill holes in the polygonal mesh based on photographs of the scene
–Given:
Scan of the scene (a mesh with holes)
Photographs of the room from different views and locations
Correspondences between points in the photographs and points on the mesh
–Method:
Use image segmentation to extract connectivity information from the photographs
Based on the determined connectivity, add polygonal information to the mesh

Implementation
Fill in large planar regions
–Creates a more complete model
–Does not work at a minute scale
Three separate utilities:
–Correspondence Creator: links the photographs to the scan data
–MATLAB Analysis Utility: image processing and polygon creation
–Display Program: overlays the original scan with the new polygonal information

Working Example
Demonstrate the pipeline by example
A simple scene was set up:
–One plane partially occluding three other planes

Working Example (cont.)
Photographs of the occluded planes

Step 1 – Correspondence
–Mark 6 point correspondences
–Mark the component in the photograph to insert
–Mark 3 points on the plane in the model

Step 2 – MATLAB
Region labeling on the photograph
–Canny edge detection
–Morphological operations (dilation / erosion)
–Region labeling
Boundary of the component
–Convex hull
Find the camera projection matrix
–Direct Linear Transformation (DLT)
Create the polygon
–Image points -> mesh points

Step 2 – MATLAB: Region Labeling
Canny edge detection
–Detects strong and weak edges
–Includes weak edges only if connected to a strong edge
–Canny, J., “A Computational Approach to Edge Detection,” IEEE PAMI 8(6), November 1986.
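The key Canny idea on this slide is hysteresis: weak edges survive only if they connect to a strong edge. A minimal NumPy sketch of just that linking step (not the full Canny pipeline; the gradient-magnitude image and the two thresholds are assumed given):

```python
import numpy as np

def hysteresis(mag, low, high):
    """Keep weak edges (>= low) only if 8-connected to a strong edge (>= high)."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    result = strong.copy()
    changed = True
    while changed:
        # 8-neighbour dilation of the current edge set via shifted copies.
        # (np.roll wraps around the borders; acceptable for a sketch,
        # a padded image would avoid it.)
        grown = result.copy()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                grown |= np.roll(np.roll(result, dr, axis=0), dc, axis=1)
        new = result | (grown & weak)
        changed = bool((new != result).any())
        result = new
    return result
```

A weak pixel next to a strong one is kept; an isolated weak pixel is dropped, which is exactly the strong/weak behaviour the slide describes.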

Step 2 – MATLAB: Region Labeling
Need components with closed boundaries for accurate region labeling
Morphological operations
–Image processing based on shapes
–Many different types:
Grow lines vertically and horizontally
Remove extraneous information without separating objects
Remove endpoints of lines without removing small objects
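The dilation/erosion pair the slide mentions can be sketched in a few lines of NumPy. This is an illustrative binary-morphology implementation (assuming a symmetric structuring element), not the MATLAB toolbox code the author used; closing (dilate then erode) is the combination that bridges small gaps so edge maps become closed boundaries:

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: 1 where the structuring element, centred on the
    pixel, overlaps any foreground pixel."""
    H, W = img.shape
    sh, sw = se.shape
    pad = np.zeros((H + sh - 1, W + sw - 1), dtype=bool)
    pad[sh // 2 : sh // 2 + H, sw // 2 : sw // 2 + W] = img
    out = np.zeros((H, W), dtype=bool)
    for i in range(sh):
        for j in range(sw):
            if se[i, j]:
                out |= pad[i : i + H, j : j + W]
    return out

def erode(img, se):
    """Binary erosion via duality: complement, dilate, complement
    (valid for a symmetric structuring element)."""
    return ~dilate(~img.astype(bool), se)

def close_gaps(img, se):
    """Morphological closing (dilate then erode) bridges small gaps in
    edge maps so region labelling sees closed boundaries."""
    return erode(dilate(img, se), se)
```

A 1x3 (or 3x1) structuring element grows lines horizontally (or vertically), matching the "grow lines vertically and horizontally" bullet above.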

Step 2 – MATLAB: Region Labeling
Region labeling marks the different bounded components with unique labels
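Region labeling itself is connected-component analysis: every pixel in the same bounded region gets the same integer label. A small flood-fill sketch (4-connectivity, pure NumPy; the original used MATLAB's labelling routines):

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling by breadth-first flood fill.
    Returns a label image (0 = background) and the component count."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    count = 0
    for r in range(H):
        for c in range(W):
            if mask[r, c] and labels[r, c] == 0:
                count += 1
                labels[r, c] = count
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count
```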

Step 2 – MATLAB: Boundary
Find the boundary of the component to add to the mesh
–Using the component’s specified point as a start, find and walk along the region’s border
–This implementation takes the convex hull of those points
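The convex-hull step can be illustrated with Andrew's monotone-chain algorithm, a standard 2D hull construction (shown here as a self-contained sketch; the slide does not say which hull algorithm the MATLAB utility used):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull for 2D points (x, y) tuples.
    Returns hull vertices in counter-clockwise order, collinear and
    interior points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                    # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):          # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # endpoints shared, drop duplicates
```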

Step 2 – MATLAB: Camera Matrix
Image points have to be turned into world points
Using the 6 correspondences, find the camera’s projection matrix
Direct Linear Transformation (DLT)
Objective: given n ≥ 6 3D-to-2D point correspondences {Xᵢ ↔ xᵢ}, determine the 3×4 projection matrix P such that xᵢ = P Xᵢ
Algorithm:
(i) For each correspondence Xᵢ ↔ xᵢ compute Aᵢ (usually only the first two rows are needed)
(ii) Assemble the n 2×12 matrices Aᵢ into a single 2n×12 matrix A
(iii) Obtain the SVD of A; the solution for p is the last column of V
(iv) Determine P from p
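The four DLT steps on this slide translate almost line-for-line into NumPy. A sketch of the basic (unnormalized) algorithm, using the standard two rows of each Aᵢ derived from xᵢ × (P Xᵢ) = 0:

```python
import numpy as np

def dlt_camera_matrix(X, x):
    """Direct Linear Transformation: estimate the 3x4 projection matrix P
    from n >= 6 world<->image correspondences (x_i ~ P X_i), up to scale.
    X: (n, 3) world points; x: (n, 2) image points."""
    A = []
    for i in range(X.shape[0]):
        Xh = np.append(X[i], 1.0)            # homogeneous world point
        u, v = x[i]
        # (i) the two usable rows of A_i
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    A = np.asarray(A)                        # (ii) stack into 2n x 12
    _, _, Vt = np.linalg.svd(A)              # (iii) SVD of A
    p = Vt[-1]                               # right singular vector of smallest sigma
    return p.reshape(3, 4)                   # (iv) P from p
```

Note that P is recovered only up to scale; in practice, normalizing the point coordinates first makes the estimate much better conditioned.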

Step 2 – MATLAB: Camera Matrix
Camera matrix
–Takes world points to image points
–Holds the camera’s intrinsics, orientation, and world position
“Inverting” the camera matrix
–Cannot easily go the other way
–The best that can be done is to turn an image point into a ray in the world, originating from the camera’s position
–The world point for an image point lies somewhere along this ray
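This "inversion" has a compact form for a finite camera P = [M | p₄]: the camera centre is C = −M⁻¹p₄, and the ray direction for pixel (u, v) is M⁻¹(u, v, 1)ᵀ. A sketch (assumes M is invertible, i.e. a finite camera):

```python
import numpy as np

def backproject_ray(P, u, v):
    """Back-project an image point through a finite 3x4 camera matrix P.
    Returns (camera centre C, unit ray direction d): the world point for
    (u, v) lies at C + t*d for some t."""
    M = P[:, :3]
    # Camera centre: P (C, 1)^T = 0  =>  C = -M^{-1} p4
    C = -np.linalg.solve(M, P[:, 3])
    # Ray direction for the homogeneous image point (u, v, 1)
    d = np.linalg.solve(M, np.array([u, v, 1.0]))
    return C, d / np.linalg.norm(d)
```

Any point C + t·d projects back to (u, v), which is exactly the ambiguity the slide describes: one image point, a whole ray of candidate world points.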

Step 3 – MATLAB: Create Polygon
For each image point on the boundary:
–Determine the world ray for this point from the camera matrix
–The user specified the plane in the mesh that the polygon lies on
–Intersect the ray with the plane to find the world point
Triangulate the world points of the polygon
Output a PLY file containing the polygon
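The ray-plane intersection at the heart of this step is a one-liner once the plane normal is known. A sketch, taking the plane as the three user-marked mesh points from Step 1:

```python
import numpy as np

def ray_plane_intersect(C, d, plane_pts):
    """Intersect the ray C + t*d with the plane through three world points
    (e.g. the 3 points the user marked on the mesh plane).
    Returns the world point, or None if the ray is parallel to the plane."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in plane_pts)
    n = np.cross(p1 - p0, p2 - p0)      # plane normal
    denom = n @ d
    if abs(denom) < 1e-12:
        return None                     # ray parallel to the plane
    t = n @ (p0 - C) / denom
    return C + t * d
```

Running every boundary pixel's back-projected ray through this intersection yields the world-space polygon vertices that are then triangulated and written to the PLY file.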

Results
(Side-by-side images: original scan vs. filled scan)

Limitations
Requires good lighting
–Hard shadows create false edges
–Saturation washes out edges

Limitations
Weak borders
–Not perfectly straight
–A result of the edge detection and morphological operations

Future Work
Try it on some real data
Straighter border edges
–Canny edge detection combined with multiple line fittings through RANSAC
Remove the mesh-plane specification
–No longer require the user to specify the plane in the model where the polygon lies
–Extract depth information from multiple photos and epipolar geometry
Object recognition
–Template matching
–Database of models