Presentation transcript:

A new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Combines both geometry-based and image-based techniques.

Current geometry-based methods -- a modeling program is used to manually position the elements of the scene. Drawbacks: an extremely labor-intensive process (surveying the site, locating and digitizing architectural plans, or converting existing CAD data); it is difficult to verify the accuracy of the model; and the resulting models look noticeably computer-generated.

Image-based system – uses photographs as input and produces photorealistic renderings as output. Some image-based systems rely on the computer vision technique of computational stereopsis to determine the structure of the scene from the multiple photographs available. These systems are only as strong as the underlying stereo algorithms.

Stereo algorithms' weaknesses: the photographs need to appear very similar for reliable results to be obtained; many closely spaced images are required; and in some cases significant amounts of user input are needed for each image pair to supervise the stereo algorithm. Systems based on stereo algorithms are therefore not efficient for creating large-scale, freely navigable virtual environments from photographs.

The new method combines geometry-based and image-based methods and requires only a sparse set of photographs to produce realistic renderings from arbitrary viewpoints. It introduces three new modeling and rendering techniques: photogrammetric modeling, view-dependent texture mapping, and model-based stereo.

Photogrammetric Modeling The computer determines the parameters of a hierarchical model of parametric polyhedral primitives to reconstruct the architectural scene. Façade -- interactive modeling program that allows the user to construct a geometric model of a scene from digitized photographs.

Photogrammetric Modeling The user: selects a number of photographs; instantiates the components of the model; marks edges in the images; and corresponds the edges in the images to the edges in the model. Blocks -- parameterized (length, width, and height) geometric primitives such as boxes, prisms, and surfaces of revolution.

Photogrammetric Modeling A photograph of the Campanile, Berkeley’s clock tower.

Photogrammetric Modeling Each coordinate of each vertex of the block is expressed as a linear combination of the block's parameters, relative to an internal coordinate frame; for example, P0 = (-width, -height, length)^T. Each block has an associated bounding box.
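A minimal sketch (hypothetical names, not the actual Façade code) of how a box block's vertices can be generated as linear combinations of its parameters, consistent with the P0 example above:

import numpy as np
from itertools import product

def box_vertices(width, height, length):
    """Return the 8 vertices of a box block in its internal frame.

    Each coordinate is a signed linear combination of the block
    parameters, e.g. P0 = (-width, -height, length)^T.
    """
    verts = []
    for sx, sy, sz in product((-1.0, 1.0), repeat=3):
        verts.append(np.array([sx * width, sy * height, sz * length]))
    return verts

def bounding_box(vertices):
    """Axis-aligned bounding box of a block, from its vertices."""
    pts = np.stack(vertices)
    return pts.min(axis=0), pts.max(axis=0)

# Example: a tower-like block
verts = box_vertices(width=2.0, height=2.0, length=10.0)
lo, hi = bounding_box(verts)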

Photogrammetric Modeling Blocks are organized in a hierarchical tree structure. Relations -- spatial relationships between blocks. The relation between a block and its parent is represented as a rotation matrix R and a translation vector t.
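As a sketch (with assumed class and method names), chaining each block's relation (R, t) up the tree gives the transform from a block's internal frame to the world frame:

import numpy as np

class BlockNode:
    """A node in the hierarchical model tree.

    R, t give the spatial relation of this block relative to its parent:
    a point p in this block's frame maps to R @ p + t in the parent frame.
    """
    def __init__(self, R=np.eye(3), t=np.zeros(3), parent=None):
        self.R = np.asarray(R, float)
        self.t = np.asarray(t, float)
        self.parent = parent

    def to_world(self, p):
        """Map a point from this block's internal frame to the world frame."""
        p = np.asarray(p, float)
        node = self
        while node is not None:
            p = node.R @ p + node.t
            node = node.parent
        return p

# Example: a clock-tower block sitting on top of a base block
base  = BlockNode(t=np.array([0.0, 0.0, 0.0]))
tower = BlockNode(t=np.array([0.0, 0.0, 10.0]), parent=base)
world_point = tower.to_world([1.0, 1.0, 0.0])   # -> [1, 1, 10]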

Photogrammetric Modeling The reconstruction algorithm works by minimizing an objective function O that sums the disparity between the projected edges of the model and the edges marked in the images: O = Σ_i Err_i, where Err_i represents the disparity computed for edge feature i.
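A toy sketch of this kind of objective (hypothetical names, greatly simplified from the paper's formulation): the residual for each marked edge is the distance between the projected model edge and the observed image line, and the block parameters are recovered by nonlinear least squares.

import numpy as np
from scipy.optimize import least_squares

def project(K, R, t, X):
    """Pinhole projection of a 3D point X into image coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def point_line_distance(p, line):
    """Distance from 2D point p to the line a*x + b*y + c = 0."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def residuals(params, cameras, observed_lines):
    """Disparity Err_i for each marked edge feature i (toy version).

    params         : block parameters, here just (width, height, length)
    cameras        : list of (K, R, t) tuples, one per photograph
    observed_lines : list of marked image lines (a, b, c), one per camera
    """
    w, h, l = params
    # hypothetical: one vertical edge of the box, from (-w,-h,-l) to (-w,-h,l)
    edge = [np.array([-w, -h, -l]), np.array([-w, -h, l])]
    errs = []
    for (K, R, t), line in zip(cameras, observed_lines):
        for X in edge:
            errs.append(point_line_distance(project(K, R, t, X), line))
    return errs

# Usage (given cameras and observed_lines built from the marked edges):
# result = least_squares(residuals, np.array([1.0, 1.0, 5.0]),
#                        args=(cameras, observed_lines))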

Photogrammetric Modeling University High School in Urbana, Illinois.

View-Dependent Texture-Mapping The original photographs are projected from their camera positions onto the model. Some parts of the model can shadow others with respect to a camera; such shadowed regions are determined with an image-space shadow map algorithm. To render the model from a novel point of view, multiple images are used.
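A simplified sketch of the idea behind such an image-space visibility test (assumed structure, not the paper's renderer): keep the nearest depth per photo pixel, then treat a surface point as visible in that photograph only if its depth matches the stored nearest depth.

import numpy as np

def build_depth_map(points_cam, K, image_shape):
    """Nearest-depth buffer (a crude shadow map) from 3D points in camera coordinates."""
    H, W = image_shape
    depth = np.full((H, W), np.inf)
    for X in points_cam:
        if X[2] <= 0:
            continue                      # behind the camera
        u, v = (K @ X)[:2] / X[2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < W and 0 <= vi < H:
            depth[vi, ui] = min(depth[vi, ui], X[2])
    return depth

def visible(X, K, depth, eps=1e-2):
    """True if camera-space point X is not occluded by a nearer surface."""
    u, v = (K @ X)[:2] / X[2]
    d = depth[int(round(v)), int(round(u))]
    return X[2] <= d + eps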

View-Dependent Texture-Mapping University High School in Urbana, Illinois.

View-Dependent Texture-Mapping If a pixel is mapped in only one rendering, its value from that rendering is used in the composite. If it is mapped in more than one rendering, the renderer has to decide which image to use. The pixel in the virtual view corresponding to the point on the model is assigned a weighted average of the corresponding pixels in actual views 1 and 2. The weights w1 and w2 are inversely proportional to the magnitude of angles a1 and a2.
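A minimal sketch of the weighting rule described above (function names assumed): for a model point seen in several photographs, each photograph's weight is inversely proportional to the angle between its viewing ray and the virtual view's ray.

import numpy as np

def view_angle(point, virtual_cam, actual_cam):
    """Angle between the rays from the virtual and actual camera centers to the point."""
    v1 = point - virtual_cam
    v2 = point - actual_cam
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def blend(point, virtual_cam, cams, colors, eps=1e-6):
    """Weighted average of the pixel colors from the actual views.

    Weights w_i are inversely proportional to the angles a_i, then normalized.
    """
    angles = np.array([view_angle(point, virtual_cam, c) for c in cams])
    w = 1.0 / (angles + eps)
    w /= w.sum()
    return sum(wi * np.asarray(ci, float) for wi, ci in zip(w, colors))

# Example with two actual views:
p  = np.array([0.0, 0.0, 10.0])
c1 = np.array([-1.0, 0.0, 0.0]); c2 = np.array([3.0, 0.0, 0.0])
virtual = np.array([0.5, 0.0, 0.0])
color = blend(p, virtual, [c1, c2],
              [np.array([200, 180, 160]), np.array([190, 170, 150])])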

View-Dependent Texture-Mapping Unwanted object in the original photograph: mask out the object with a reserved color; set the weights for any pixels corresponding to the masked regions to zero; decrease the weights of the pixels near the boundary, as before, to minimize seams. Any regions in the composite image which are occluded in every projected image are filled in using a hole-filling algorithm.
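A short sketch (assumed helper name) of the masking step: zero the weights on masked pixels and feather the weights near the mask boundary with a distance transform, so that seams are minimized.

import numpy as np
from scipy.ndimage import distance_transform_edt

def masked_weights(weights, mask, feather=10.0):
    """Zero the weights inside the mask and ramp them up smoothly outside it.

    weights : per-pixel weight map for one projected photograph
    mask    : boolean array, True where the unwanted object was painted
              with the reserved color
    feather : width (in pixels) of the transition region near the boundary
    """
    dist = distance_transform_edt(~mask)          # distance to nearest masked pixel
    ramp = np.clip(dist / feather, 0.0, 1.0)      # 0 at the mask, 1 beyond the feather band
    return weights * ramp

# Example
w = np.ones((100, 100))
m = np.zeros((100, 100), dtype=bool); m[40:60, 40:60] = True
w_masked = masked_weights(w, m)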

Model-Based Stereopsis Measures how the actual scene deviates from the approximate model. Places the images into a common frame of reference that makes stereo correspondence possible even for images taken relatively far apart. The stereo correspondence information can then be used to render novel views of the scene using image-based rendering techniques. Model-based stereo computes the associated depth map for the key image by determining corresponding points in the key and offset images.

Model-Based Stereopsis The warped offset image is produced by projecting the offset image onto the approximate model and viewing it from the position of the key camera. This projection eliminates most of the disparity and foreshortening with respect to the key image, greatly simplifying stereo correspondence.
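A sketch of producing the warped offset image (hypothetical function name, nearest-neighbour sampling): for each key-image pixel, back-project onto the approximate model, project that 3D point into the offset camera, and sample the offset photograph there.

import numpy as np

def warp_offset_image(offset_img, depth_key, K_key, K_off, R, t):
    """Project the offset image onto the approximate model, viewed from the key camera.

    depth_key : per-pixel depth of the approximate model as seen from the key camera
    R, t      : pose of the offset camera relative to the key camera
    """
    H, W = depth_key.shape
    warped = np.zeros((H, W) + offset_img.shape[2:], dtype=offset_img.dtype)
    Kk_inv = np.linalg.inv(K_key)
    for v in range(H):
        for u in range(W):
            # back-project the key pixel onto the approximate model
            X = depth_key[v, u] * (Kk_inv @ np.array([u, v, 1.0]))
            # project that model point into the offset camera
            x = K_off @ (R @ X + t)
            if x[2] <= 0:
                continue
            uo, vo = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
            if 0 <= uo < offset_img.shape[1] and 0 <= vo < offset_img.shape[0]:
                warped[v, u] = offset_img[vo, uo]
    return warped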

Model-Based Stereopsis For determining stereo correspondence, a stereo algorithm is used. The warping step makes it dramatically easier to determine the correspondences.

Future Work The use of surfaces of revolution (e.g. domes, columns, and minarets) in the photogrammetric modeling approach. The system should be able to make better use of photographs taken in varying lighting conditions, and it should be able to render images of the scene as it would appear at any time of day, in any weather, and with any configuration of artificial light.

Future Work Selecting which original images to use when rendering a novel view of the scene. This problem is especially difficult when the available images are taken at arbitrary locations. The weighting function still allows seams to appear in renderings and does not consider issues arising from image resampling. Another form of view selection is required to choose which pairs of images should be matched to recover depth in the model-based stereo algorithm.

THANK YOU