
The Story So Far
The algorithms presented so far exploit:
– Sparse sets of images (some data may not be available)
– User help with correspondences (time consuming)
– Extensive use of image warps
The trade-off: small amounts of data, but more complex algorithms
Sampling remains a problem:
– Images tend to appear blurry
– Relatively little work on reconstruction algorithms

Light Field Rendering or Lumigraphs
Aims:
– Sample the plenoptic function, or light field, densely
– Store the samples in a data structure that is easy to access
– Rendering is simply averaging of samples
The plenoptic function gives the radiance passing through a point in space in a particular direction
In free space, it gives the radiance along a line:
– Recall that radiance is constant along a line
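Written out (the slide gives it only in words), the free-space reduction takes the five-dimensional plenoptic function down to a four-dimensional light field, since only the oriented line matters:

```latex
% 5D plenoptic function: radiance at position (x,y,z) in direction (theta,phi).
% Radiance is constant along a line in free space, so only the oriented
% line matters, leaving the 4D light field L(s,t,u,v).
\underbrace{P(x, y, z, \theta, \phi)}_{\text{5D plenoptic function}}
\;\xrightarrow{\ \text{free space}\ }\;
\underbrace{L(s, t, u, v)}_{\text{4D light field}}
```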

Storing Light Fields
Each sample of the light field represents radiance along a line
Required operations:
– Store the radiance associated with an oriented line
– Look up the radiance of lines that are "close" to a desired line
Hence, we need some way of describing, or parameterizing, oriented lines:
– A line is a 4D object
– There are several possible parameterizations
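A minimal sketch of these two operations, assuming a regular 4D grid over a light slab with nearest-neighbor lookup; the class name and storage layout here are illustrative, not the original system's:

```python
import numpy as np

class LightSlab:
    """Regular 4D grid of radiance samples over (s, t, u, v) in [0, 1)."""
    def __init__(self, ns, nt, nu, nv):
        self.dims = (ns, nt, nu, nv)
        self.data = np.zeros(self.dims + (3,))  # RGB radiance per line

    def _index(self, s, t, u, v):
        # Quantize continuous line parameters to the nearest grid cell.
        return tuple(min(int(x * n), n - 1)
                     for x, n in zip((s, t, u, v), self.dims))

    def store(self, s, t, u, v, radiance):
        self.data[self._index(s, t, u, v)] = radiance

    def lookup(self, s, t, u, v):
        # Nearest-neighbor lookup: "close" lines fall in the same cell.
        return self.data[self._index(s, t, u, v)]
```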

Parameterizing Oriented Lines
Desirable properties:
– Efficient conversion from lines to parameters
– Control over which subset of lines is of interest
– Ease of uniform sampling of lines in space
Parameterize lines by their intersections with two planes in arbitrary positions:
– Take (s,t) as the intersection of the line with one plane and (u,v) as its intersection with the other: L(s,t,u,v)
– Light slab: use two quadrilaterals (squares) and restrict each of s,t,u,v to (0,1)
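A sketch of the line-to-parameter conversion. The papers allow the two planes to sit in arbitrary positions; to keep the math short, this assumes the parallel planes z = 0 (the (s,t) plane) and z = 1 (the (u,v) plane):

```python
import numpy as np

def ray_to_slab(origin, direction, z_st=0.0, z_uv=1.0):
    """Intersect an oriented line with the two slab planes to get
    L(s,t,u,v) coordinates. Assumes parallel, axis-aligned planes
    and a line not parallel to them."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    if abs(d[2]) < 1e-12:
        raise ValueError("line is parallel to the slab planes")
    s, t = (o + ((z_st - o[2]) / d[2]) * d)[:2]
    u, v = (o + ((z_uv - o[2]) / d[2]) * d)[:2]
    return s, t, u, v
```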

Line Space
An alternate parameterization is line space:
– Better for looking at subsets of lines and verifying sampling patterns
– In 2D, parameterize lines by their angle with the x-axis and their perpendicular distance from the origin
– Extension to 3D is straightforward
Every line in space maps to a point in line space, and vice versa:
– The two spaces are dual
– Some operations are much easier in one space than the other
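A small sketch of the 2D case, using the common convention that a line ax + by + c = 0 maps to the angle of its normal and its signed distance from the origin (the slide's "angle with the x-axis" is the same idea up to a 90-degree offset):

```python
import math

def line_to_dual_point(a, b, c):
    """Map the 2D line ax + by + c = 0 to its dual point (theta, r).
    Assumes (a, b) != (0, 0)."""
    norm = math.hypot(a, b)
    theta = math.atan2(b, a)  # orientation of the line's normal
    r = -c / norm             # signed perpendicular distance from origin
    return theta, r
```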

Verifying Sampling Patterns

Capturing Light Fields
Render synthetic images, or capture digitized photographs:
– Use a gantry to carefully control which images are captured
  Makes it easy to control the light field sampling pattern
  Hard to build the gantry
– Use a video camera
  Easy to acquire the images
  Hard to control the sampling pattern

Render synthetic images
Decide which line you wish to sample and cast a ray, or
Render an array of images from points on the (u,v) plane:
– Pixels in the images are points on the (s,t) plane
Antialiasing is essential, both in (s,t) and (u,v):
– Standard antialiasing and aperture filtering
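A sketch of the second option, rendering one image per (u,v) grid point; render_image here is a hypothetical renderer (ray tracer or rasterizer) supplied by the caller, not part of any described system:

```python
import numpy as np

def sample_light_slab(render_image, nu, nv, uv_plane_z=1.0):
    """Render an array of images from eye points on the (u,v) plane;
    each pixel of each image is then an (s,t) sample of the slab."""
    images = []
    for i in range(nu):
        for j in range(nv):
            u, v = (i + 0.5) / nu, (j + 0.5) / nv
            eye = np.array([u, v, uv_plane_z])
            images.append(render_image(eye))
    # A full implementation would also antialias: supersample pixels
    # for (s,t), and jitter the eye within its grid cell for (u,v)
    # (aperture filtering). Both are omitted in this sketch.
    return images
```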

Tightly Controlled Capture
Use a computer-controlled gantry to move a camera to fixed positions and take digital images
Looks in at an object from outside:
– Must acquire multiple slabs to get full coverage
– Care must be taken with camera alignment and optics
The object is rotated in front of the gantry to get multiple slabs:
– Must ensure the lighting moves with the object
Effectively samples the light field on a regular grid, so rendering is easier

Capture from Hand-Held Video
Place the object on a calibrated stage:
– Colored to allow blue-screening
– Markers to allow easy determination of camera pose
Wave the camera around in front of the object:
– A feedback map helps guide where more samples are required
The camera must be calibrated beforehand
Output: a large number of non-uniform samples
Problem: have to re-sample to get a regular sampling for rendering

Re-Sampling the Light Field
Basic problem:
– Input: the set of irregular samples from the video capture process
– Output: estimates of the radiance on a regular grid in parameter space
Algorithm outline (see the sketch below):
– Use a multi-resolution algorithm to estimate radiance in under-sampled regions
– Use a binning algorithm to uniformly resample without bias
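The Lumigraph's multi-resolution step is the pull-push algorithm; the following is a heavily simplified 1D sketch of that idea, not the published algorithm. It assumes the irregular samples have already been binned onto a grid of values with confidence weights in [0,1], and that the grid length is a power of two:

```python
import numpy as np

def pull_push(values, weights):
    """Pull: build a coarser level by weighted averaging of pairs.
    Push: fill low-confidence cells from the coarser estimate."""
    if len(values) == 1:
        return values
    # Pull phase: weighted average of adjacent pairs.
    w2 = weights[0::2] + weights[1::2]
    v2 = np.where(w2 > 0,
                  (values[0::2] * weights[0::2] +
                   values[1::2] * weights[1::2]) / np.maximum(w2, 1e-9),
                  0.0)
    coarse = pull_push(v2, np.minimum(w2, 1.0))
    # Push phase: blend the coarse estimate into under-sampled cells.
    return values * weights + np.repeat(coarse, 2) * (1.0 - weights)
```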

Compression
Light field samples must be dense for good rendering
Dense light fields are big: 1.6GB
– When rendering, samples could come from any part of the light field
– All of the light field must be in memory for real-time rendering
– But there is lots of data redundancy, so compression should do well
Desirable compression scheme properties:
– Random access to compressed data
– Asymmetric: slow compression, fast decompression

Compression Scheme
Vector quantization:
– Compression: choose a codebook of reproduction vectors, then replace each vector in the data with the index of the "nearest" vector in the codebook
– Storage: the codebook plus the indices
– Decompression: replace each index with the corresponding vector from the codebook
Follow up with Lempel-Ziv entropy coding (gzip):
– Decompress into memory
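A sketch of vector quantization, using plain k-means to train the codebook; the original system's training details differ, but the compress/decompress asymmetry (slow to build, a single table lookup to decode) is the point:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Train a VQ codebook with Lloyd's algorithm (k-means).
    `vectors` is an (N, D) float array of light field sample blocks."""
    rng = np.random.default_rng(seed)
    book = vectors[rng.choice(len(vectors), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every vector to its nearest code (the slow part).
        dist = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(-1)
        idx = dist.argmin(axis=1)
        for c in range(k):
            if np.any(idx == c):
                book[c] = vectors[idx == c].mean(axis=0)
    return book

def vq_compress(vectors, book):
    dist = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(-1)
    return dist.argmin(axis=1).astype(np.uint16)  # store indices + codebook

def vq_decompress(indices, book):
    return book[indices]  # decompression is one table lookup per vector
```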

Alternate Compression Schemes
Neighboring "images" in (u,v) are likely to be very similar:
– The picture doesn't change much as you move the camera a little
– We know what the camera motion is
– The BRDF changes smoothly in many cases
– So use MPEG or similar to encode the sequence of images
– This has been discussed but not implemented
"Textures" should compress well:
– Use hardware rendering from compressed textures

Rendering
Ray tracing: for each pixel in the image:
– Determine the ray passing through the eye and the pixel
– Interpolate the radiance along that ray from the nearest rays in the light field
Texture mapping:
– Finding the (u,v) and (s,t) coordinates is exactly the texture mapping operation
– Use graphics hardware to do the job, or write a software texture mapper (maybe faster: only two polygons have to be texture mapped)
Use various interpolation schemes to control aliasing; one is sketched below
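One such scheme is quadrilinear interpolation, blending the 16 grid samples nearest the desired line; a sketch, assuming the regular (ns, nt, nu, nv, 3) grid of the LightSlab sketch above with each dimension at least 2:

```python
import numpy as np

def quadrilinear_lookup(data, s, t, u, v):
    """Interpolate L(s,t,u,v) from a 4D grid of RGB samples.
    Coordinates are clamped to [0, 1]."""
    cells = []
    for x, n in zip((s, t, u, v), data.shape[:4]):
        f = min(max(x, 0.0), 1.0) * (n - 1)
        i0 = min(int(f), n - 2)        # lower grid index on this axis
        cells.append((i0, f - i0))     # (index, fractional offset)
    out = np.zeros(data.shape[4:])
    for corner in range(16):           # 2 samples per axis -> 16 corners
        w, idx = 1.0, []
        for axis in range(4):
            bit = (corner >> axis) & 1
            i0, frac = cells[axis]
            idx.append(i0 + bit)
            w *= frac if bit else 1.0 - frac
        out += w * data[tuple(idx)]
    return out
```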

Exploiting Geometry
When using the video capture approach, build a geometric model:
– Use a volume carving technique
When determining the "nearest" samples for rendering, use the geometry to choose better samples (sketched below)
This has been further extended:
– The surface point used to improve sampling determines the focus
– By default we want focus at the object, so use the object geometry
– Using other surfaces gives depth of field and variable focus
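A sketch of the depth-correction idea: once the desired ray's intersection with the geometry is known, each nearby (u,v) camera sample can be given the (s,t) it actually sees for that surface point, so the blended lines all meet at the geometry. This reuses the parallel-plane setup of the earlier ray_to_slab sketch:

```python
import numpy as np

def corrected_st(uv_point, hit_point, z_st=0.0):
    """Given a camera sample on the (u,v) plane and the point where
    the desired ray hits the geometry, return the (s,t) of the line
    through both. Assumes that line is not parallel to the (s,t) plane."""
    o = np.asarray(uv_point, dtype=float)   # position on the (u,v) plane
    p = np.asarray(hit_point, dtype=float)  # ray/geometry intersection
    d = p - o
    s, t = (o + ((z_st - o[2]) / d[2]) * d)[:2]
    return s, t
```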

Surface Light Fields
Instead of storing the complete light field, store only lines emanating from the surface:
– Parameterize the surface mesh (a standard technique)
– Choose sample points on the surface
– Sample the space of rays leaving the surface from those points
– When rendering, look up nearby sample points and the appropriate sample rays (see the sketch below)
Best for rendering complex BRDF models:
– An example of view-dependent texturing
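A sketch of the rendering lookup; points and lumispheres are hypothetical inputs of this sketch, with lumispheres[i] holding unit outgoing directions and their radiances for surface sample i:

```python
import numpy as np

def surface_lf_lookup(points, lumispheres, query_point, view_dir):
    """Nearest surface sample, then its stored ray closest in
    direction to the viewing direction (maximum cosine)."""
    points = np.asarray(points, dtype=float)   # (N, 3) surface samples
    i = int(((points - query_point) ** 2).sum(axis=1).argmin())
    dirs, radiances = lumispheres[i]           # (M, 3) and (M, 3)
    j = int((dirs @ np.asarray(view_dir, dtype=float)).argmax())
    return radiances[j]
```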

Summary
Light fields capture very dense representations of the plenoptic function:
– Fields can be stitched together to give walkthroughs
– The data requirements are large
– Sampling is still not dense enough: filtering introduces blurring
Next time: using domain-specific knowledge