Image-Based Synthetic Aperture Rendering

MIT PIs: Prof. Leonard McMillan (MIT LCS), Prof. Julie Dorsey (MIT LCS), and Dr. Hiroshi Murase (NTT)

Project Goals

The primary focus of our research effort is to develop technology to create virtual experiences that approach the fidelity of the real world. In the future, such technologies will have a dramatic impact on the way we work and play. They will enable new forms of commerce, bring together individuals separated by large distances, and provide us with new forms of entertainment.

Light Field Parameterization

[Figure: the light field parameterization — reference images on a camera surface and a focal surface; a desired ray is reconstructed from the 'closest' ray of the 'closest' camera.]

We take a non-traditional approach to computer graphics modeling and rendering, in which a scene is represented by a collection of images rather than by the geometry and surface properties used in typical computer graphics. Essentially, we treat a collection of images as a database of rays. New views can be constructed from this database on a ray-by-ray basis by selecting the closest stored ray to each desired ray.

Approach

Our dynamically reparameterized light field representation allows us to synthesize images with photographic effects such as variable focus and depth of field. Depth-of-field effects are created by varying the extent of the reconstruction filters used on the camera surface. A variable focal length can be simulated by varying the focal plane used in the reconstruction process. In a synthetic aperture camera, both the aperture and focal-length settings can be varied from pixel to pixel. This allows effects that are impossible with a traditional camera.

Real-Time Acquisition

Ultimately, we intend to create a device for capturing and processing dynamically reparameterized light fields in real time. We call this device a synthetic aperture camera array. It is composed of a two-dimensional array of randomly accessible image sensors that are memory-mapped into the address space of a host processor. Such a system will allow images to be synthesized from a wide range of virtual camera positions in real time. We plan to provide multiple simultaneous video streams to support stereoscopic display as well as multiple viewers.

[Block diagram: the proposed system — a PCI interface solution connects the host to a motherboard carrying FIFO, address/data mux logic, and interleaved sensor pods (A, B, ...) on a 32-bit data path; each sensor pod contains a CMOS sensor with on-board A/D, control FPGA logic, and SDRAM.]

A high-level block diagram of our proposed system is shown above. The camera's host interface will be an industry-standard personal computer bus. The camera array will be constructed from modular sensor units mounted on a common motherboard. The addressing of the sensor modules will be interleaved in order to maximize the communication bandwidth between the image sensors and the host computer. Each sensor pod contains a CMOS image sensor, buffer memory, and glue logic. The multi-frame buffer memory serves two functions: it stores information for noise cancellation, and it allows the host to access image rays asynchronously to the image scanning process. This modular design approach will allow us to upgrade to higher-resolution sensors as they become available.
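To make the reconstruction described under Approach concrete, the following is a minimal sketch, not the authors' implementation. It assumes reference cameras on a planar z = 0 camera surface, all with identical pinhole intrinsics looking down +z, and a planar focal surface at z = focal_depth; names such as render_pixel, cam_positions, and images are illustrative.

```python
import numpy as np

def render_pixel(images, cam_positions, ray_origin, ray_dir,
                 focal_depth, aperture, focal_len, width, height):
    """Reconstruct one desired ray from the database of stored rays.

    focal_depth -- z of the focal plane; moving it refocuses the image.
    aperture    -- radius on the camera plane; widening it narrows the
                   depth of field, as with a physical lens.
    """
    # Intersect the desired ray with the focal plane: all stored rays
    # through this point are treated as equivalent to the desired ray.
    t = (focal_depth - ray_origin[2]) / ray_dir[2]
    p = ray_origin + t * ray_dir

    # Center the synthetic aperture where the desired ray crosses the
    # camera plane (z = 0).
    t0 = -ray_origin[2] / ray_dir[2]
    center = ray_origin + t0 * ray_dir

    samples = []
    for key, cam in cam_positions.items():
        # Keep only the 'closest' cameras: those inside the aperture.
        if np.hypot(cam[0] - center[0], cam[1] - center[1]) > aperture:
            continue
        # Re-project the focal-plane point into this reference image.
        d = p - cam
        x = focal_len * d[0] / d[2] + width / 2.0
        y = focal_len * d[1] / d[2] + height / 2.0
        if 0 <= x < width and 0 <= y < height:
            samples.append(images[key][int(y), int(x)])

    # A box filter over the aperture; the variable reconstruction
    # filters described above generalize this choice per pixel.
    return np.mean(samples, axis=0) if samples else np.zeros(3)
```

Because focal_depth and aperture are simply parameters of this lookup, they can be changed from pixel to pixel, which is what enables the per-pixel focus and aperture control described above.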
Low-Cost Acquisition

We have also developed two low-cost systems for acquiring light fields of static scenes. The first uses a robotic XY-platform to move a digital camera. This system allows us to explore the trade-offs between camera spacing and resolution in order to estimate the performance of our camera array. It uses a precision image sensor, precision optics, and a motion platform with a travel range of approximately one meter square. It can acquire a 16-by-16-image light field in under 20 minutes, and it cost approximately $10,000 US to construct. Our second system is based on an off-the-shelf flatbed scanner and an array of plastic lenses. We have modified the scanner to operate on battery power so that it can be taken into the field to acquire images. Additional processing is required to correct for shortcomings of the image sensor and the low-cost optics. Nonetheless, the system can acquire an 8-by-12-image light field in under 3 minutes, and it cost under $100.

Display Technology

We have also developed techniques for direct autostereoscopic viewing of our light fields. These methods are similar to various lenticular techniques for viewing stereo images. Our synthetic aperture generation approach provides much greater flexibility than traditional optical approaches; in particular, it can overcome limitations in areas such as focus control and skewed frustums. We have demonstrated viewers with true parallax (both horizontal and vertical) and variable, controlled focus. Our displays have nearly all of the desirable properties of holograms, yet they are true color and viewable under normal lighting. Furthermore, the technology is easily adaptable to the display of dynamic 3-D images. Currently we are limited only by the resolution of flat-panel displays.

[Figure: a display image that, viewed through a hexagonal lens array, appears as a three-dimensional image of a flower visible to multiple viewers simultaneously. It was computed from a dynamically reparameterized light field, which allows us to precisely control the focus at all viewing angles. The inset provides a magnified view of the image.]
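The display image itself is produced by interleaving the rendered views behind the lens array. The sketch below is a conceptual illustration under our own simplifying assumptions: a square lens array with one lenslet per spatial sample and a K x K grid of views, whereas the prototype described above uses a hexagonal array. interleave_for_lens_array and the array layout are hypothetical.

```python
import numpy as np

def interleave_for_lens_array(light_field):
    """Compose an integral-imaging display image from a light field.

    light_field has shape (K, K, H, W, 3): a K x K grid of views, each
    H x W pixels. Each lenslet covers a K x K block of display pixels
    and sends pixel (a, b) of its block toward viewing direction
    (a, b), giving each eye position a different view -- true
    horizontal and vertical parallax without glasses.
    """
    K, K2, H, W, C = light_field.shape
    assert K == K2, "expected a square grid of views"
    display = np.zeros((H * K, W * K, C), dtype=light_field.dtype)
    for a in range(K):
        for b in range(K):
            # The flipped indices account for the lenslet inverting the
            # ray direction between the display pixel and the viewer.
            display[a::K, b::K] = light_field[K - 1 - a, K - 1 - b]
    return display
```

Each of the K x K input views would itself be rendered with the reparameterized reconstruction sketched earlier, which is how the focus of the displayed image can be controlled at all viewing angles.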