Sheila Frederixon, Matt Gillett, Amy Gracik. Stereoscopics is the technology that combines two separate images to create a single 3D image. It is the most widely used form of 3D, from cinemas to home theaters and video games. This is possible because the human eyes collect light independently of each other, so the image we see is the combination of two 2D images.

Using Shutters to View Stereoscopic Images. Keys to home theater 3D: the video must be captured with two cameras, or separated into two views in complex computer renderings. The video is sent to the TV as two separate images that alternate at a rate of 240 Hz. The liquid crystal shutter glasses close and open independently at 120 Hz in order to separate the images seen by each of the observer's eyes. The glasses and TV alternate between the left and right images in sync via an IR or Bluetooth link. Your brain and eyes cannot operate quickly enough to separate the two images, so they are combined into a single binocular 3D image, which allows you to see depth.
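
To make the frame/shutter interleaving concrete, here is a minimal timing sketch in Python, assuming the rates quoted above (a 240 Hz panel alternating left/right images, so each eye's shutter opens 120 times per second); the schedule and names are illustrative, not a vendor specification.

```python
# Minimal sketch of alternate-frame 3D timing, assuming a 240 Hz panel that
# alternates left/right images and shutter glasses synced to it (the IR /
# Bluetooth sync link itself is not modeled here).

PANEL_HZ = 240           # panel refresh rate quoted on the slide
SLOT = 1.0 / PANEL_HZ    # duration of one refresh slot, in seconds


def schedule(num_slots=8):
    """Yield (time_s, image_shown, left_open, right_open) for each refresh slot."""
    for slot in range(num_slots):
        image = "L" if slot % 2 == 0 else "R"   # panel alternates L/R every slot
        # The synced glasses open only the shutter that matches the shown image,
        # so each eye sees its own view 120 times per second.
        yield slot * SLOT, image, image == "L", image == "R"


if __name__ == "__main__":
    for t, image, left_open, right_open in schedule():
        print(f"t = {t * 1000:5.2f} ms  image = {image}  "
              f"left {'open  ' if left_open else 'closed'}  "
              f"right {'open' if right_open else 'closed'}")
```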

How Shutter Glasses Work. Since the human eye operates at between 48 and 60 Hz (depending on the study), shutter glasses manufacturers design the glasses to operate at 60 Hz; that is, each lens turns on and off in 1/60th of a second. However, during the window in which a lens is open, the LCD is clear for only half the time and shaded for the other half, so in effect the system behaves more like 240 Hz. This is why you need a 240 Hz TV in order to get a truly smooth 3D image. (See movie.)
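
One way to lay the arithmetic out so that it arrives at the 240 Hz figure is sketched below in LaTeX (assuming amsmath). Reading the clear interval as half of each eye's open half-cycle is an assumption on our part; the slide's wording leaves the exact duty cycle ambiguous.

```latex
% Assumes \usepackage{amsmath}.
\begin{align*}
T_{\text{cycle}} &= \tfrac{1}{60}\ \text{s}  && \text{full open/close cycle of one lens}\\
T_{\text{open}}  &= \tfrac{1}{2}\,T_{\text{cycle}} = \tfrac{1}{120}\ \text{s} && \text{half-cycle in which that lens is open}\\
T_{\text{clear}} &= \tfrac{1}{2}\,T_{\text{open}} = \tfrac{1}{240}\ \text{s} && \text{portion of the open window that is actually clear}\\
f_{\text{panel}} &\ge \tfrac{1}{T_{\text{clear}}} = 240\ \text{Hz} && \text{the panel must hold a stable frame in each clear slot}
\end{align*}
```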

How to Develop a 3D Image with Cameras. Two equations describe the perspective positions of the two 2D image points that correspond to a single point in 3D space. A and A' are 3x3 matrices that specify the intrinsic parameters of the first and second camera respectively. P_n is a 3x4 matrix that normalizes the perspective projection. D is a 4x4 matrix containing a rotation, R, and a translation, t; it transforms the 3D point from the real-world coordinate system into the camera coordinate system. If we rearrange the first equation and make it dependent on the depth value Z, and then substitute the result into the second equation, we find the depth-dependent relation between the two perspective views of the same 3D scene (see the sketch below).
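
A hedged reconstruction of these equations in LaTeX (assuming amsmath), following the standard two-view formulation used in the cited Fehn DIBR paper and the symbols named above (A, A', P_n, D built from R and t, depth Z); the symbols m̃, m̃' and M̃ for the homogeneous image and scene points are notational assumptions.

```latex
% Perspective projections of the same homogeneous 3-D point \tilde{M} into the
% two views (homogeneous image points \tilde{m}, \tilde{m}'; depths Z, Z'):
\[
Z\,\tilde{m} = \mathbf{A}\,\mathbf{P}_n\,\tilde{M},
\qquad
Z'\,\tilde{m}' = \mathbf{A}'\,\mathbf{P}_n\,\mathbf{D}\,\tilde{M},
\qquad
\mathbf{D} = \begin{bmatrix}\mathbf{R} & \mathbf{t}\\ \mathbf{0}^{\top} & 1\end{bmatrix}.
\]

% With the first camera placed at the world origin, \mathbf{P}_n\tilde{M}
% reduces to the Cartesian point M, so the first equation can be solved for M
% in terms of the depth Z:
\[
M = Z\,\mathbf{A}^{-1}\tilde{m}.
\]

% Substituting this into the second projection gives the depth-dependent
% relation between the two perspective views of the same scene point:
\[
Z'\,\tilde{m}' = Z\,\mathbf{A}'\,\mathbf{R}\,\mathbf{A}^{-1}\tilde{m} + \mathbf{A}'\,\mathbf{t}.
\]
```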

When Creating a Stereoscopic Image. On a stereoscopic 3D TV display there are two slightly different perspective views of a 3D scene, so a slightly altered variant of the previous equations applies. Two cameras are still being used; the difference lies in which part of the 3D scene is to be reproduced exactly on the display screen. Mathematically this is done via the shift-sensor approach, which is formulated as a displacement of the camera's principal point: there is a horizontal shift h in the second camera. In this case R = I, where I is the identity matrix. These simplifications reduce the original 3D imaging equation we found to the form sketched below.
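
A hedged sketch of that simplified relation (assuming amsmath), following the shift-sensor formulation in the cited Fehn paper: R = I, a purely horizontal camera translation t = (t_x, 0, 0)^T, and intrinsic matrices that differ only by the horizontal principal-point shift h. The symbol α_u for the focal length expressed in pixel units, and the sign conventions, are assumptions.

```latex
% With R = I, t = (t_x, 0, 0)^T and intrinsics that differ only by the
% horizontal principal-point shift h, the depth-dependent warping collapses
% to a purely horizontal pixel displacement (\alpha_u = focal length in pixel
% units, Z = depth of the scene point):
\[
x' = x + \frac{\alpha_u\, t_x}{Z} + h, \qquad y' = y.
\]
```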

References. Fehn, Christoph. "Depth-Image-Based Rendering (DIBR), Compression and Transmission for a New Approach on 3D-TV." Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut. Print. Ozaktas, Haldun M., and Levent Onural. Three-Dimensional Television: Capture, Transmission, Display. Berlin: Springer. Print. "Why 120Hz is not enough." Online video.