Stereopsis – Depth Perception

How do we perceive depth?
- The left and right eyes see slightly different images
- The images are forwarded to the brain
- The brain combines the two images

Image Construction

How does the brain combine two images into one?
- Horizontal disparity: the difference in the horizontal position of a point between the two images
- The brain infers depth from this disparity: the greater the disparity, the closer the point must be
- A small disparity implies the object is farther away
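To make the inverse relationship concrete, here is a minimal sketch of standard stereo triangulation, assuming a simple two-camera model; the focal length and baseline values are illustrative numbers, not from the slides.

```python
# Toy stereo model: a point at depth Z projects with horizontal disparity
# d = f * b / Z (f: focal length in pixels, b: camera baseline),
# so depth can be recovered as Z = f * b / d.

def depth_from_disparity(disparity_px: float, focal_px: float = 800.0,
                         baseline_m: float = 0.065) -> float:
    """Recover depth in meters from horizontal disparity in pixels."""
    return focal_px * baseline_m / disparity_px

# Larger disparity means a closer object; smaller disparity, a farther one.
for d in (40.0, 10.0, 2.5):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
```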

Can you see Stereoscopic Images?

Problem

We can simulate this phenomenon in graphics by creating two images of the same scene and combining them to form one image. But how do you represent the information from two separate images in a single image?

Stereoscope

One of the first solutions:
- Each eye can see only one of the two images
- The brain combines them into one image

Problems with the Stereoscope
- Awkward
- Doesn't translate well to other applications, such as movies
- The viewer must keep their head still in a fixed position

A better solution

It would be better to use only one actual image. What if we divided the information contained in a pixel into parts? We could take one category of information strictly from the left image, and another category from the right image.

Share the Pixel!
- The color of a pixel has Red, Green, and Blue components, and each pixel has a value for each
- Use only the Red and Blue values from the left image; disregard its Green
- Use only the Green values from the right image; disregard its Red and Blue
(See the sketch below.)
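A minimal sketch of this channel split with NumPy, assuming the two rendered views are H x W x 3 RGB float arrays; the function name `trioscopic_anaglyph` is my own label, not from the slides.

```python
import numpy as np

def trioscopic_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Merge two H x W x 3 RGB images: Red and Blue come from the left
    view, Green comes from the right view."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]   # Red from the left image
    out[..., 2] = left[..., 2]   # Blue from the left image
    out[..., 1] = right[..., 1]  # Green from the right image
    return out

# Example with random stand-ins for the two camera views:
rng = np.random.default_rng(0)
left, right = rng.random((4, 4, 3)), rng.random((4, 4, 3))
anaglyph = trioscopic_anaglyph(left, right)
```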

Two Cameras

[Diagram: a pixel of the left image and a pixel of the right image combine into one pixel of the Trioscopic 3D image]

Trioscopic Glasses

Color Schemes

Scheme        | Left eye                             | Right eye                            | Color rendering
red-green     | pure red                             | pure green                           | monochrome
red-blue      | pure red                             | pure blue                            | monochrome
red-cyan      | pure red                             | pure cyan (green+blue)               | color (poor reds, good greens)
anachrome     | dark red                             | cyan (green+blue+some red)           | color (poor reds)
mirachrome    | dark red + lens                      | cyan (green+blue+some red)           | color (poor reds)
Trioscopic    | pure green                           | pure magenta (red+blue)              | color (better reds and oranges, wider range of blues than red-cyan)
INFICOLOR     | complex magenta                      | complex green                        | color (almost full, pleasant natural colors with excellent skin-tone perception)
ColorCode 3D  | amber (red+green+neutral grey)       | pure dark blue (+optional lens)      | color (almost full-color perception)
magenta-cyan  | magenta (red+blue)                   | cyan (green+blue)                    | color (better than red-cyan)
Infitec       | white (R 629 nm, G 532 nm, B 446 nm) | white (R 615 nm, G 518 nm, B 432 nm) | color (full color)

Light Filtering

Merge the two images so that the final image takes all of its Red and Blue values from the left image and all of its Green values from the right image. Then place colored cellophane filters over the eyes: one magenta and one green. The magenta cellophane blocks green light, so the left eye sees only the Red and Blue values.

Light Filtering

The green cellophane blocks red and blue light, so the right eye sees only the Green values. This way, each eye sees a completely separate image. The brain combines these images and infers depth from the horizontal disparity, as sketched below.
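An idealized model of the filters as per-channel transmission masks; this treats each filter as perfectly passing or blocking a channel, which real cellophane only approximates, and the random array is a stand-in for the merged image above.

```python
import numpy as np

# Idealized RGB transmission masks: magenta passes red and blue,
# green passes only green.
MAGENTA = np.array([1.0, 0.0, 1.0])  # left eye
GREEN   = np.array([0.0, 1.0, 0.0])  # right eye

rng = np.random.default_rng(0)
anaglyph = rng.random((4, 4, 3))     # stand-in for the merged image

left_eye_view  = anaglyph * MAGENTA  # keeps R and B, zeroes G
right_eye_view = anaglyph * GREEN    # keeps only G
```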

Stereoscopy References
- Polarizers
- Color schemes

Traditional CG Camera Model

Pinhole camera:
- All rays come from a single point
- Everything is in perfect focus, which is unrealistic
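For contrast with the thin-lens model later, here is a minimal pinhole ray generator, assuming a camera at the origin looking down -z; the function name and the 60-degree field of view are illustrative choices, not from the slides.

```python
import numpy as np

def pinhole_ray(px: float, py: float, width: int, height: int,
                fov_deg: float = 60.0):
    """Ray through pixel (px, py): every ray originates at the single
    pinhole at the origin, so everything is in perfect focus."""
    aspect = width / height
    half = np.tan(np.radians(fov_deg) / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    direction = np.array([x, y, -1.0])
    return np.zeros(3), direction / np.linalg.norm(direction)

origin, direction = pinhole_ray(320, 240, 640, 480)
```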

Depth of Field

The human eye has a limited depth of field: only one region of the scene is "in focus". Other areas in the field of view appear blurrier the farther they are from the focal point along the viewing direction. Cameras show the same behavior.

[Image: camera depth-of-field blur]

How do we simulate Depth of Field in Rendering?

Create a blurring effect that is more or less severe depending on the distance from the focal point. Two approaches:
- In object space (in-rendering): distributed ray tracing, a.k.a. stochastic ray tracing
- In image space (post-rendering): per-pixel blur level control

Distributed Ray Tracing

Thin-lens quantities (P: distance to the point being shaded, D: the in-focus distance):
- F: focal length
- n: aperture number
- C: circle of confusion

V_P = FP / (P - F)
V_D = FD / (D - F)
C = (|V_D - V_P| / V_D) * (F / n)
r = (1/2) * (F / n) * (D - P) / P
R = (-V_P / D) * r
R = (1/2) * C
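A minimal sketch of how these quantities are used in practice, assuming the standard thin-lens sampling from Cook-style distributed ray tracing: jitter the ray origin over a lens disk of radius F/(2n) and aim every sample at the point where the central ray crosses the focal plane. Variable names and the example parameters are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin_lens_ray(d: np.ndarray, F: float, n: float, D: float):
    """One depth-of-field sample for a pixel whose pinhole ray direction
    is d (with d[2] < 0). The lens is a disk of radius F/(2n) at the
    origin and the focal plane sits at z = -D: points at depth D stay
    sharp, while others spread into a circle of confusion."""
    lens_radius = 0.5 * F / n
    r = lens_radius * np.sqrt(rng.random())      # uniform over the disk
    theta = 2.0 * np.pi * rng.random()
    lens_point = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    focus_point = d * (-D / d[2])                # central ray meets z = -D
    new_dir = focus_point - lens_point
    return lens_point, new_dir / np.linalg.norm(new_dir)

# Averaging many such rays per pixel produces the blur predicted by C:
origin, direction = thin_lens_ray(np.array([0.1, 0.0, -1.0]),
                                  F=0.05, n=2.8, D=3.0)
```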

Distributed Ray Tracing

Per-pixel blur level control

Save the depth information for each pixel in the image: a depth map! If a pixel needs blurring, average the pixels around it, using a greater number of neighbors the more it needs to be blurred (e.g., a Gaussian blur).
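A minimal post-process sketch of this idea, assuming a normalized depth map in [0, 1] and using a simple box average in place of the Gaussian mentioned on the slide; the function name and the mapping from depth difference to blur radius are illustrative choices.

```python
import numpy as np

def depth_of_field_blur(image: np.ndarray, depth: np.ndarray,
                        focal_depth: float, max_radius: int = 4) -> np.ndarray:
    """Blur each pixel of an H x W x 3 image over a square neighborhood
    whose radius grows with |depth - focal_depth| (depth in [0, 1])."""
    h, w = depth.shape
    out = np.empty_like(image)
    # Per-pixel blur radius derived from the depth map.
    radius = np.clip(np.abs(depth - focal_depth) * max_radius,
                     0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

rng = np.random.default_rng(0)
blurred = depth_of_field_blur(rng.random((32, 32, 3)),
                              rng.random((32, 32)), focal_depth=0.5)
```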

Depth of Field References
- cook.pdf (ACM Digital Library: Cook, Porter & Carpenter, "Distributed Ray Tracing", SIGGRAPH 1984)
- phics/Week3/distributed-final.pdf
- signments/pres/slides/DRT.ppt