Internet Vision - Lecture 3 Tamara Berg Sept 10

New Lecture Time: Mondays 10:00am-12:30pm in 2311. Monday (9/15) we will have a general Computer Vision & Machine Learning review. Please look at the papers and decide which one you want to present by Monday – read the topics/titles/abstracts to get an idea of which ones interest you.

Thanks to Lalonde et al for providing slides!

Algorithm Outline

Inserting objects into images Have an image and want to add realistic looking objects to that image

Inserting objects into images User picks a location where they want to insert an object

Inserting objects into images Based on some properties calculated from the image, possible objects are presented.

Inserting objects into images User selects which object to insert and the object is placed in the scene at the correct scale for the location

Inserting objects into images – Possible approaches Insert a clip art object Insert a clip art object with some idea of the environment Insert a rendered object with full model of the environment

Some objects will be easy to insert because they already “fit” into the scene

Collect a large database of objects. Let the computer decide which examples are easy to insert. Allow the user to select only among those.

When will an object “fit”? 1.) When the lighting conditions of the scene and object are similar 2.) When the camera pose of the scene & object match

2D vs 3D Use 3d information for: 1.) Annotating objects in the clip-art library with camera pose 2.) Estimating the camera pose in the query image 3.) Computing illumination context in both library & query images

Phase 1 - Database Annotation For each object we want: – Estimate of its true size and the camera pose it was captured under – Estimate of the lighting conditions it was captured under

Phase 1 - Database Annotation Estimate object size Objects closer to the camera appear larger than objects further from the camera

Phase 1 - Database Annotation Estimate object size *If* you know the camera pose, then you can estimate the real height of an object from its location in the image and its pixel height
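This relation can be sketched in code. Assuming zero camera roll and an object resting on the ground plane, the real height follows from similar triangles against the horizon row; the function name and specific numbers below are illustrative, not from the paper:

```python
def object_height(h_cam, v_horizon, v_top, v_bottom):
    """Estimate an object's real height from image measurements.

    Assumes the object rests on the ground plane and the camera has
    zero roll. Pixel rows (v) increase downward, so v_bottom > v_top.
    h_cam is the camera height above the ground, in the same unit as
    the returned height.
    """
    if v_bottom <= v_horizon:
        raise ValueError("object base must lie below the horizon")
    # Similar triangles: the object's pixel extent relates to the
    # camera height through distances measured from the horizon row.
    return h_cam * (v_bottom - v_top) / (v_bottom - v_horizon)

# Camera at 1.6 m, horizon at row 200, object spans rows 250-400:
print(object_height(1.6, 200.0, 250.0, 400.0))  # 1.2 (metres)
```

A handy sanity check: an object whose top pixel coincides with the horizon row comes out exactly at camera height.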

Phase 1 - Database Annotation Estimate object size Annotate objects with their true heights and resize examples to a common reference size

Phase 1 - Database Annotation Estimate object size & camera pose Don’t know camera pose or object heights! Trick - Infer camera pose & object heights across all object classes in the database given only the height distribution for one class

Phase 1 - Database Annotation Estimate object size & camera pose Start with known heights for people

Phase 1 - Database Annotation Estimate object size & camera pose Estimate camera pose for images with multiple people

Phase 1 - Database Annotation Estimate object size & camera pose Use these images to estimate a prior over the distribution of poses How do people usually take pictures? Standing on the ground at eye level.

Phase 1 - Database Annotation Estimate object size & camera pose Use the learned pose distribution to estimate heights of other object categories that appear with people. Iteratively use these categories to learn more categories. Annotate all objects in the database with their true size and originating camera pose.
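The bootstrapping above can be sketched in a few lines. For simplicity this assumes a single fixed camera height (1.6 m, roughly eye level) rather than the full pose distribution the paper learns, and the data layout and function names are made up for illustration:

```python
CAM_HEIGHT = 1.6  # assumed eye-level camera height, in metres

def horizon_from_object(height, v_top, v_bottom):
    """Invert the size/pose relation: given an object's known real
    height and its pixel extent, recover the image's horizon row."""
    return v_bottom - CAM_HEIGHT * (v_bottom - v_top) / height

def bootstrap_heights(images, known_heights):
    """images: list of dicts mapping category -> (v_top, v_bottom).
    Repeatedly estimate each image's horizon from categories whose
    height is already known, then use that horizon to assign heights
    to the remaining categories, until nothing new can be inferred."""
    known = dict(known_heights)
    changed = True
    while changed:
        changed = False
        for img in images:
            # Horizon estimates from every already-annotated object.
            anchors = [horizon_from_object(known[c], vt, vb)
                       for c, (vt, vb) in img.items() if c in known]
            if not anchors:
                continue
            v0 = sum(anchors) / len(anchors)
            for c, (vt, vb) in img.items():
                if c not in known:
                    known[c] = CAM_HEIGHT * (vb - vt) / (vb - v0)
                    changed = True
    return known
```

Starting from people alone, an image containing a person and a car fixes the car height, which in turn unlocks images containing cars and lamp posts, and so on.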

Phase 1 - Database Annotation Estimate object size & camera pose

Phase 1 - Database Annotation For each object we want: – Estimate of its true size and the camera pose it was captured under – Estimate of the lighting conditions it was captured under

Phase 1 - Database Annotation Estimate lighting conditions Estimate which pixels are ground, vertical, or sky – treated as a black box for now (we'll cover this paper later in the course)

Phase 1 - Database Annotation Estimate lighting conditions Distribution of pixel colors
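One concrete way to represent this "distribution of pixel colors" is a normalized color histogram per geometric region. The sketch below assumes the ground/vertical/sky labels come from the black-box classifier above; it is an illustrative descriptor, not necessarily the paper's exact one:

```python
import numpy as np

def illumination_context(image, labels, bins=8):
    """image: H x W x 3 uint8 array. labels: H x W array with
    0 = ground, 1 = vertical, 2 = sky. Returns one normalized joint
    RGB histogram per geometric region."""
    contexts = []
    for region in (0, 1, 2):
        # Boolean-mask indexing gathers that region's pixels as N x 3.
        pixels = image[labels == region].astype(float)
        hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                                 range=[(0, 256)] * 3)
        total = hist.sum()
        contexts.append(hist / total if total else hist)
    return contexts
```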

Phase 2 – Object Insertion Query Image

Phase 2 – Object Insertion User specifies the horizon line – used to calculate the camera pose with respect to the ground plane (a higher horizon means the camera is tilted down, a lower one means it is tilted up). Illumination context is calculated in the same way as for the database images.

Phase 2 – Object Insertion Insert an object into the scene that has matching lighting, and camera pose to the query image
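A simple matching score could combine both cues: a chi-square distance between the illumination histograms plus a penalty on the horizon-row difference as a proxy for camera pitch. The weighting here is an illustrative choice, not the paper's:

```python
import numpy as np

def match_score(query_ctx, obj_ctx, query_horizon, obj_horizon,
                pose_weight=0.01):
    """Lower is better. query_ctx / obj_ctx are lists of normalized
    histograms (one per geometric region); the horizons are pixel
    rows in their respective images."""
    chi2 = 0.0
    for p, q in zip(query_ctx, obj_ctx):
        denom = p + q
        mask = denom > 0  # skip empty bins to avoid division by zero
        chi2 += 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])
    return chi2 + pose_weight * abs(query_horizon - obj_horizon)
```

Ranking the whole library by this score and showing only the best matches is what lets the user pick from objects that already "fit".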

Phase 2 – Object Insertion But wait, it still looks funny!

Phase 2 – Object Insertion Shadows are important!

Phase 2 – Object Insertion

Shadow Transfer
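The slide leaves the details to the figure, but the simplest stand-in for shadow compositing is multiplicative darkening of the scene under a shadow mask carried over from the object's source image; the names and the fixed strength below are illustrative:

```python
import numpy as np

def composite_shadow(scene, shadow_mask, strength=0.5):
    """scene: H x W x 3 float array in [0, 1]. shadow_mask: H x W
    float array in [0, 1] giving shadow opacity at each pixel,
    already warped into the target image. Returns the darkened
    scene; real shadow transfer would also respect scene geometry."""
    attenuation = 1.0 - strength * shadow_mask[..., None]
    return scene * attenuation
```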

Categorize images for easy selection in user interface

Big Picture It's all about the data! Use lots of data to turn a hard problem into an easier one! – Placing "my car" in a scene is much harder than placing "some car" in a scene. Allow the computer to choose from among many examples of a class to find the easy ones.