1 Notes
- Movie today (compositing etc.)

2 Example

3 Camera Control
- Text: 3.4, 4.1
- Three situations to think about:
  - CG film: everything is synthetic, so control the camera however you want (artistic control, or algorithms plus artistic tweaking)
  - Video games: fully automatic control; the algorithm has to be smart enough to (almost) always be useful
  - Compositing VFX onto real footage: have to make sure the CG camera matches the parameters and motion of the real camera

4 CG film camera
- Simplest case: fix the camera once for the shot
  - Too many moving shots are distracting anyhow... Alexander Nevsky vs. The Constant Gardener
- More generally: just more motion curves
  - But how to parameterize?
- Can view the camera as simply a 4x4 matrix transformation, but using a separate spline for each matrix entry is horrid
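A tiny illustration of the last point, under my own illustrative setup (numpy, rotations about the y axis): interpolating the entries of two camera rotation matrices independently, which is what per-entry splines do between keyframes, produces a matrix that is no longer a rotation.

```python
import numpy as np

def Ry(a):
    # Rotation about the y axis by angle a (radians).
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

R0, R1 = Ry(0.0), Ry(np.pi / 2)   # two camera orientations, 90 degrees apart
Rmid = 0.5 * (R0 + R1)            # what a per-entry spline gives halfway between keys

print(np.linalg.det(Rmid))        # ~0.5, not 1: the interpolated "camera" shears and shrinks
```

A determinant of 0.5 means the halfway camera squashes the scene, which is one reason position, orientation, and projection are parameterized separately on the following slides.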

5 Peeling off camera parameters
- Essentially always want to separate out the amount of perspective = field-of-view = focal length = ...
  - It should only change for a specific cinematographic reason (zooms, etc.)
  - I.e. perspective projection is a separate part of the parameterization
- Similarly, depth-of-field and the focal plane are usually fixed, except for some cinematographic effects
- Separate out skews (avoid them, except for a few cheesy effects)
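As a sketch of this separation, here is a standard perspective projection matrix built only from field-of-view, aspect ratio, and near/far planes (the gluPerspective form); the view transform, handled on the following slides, never enters into it. The function name and argument choices are mine, not from the course.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Projection depends only on the lens-like parameters, not on camera pose.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)  # cotangent of half the vertical FOV
    return np.array([
        [f / aspect, 0.0, 0.0,                          0.0                           ],
        [0.0,        f,   0.0,                          0.0                           ],
        [0.0,        0.0, (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                         0.0                           ],
    ])
```

The full camera matrix is then projection @ view: the view part animates over the shot, while the field-of-view only changes when the shot calls for a zoom.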

6 More peeling
- Usually separate out selection of the image plane
  - Scaling is actually already handled by perspective, but there's no harm in allowing a redundant control
  - Normally the image is centred, though for some architectural shots you can keep vertical lines vertical...
  - Zooms
- Left with specifying the position and orientation of the camera

7 Possible parameterizations
- Could just do motion curves for position (x, y, z) and Euler angles (heading, pitch, roll)
  - Intensely annoying to keep an object of interest in the centre of view
- Could instead do LOOKFROM (x, y, z), LOOKAT (x, y, z), and VUP (x, y, z)
  - Position is at LOOKFROM
  - Camera z-direction is parallel to LOOKAT - LOOKFROM
  - Camera x-direction is VUP × CAMZ (careful!)
  - Camera y-direction is CAMZ × CAMX
  - Usually VUP is (0, 1, 0), but roll can be controlled by moving it around, or by a separate roll angle
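A minimal numpy sketch of this construction, following the slide's conventions (other systems flip the sign of CAMZ); the helper names are illustrative.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def camera_frame(lookfrom, lookat, vup=(0.0, 1.0, 0.0)):
    lookfrom, lookat, vup = (np.asarray(x, dtype=float) for x in (lookfrom, lookat, vup))
    camz = normalize(lookat - lookfrom)    # camera z-direction: parallel to LOOKAT - LOOKFROM
    camx = normalize(np.cross(vup, camz))  # camera x-direction: VUP x CAMZ (order matters!)
    camy = np.cross(camz, camx)            # camera y-direction: CAMZ x CAMX
    return lookfrom, camx, camy, camz      # position plus an orthonormal frame
```

Rolling the camera then amounts to rotating VUP about CAMZ, or layering a separate roll angle on top, as the slide notes.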

8 Special Cases
- Circling around a target
  - Parameterize target position (x, y, z), distance from target, and angles (heading, pitch, roll)
  - Very useful for interactive modeling too
- Over the shoulder
- Following the path of an object
- All of these are especially useful for video games
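A minimal sketch of the circling-around-a-target case, assuming spherical-style parameters (heading around the vertical axis, pitch above the horizon); the exact parameter names and axis conventions are my assumptions.

```python
import numpy as np

def orbit_camera(target, distance, heading, pitch):
    """heading and pitch in radians; roll can be layered on afterwards."""
    target = np.asarray(target, dtype=float)
    # Offset on a sphere of radius `distance` around the target.
    offset = distance * np.array([
        np.cos(pitch) * np.sin(heading),
        np.sin(pitch),
        np.cos(pitch) * np.cos(heading),
    ])
    lookfrom = target + offset
    # Orientation falls out of the LOOKFROM/LOOKAT/VUP construction on the previous slide.
    return lookfrom, target
```

Animating heading with a motion curve while the other parameters stay fixed gives the circling shot.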

9 Over the shoulder
- Glue the camera to the object's position motion curve, or to a displacement from it
- Either use the object's orientation for the camera
- Or get CAMZ from the motion curve tangent and use VUP to define CAMX and CAMY
- Maybe an additional roll angle for a feeling of inertia or G-forces
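A small sketch of the first option (reuse the object's orientation), assuming the object's motion curves supply a position and a 3x3 world-from-object rotation each frame; the shoulder offset is an illustrative value, not one from the course.

```python
import numpy as np

def shoulder_camera(obj_pos, obj_rot, local_offset=(0.6, 1.5, -2.5)):
    obj_pos = np.asarray(obj_pos, dtype=float)
    obj_rot = np.asarray(obj_rot, dtype=float)
    # The offset is expressed in the object's frame, so the camera is glued to
    # both the object's position curve and its orientation.
    lookfrom = obj_pos + obj_rot @ np.asarray(local_offset, dtype=float)
    camx, camy, camz = obj_rot[:, 0], obj_rot[:, 1], obj_rot[:, 2]  # reuse the object's frame
    return lookfrom, camx, camy, camz
```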

10 Path-following camera
- Simplest approach: reuse the object trajectory for the camera, just with a lag
  - Camera position t frames behind the object
    - Doesn't handle acceleration/deceleration well!
  - Camera position distance d behind the object
    - Need to do arc-length parameterization
- But could be too jerky for comfort
  - Need to smooth out the trajectory
  - Replace control points with weighted averages
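A sketch of the "distance d behind the object" variant, assuming the object's trajectory has been sampled densely enough that linear interpolation between samples is acceptable; this is the arc-length step only, not the smoothing pass.

```python
import numpy as np

def camera_behind(samples, t_index, d):
    """samples: (N, 3) points on the object's trajectory, in time order.
    t_index: index of the object's current sample.  d: trailing arc length."""
    samples = np.asarray(samples, dtype=float)
    seg = np.linalg.norm(np.diff(samples, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])             # arc length at each sample
    s_cam = max(s[t_index] - d, 0.0)                        # camera sits d units of arc length back
    i = int(np.searchsorted(s, s_cam, side="right")) - 1    # segment containing s_cam
    i = min(max(i, 0), len(samples) - 2)
    a = 0.0 if seg[i] == 0.0 else (s_cam - s[i]) / seg[i]   # fraction along that segment
    return (1.0 - a) * samples[i] + a * samples[i + 1]
```

For orientation, this position can be fed into the LOOKFROM/LOOKAT construction from slide 7 with the object's current position as LOOKAT, which is what the next slide recommends over raw trajectory tangents; smoothing, per the last bullet, would replace control points with weighted averages of their neighbours before this step.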

11 Path-following camera orientation
- CAMZ should point forwards along the trajectory
- Simplest approach: use the trajectory tangent (first derivative of the curve)
  - But that doesn't really point at the object
  - So use the LOOKAT/VUP approach instead
- Again, an additional roll angle is useful for controlling the feeling of inertia and G-forces

12 Image Space Constraints
- Another approach to camera specification
- Specify some world-space points that must stay at some image-space points
- From there, calculate the camera parameters
- Special cases are tedious to derive
- Better approach: Gleicher and Witkin, "Through-the-Lens Camera Control", SIGGRAPH '92

13 Through-the-Lens
- The problem: image-space constraints on world-space points are highly nonlinear
  - Directly solving for the camera parameters is painful
- Instead, differentiate the constraints
  - Constrain the time derivative of the camera parameters to follow the time derivative of the constraints
  - The constraint on derivatives is linear: f(p) = 0 becomes (df/dp)(p) · dp/dt = 0
- If there are not enough constraints, minimize the camera change in some way: constrained optimization
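A minimal numpy sketch of the linear step, assuming a Jacobian J of the image-space constraint function with respect to the camera parameters is available (finite differences are enough for a sketch). This is a pseudoinverse simplification of the idea, not Gleicher and Witkin's exact constrained-optimization formulation.

```python
import numpy as np

def camera_velocity(J, dh):
    """J: (m, n) Jacobian d(image-space points)/d(camera parameters).
    dh: (m,) desired image-space velocities.
    Returns dq, a camera-parameter velocity satisfying J dq ~ dh."""
    dq, *_ = np.linalg.lstsq(J, dh, rcond=None)
    return dq

# Integrate over a timestep: q_next = q + dq * dt, then re-evaluate J at q_next.
```

When the system is underdetermined, np.linalg.lstsq returns the minimum-norm dq, i.e. the smallest camera change that satisfies the differentiated constraints; the full method expresses this as an explicit quadratic objective with Lagrange multipliers.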