
Outdoor Motion Capturing of Ski Jumpers using Multiple Video Cameras
Atle Nes, Faculty of Informatics and e-Learning, Trondheim University College / Department of Computer and Information Science, Norwegian University of Science and Technology

General description
Task: Create a cheap and portable video camera system that can be used to capture and study the 3D motion of ski jumping during take-off and early flight.
Goals:
- More reliable, direct and visual feedback
- More effective outdoor training
- Longer ski jumps!

2D  3D solution Multiple video cameras have been placed strategically around in the ski jumping hill capturing image sequences from different views synchronously. Allows us to reconstruct 3D coordinates if the same physical point is detected in at least two camera views.

Camera equipment
- 3 x AVT Marlin F080B (CCD-based)
- FireWire/IEEE 1394a (no frame grabber card needed)
- 640 x 480 @ 30 fps
- 8-bit / 256 grey levels (colour cameras not chosen because of intensity-interpolating Bayer patterns)
- Exchangeable C-mount lenses (fixed and zoom)

Camera equipment (cont.)
Video data (3 x 9 MB/s = 27 MB/s):
- 2 GB RAM (5 seconds buffered to memory)
- 2 x WD Raptor 10,000 rpm drives in RAID-0 (enables continuous capture)
Extended range:
- 3 x 400 m optical fibre (full-duplex FireWire)
- Power from outlets around the hill
- 400 m BNC synchronization cable
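As a quick check, the 9 MB/s per-camera figure follows directly from the resolution, bit depth and frame rate listed on the previous slide:

\[ 640 \times 480 \,\text{px} \times 1 \,\text{B/px} \times 30 \,\text{fps} = 9{,}216{,}000 \,\text{B/s} \approx 9 \,\text{MB/s}, \qquad 3 \times 9 \,\text{MB/s} = 27 \,\text{MB/s} \]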

Camera setup. Figure: diagram of the camera setup, showing the video data + control signal connections and the synch pulse line.

Direct Linear Transformation. Figure: pinhole camera geometry. An object point O = (x, y, z) in object space (X, Y, Z) projects through the projection centre N = (u0, v0, d), located at (x0, y0, z0), onto the image point I = (u, v, 0) in the image plane (U, V); the principal point is P = (u0, v0, 0) in image space (U, V, W). Based on the pinhole model: linear image formation.

DLT: Fundamentals. Classical collinearity equations; standard DLT equations (a.k.a. the 11-parameter solution), Abdel-Aziz and Karara 1971.
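The 11-parameter DLT equations referred to here are commonly written in the following standard form, using the slide's object coordinates (x, y, z) and image coordinates (u, v); the slide's own notation may differ slightly:

\[
u = \frac{L_1 x + L_2 y + L_3 z + L_4}{L_9 x + L_{10} y + L_{11} z + 1}, \qquad
v = \frac{L_5 x + L_6 y + L_7 z + L_8}{L_9 x + L_{10} y + L_{11} z + 1}
\]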

DLT: Camera Calibration. Minimum n = 6 calibration points for each camera (2n equations) to solve for the 11 DLT parameters (unknowns).
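A minimal sketch of this calibration step (an illustration, not the project's actual code): each calibration point contributes two linear equations in L1..L11, and with n ≥ 6 points the over-determined system is solved by linear least squares.

```python
import numpy as np

def dlt_calibrate(obj_pts, img_pts):
    """Estimate the 11 DLT parameters L1..L11 of one camera from
    n >= 6 known calibration points (linear least squares).
    obj_pts: (n, 3) object coordinates (x, y, z)
    img_pts: (n, 2) measured image coordinates (u, v)
    """
    A, b = [], []
    for (x, y, z), (u, v) in zip(obj_pts, img_pts):
        # Multiplying the DLT equations through by their denominator gives
        # two equations that are linear in the parameters L1..L11:
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        b.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return L  # (11,) vector of DLT parameters
```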

DLT: Point reconstruction. Minimum m = 2 camera views of each reconstructed image point (2m equations). Usually a redundant set (more equations than unknowns) → Linear Least Squares Method for the 3 object coordinates (unknowns).
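A matching sketch of the reconstruction step (again illustrative, not the original implementation): with each camera's 11 DLT parameters known, every view of the same marker adds two linear equations in the unknown object coordinates.

```python
import numpy as np

def dlt_reconstruct(Ls, img_pts):
    """Reconstruct one 3D point from m >= 2 camera views.
    Ls:      list of m (11,) DLT parameter vectors (one per camera)
    img_pts: list of m (u, v) image coordinates of the same physical point
    Returns the least-squares estimate of (x, y, z).
    """
    A, b = [], []
    for L, (u, v) in zip(Ls, img_pts):
        # Rearranging the DLT equations gives two equations linear in (x, y, z):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```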

Direct Linear Transform
- Loved by the computer vision community (simplicity)
- Hated by the photogrammetrists (lack of accuracy)
DLT indirectly solves both the
- Intrinsic/Interior parameters (3): principal distance (d), principal point (u0, v0)
- Extrinsic/Exterior parameters (6): camera position (x0, y0, z0), pointing direction [ R(ω, φ, κ) ]

Lens distortion / Optical errors
Non-linearity is commonly introduced by imperfect lenses (straight lines are no longer straight). It should be taken into account for improved accuracy.
Additional parameters (7):
- radial distortion (K1, K2, K3)
- tangential distortion (P1, P2)
- linear distortion (AF, ORT)

Radial distortion (symmetric). Figure: barrel distortion, no distortion, and pincushion distortion shown in the image (U, V) plane.

Lens distortion / optical errors. Figure: tangential distortion (decentering) and linear distortion (affinity, orthogonality): non-square pixels / affinity and skewed image / non-orthogonality, shown in the image (U, V) plane.

Added nonlinear terms. Extended collinearity equations (Brown 1966, 1971).
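The distortion corrections behind these extended equations are usually written in the following standard (Brown) form, using the radial coefficients K1-K3 and decentering coefficients P1, P2 listed above; the slide's exact parameterisation, including the AF/ORT affinity terms, may differ:

\[
\begin{aligned}
\Delta u &= \bar{u}\,(K_1 r^2 + K_2 r^4 + K_3 r^6) + P_1\,(r^2 + 2\bar{u}^2) + 2 P_2\,\bar{u}\bar{v} \\
\Delta v &= \bar{v}\,(K_1 r^2 + K_2 r^4 + K_3 r^6) + P_2\,(r^2 + 2\bar{v}^2) + 2 P_1\,\bar{u}\bar{v}
\end{aligned}
\qquad \bar{u} = u - u_0,\;\; \bar{v} = v - v_0,\;\; r^2 = \bar{u}^2 + \bar{v}^2
\]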

Bundle Adjustment
- Requires a good initial parameter guess (for instance from a DLT calibration)
- Non-linear search: iterative solution using the Levenberg-Marquardt method
- Basically: update one parameter, keep the rest stable, see what happens … do this systematically
- Calibration points and intrinsic/extrinsic parameters can be separated blockwise
- The matrix has a sparse structure which can be exploited to lower the computation time
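A compact illustration of this joint refinement, under the simplifying assumption that each camera is described only by its 11 DLT parameters (no distortion terms). It uses SciPy's sparse-aware least_squares (a trust-region variant rather than a hand-written Levenberg-Marquardt), so the block-sparse Jacobian structure mentioned above can be exploited; all names and structure here are illustrative, not the original implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

def project(L, xyz):
    """Project 3D points with one camera's 11 DLT parameters."""
    x, y, z = xyz.T
    den = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / den
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / den
    return np.stack([u, v], axis=1)

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_uv):
    """Reprojection residuals for all observations (observed_uv: (m, 2) array)."""
    Ls = params[:n_cams * 11].reshape(n_cams, 11)
    pts = params[n_cams * 11:].reshape(n_pts, 3)
    proj = np.empty_like(observed_uv)
    for i, (c, p) in enumerate(zip(cam_idx, pt_idx)):
        proj[i] = project(Ls[c], pts[p][None, :])[0]
    return (proj - observed_uv).ravel()

def sparsity(n_cams, n_pts, cam_idx, pt_idx):
    """Each residual depends only on one camera block and one point block."""
    m = 2 * len(cam_idx)
    S = lil_matrix((m, n_cams * 11 + n_pts * 3), dtype=int)
    for i, (c, p) in enumerate(zip(cam_idx, pt_idx)):
        S[2 * i:2 * i + 2, c * 11:(c + 1) * 11] = 1
        S[2 * i:2 * i + 2, n_cams * 11 + p * 3:n_cams * 11 + (p + 1) * 3] = 1
    return S

def bundle_adjust(Ls0, pts0, cam_idx, pt_idx, observed_uv):
    """Refine camera parameters and 3D points jointly from initial guesses."""
    n_cams, n_pts = len(Ls0), len(pts0)
    x0 = np.hstack([np.ravel(Ls0), np.ravel(pts0)])
    res = least_squares(
        residuals, x0,
        jac_sparsity=sparsity(n_cams, n_pts, cam_idx, pt_idx),
        method="trf",  # sparse-aware trust-region solver
        args=(n_cams, n_pts, cam_idx, pt_idx, observed_uv))
    return (res.x[:n_cams * 11].reshape(n_cams, 11),
            res.x[n_cams * 11:].reshape(n_pts, 3))
```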

Detection of outliers. Calibration points with the largest errors are removed automatically or manually, resulting in a more stable geometry. Both image and object point coordinates are considered.
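One simple way to realise the automatic part of this step (a sketch only: the pixel threshold and the per-camera DLT projection are assumptions, and the real system also weighs errors in object space):

```python
import numpy as np

def keep_good_points(L, obj_pts, img_pts, max_error_px=2.0):
    """Flag calibration points whose reprojection error is acceptable.
    L: (11,) DLT parameters of one camera; obj_pts: (n, 3); img_pts: (n, 2).
    Returns a boolean mask of points to keep before re-calibrating.
    """
    x, y, z = obj_pts.T
    den = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / den
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / den
    err = np.hypot(u - img_pts[:, 0], v - img_pts[:, 1])  # pixel residuals
    return err <= max_error_px
```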

Overview
- Direct Linear Transformation is used to estimate the initial intrinsic and extrinsic parameter values for the 2D → 3D mapping.
- Bundle Adjustment is used to refine the parameters and geometry iteratively, including the additional parameters.
- Intrinsic & additional parameters are calibrated off-site (focal length, principal point, lens distortion).
- Extrinsic parameters are calibrated on-site (camera position & direction).

Calibration frame. A calibration frame was used for finding estimates of the intrinsic parameters. Exact coordinates in the hill were measured using differential GPS and a robotic land survey station. Points were made visible in the camera views using white marker spheres.

Video processing. Points must be automatically detected, identified and tracked over time and across different views. Reflective markers are placed on the ski jumper's suit, helmet and skis.
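As an illustration of the per-frame detection step only (not the project's actual pipeline; OpenCV 4 API assumed, and the threshold and area limits are placeholder values), bright reflective markers can be located roughly like this:

```python
import cv2
import numpy as np

def detect_markers(gray_frame, min_area=4, max_area=400):
    """Detect bright reflective markers in one 8-bit grayscale frame.
    Returns the (u, v) centroids of blobs within the given area range.
    """
    # Reflective markers appear much brighter than the background,
    # so a simple global threshold separates them.
    _, binary = cv2.threshold(gray_frame, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centroids)
```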

Video processing (cont.). Blur caused by the fast-moving jumpers (~100 km/h) is avoided by tuning aperture and integration time. Three cameras give redundancy in case of occluded/undetected points (epipolar lines). It is also possible to use information about the structure of the human body to identify relative marker positions.
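The epipolar-line check mentioned here can be sketched as follows (the fundamental matrix F between a camera pair is assumed to be available from the calibration, and the pixel threshold is illustrative):

```python
import numpy as np

def epipolar_distance(F, uv1, uv2):
    """Distance (in pixels) from a candidate point uv2 in view 2 to the
    epipolar line induced by uv1 from view 1; F is the 3x3 fundamental matrix.
    """
    p1 = np.array([uv1[0], uv1[1], 1.0])
    p2 = np.array([uv2[0], uv2[1], 1.0])
    line = F @ p1                      # epipolar line in view 2: a*u + b*v + c = 0
    a, b, _ = line
    return abs(p2 @ line) / np.hypot(a, b)

# A marker in view 2 is accepted as the match of uv1 if, for example,
# epipolar_distance(F, uv1, uv2) < 1.5 pixels.
```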

Granåsen ski jump arena

Visualization. Moving feature points are connected back onto a dynamic 3D model of a ski jumper. The model can be moved and controlled within a large static model of the ski jump arena.

Results
Reconstruction accuracy:
- Distance: meters
- Points in the hill: ~3 cm (x, y, z)
- Points on the ski jumper: ~5 cm (x, y, z)

Future work
Real-time capturing and visualization:
- Direct feedback to the jumpers
- Time-efficient algorithms
- Linear & closed-form solutions

Questions?