TEMPLATE DESIGN © 2008 www.PosterPresentations.com

Practical Infrared Object Tracking with Wii Remotes
Sage Browning, Phillip Weber, Jürgen Schulze
California Institute for Telecommunications and Information Technology, University of California San Diego

Overview
Goal: to develop a low-cost, high-accuracy method of tracking objects in 3D space using infrared LEDs and cameras, for use as human interface devices (HIDs) when working with large displays.

The Wii Remote as a Sensor
The Nintendo Wii Remote contains, among other things:
- eleven buttons
- three accelerometers
- Bluetooth connectivity
- an infrared camera
- a basic image processor
These components, coupled with a low price and the availability of Wii remote libraries in a variety of programming languages, make it an attractive sensor (or toy).

Accelerometers: The remote's three accelerometers show the direction of gravity and are used to determine pitch and roll.

Fig. 1: Demonstrates return values for a Wii remote in a continuous roll [1].

Infrared Camera: The Wii remote's camera reads image data, and an internal processor determines which infrared points are worthy of tracking. Up to four infrared points are tracked; their radii and pixel locations in the 1024-by-768-pixel image plane are sent via Bluetooth.

Fig. 2: Wii remote vision [2].

A Trigonometric Approach
The basic model for a trigonometric setup requires that the HID be seen by at least two cameras at any given time for tracking to work. This experiment consisted of two Nintendo Wii remotes aimed at a common area. The remotes return the pixel locations of two infrared LEDs fixed in a horizontal plane on the front of a third Wii remote, which in turn returns accelerometer values for pitch [3]. Calculations on these return values find where the Wii remote is pointed on a screen.

Given the pixel values p_i of a single point:

  θ_i = Ø_max (p_i / p_max − 0.5)
  y = d / (1/tan(θ_1 + Ø_1) + 1/tan(θ_2 + Ø_2))
  x = y / tan(θ_1 + Ø_1) − 0.5 d
  z = y / tan(θ_3 + Ø_3)

where
  θ_i = difference in degrees from the center of the camera to the point being tracked
  Ø_i = difference in degrees from the center of the camera to the plane of the screen
  (x, y, z) = location in real space, in units of d, with the origin at the point directly between the two Wii remotes
  d = distance between the two cameras

Given (x, y, z) of two points:

  Ø_x = tan⁻¹((y_1 − y_2) / (x_1 − x_2))
  Δx = y tan(Ø_x)
  Δz = y tan(Ø_pitch)
  X = x + Δx
  Z = z + Δz

where
  Ø_x = difference from the normal in reference to the screen
  Δx, Δz = difference between the absolute location and the pointed location
  X, Z = final location pointed to on screen

Fig. 3: Geometric representation of the camera setup as seen from above: finding x_1, y_1 (left) and the horizontal direction pointed (right). z_1 and vertical rotation are found similarly.

Fig. 4: View from above.

Problems with a Trigonometric Approach
Limited Working Space: With a maximum vertical viewing angle of 27º and a horizontal viewing angle of 45º, and with the requirement that a point be in view of at least two cameras, the Wii remote provides a relatively small space to work in.

Loss of Tracking: The point of placing two IR LEDs on the front of a Wiimote is to capture rotation in the horizontal plane (yaw), which cannot be tracked by accelerometers (as they only determine the direction of down). However, when the remote is rolled 90º so that the IR LEDs line up vertically, yaw tracking is lost: in the picture, only the small horizontal movement is perceived.
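The equations of the trigonometric approach can be sketched in code. This is a hypothetical illustration, not the authors' implementation: it assumes Ø_max is the camera's 45º horizontal field of view, pixel coordinates run from 0 to 1024, and the remote's position is taken as the midpoint of the two LEDs; all names are illustrative.

```python
import math

P_MAX = 1024.0            # horizontal pixel range of the Wii IR camera
FOV = math.radians(45.0)  # assumed horizontal field of view (Ø_max)

def pixel_to_angle(p):
    """theta_i = Ø_max * (p_i / p_max - 0.5): angular offset of a tracked
    point from the camera's optical axis, in radians."""
    return FOV * (p / P_MAX - 0.5)

def triangulate(p1, p2, phi1, phi2, d):
    """Intersect the viewing rays of two cameras a distance d apart.
    phi1, phi2 are each camera's fixed mounting angle relative to the
    screen plane; the origin lies midway between the two cameras."""
    a1 = pixel_to_angle(p1) + phi1
    a2 = pixel_to_angle(p2) + phi2
    y = d / (1.0 / math.tan(a1) + 1.0 / math.tan(a2))
    x = y / math.tan(a1) - 0.5 * d
    return x, y

def pointed_location(x1, y1, x2, y2, z, phi_pitch):
    """From the 3D positions of the two LEDs, recover yaw
    (Ø_x = atan((y1 - y2) / (x1 - x2))) and project to the screen:
    X = x + y*tan(Ø_x), Z = z + y*tan(Ø_pitch)."""
    phi_x = math.atan((y1 - y2) / (x1 - x2))
    x = (x1 + x2) / 2.0  # remote position taken as the LED midpoint
    y = (y1 + y2) / 2.0
    X = x + y * math.tan(phi_x)
    Z = z + y * math.tan(phi_pitch)
    return X, Z
```

The same triangulation applied to the vertical pixel coordinate gives z, as the poster notes for Fig. 3.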
POSIT Method
In order to achieve complete pose tracking and to increase the tracked area, a new algorithm was used [4]. DeMenthon's POSIT code is designed to quickly estimate the location and orientation of a non-coplanar rigid object in 3D space, given the object's dimensions and the camera lens properties; the return values are a rotation matrix and a translation vector. Unfortunately, POSIT was designed for an object with many more points than is practical for a handheld device: while DeMenthon's paper indicates a low error, it uses a cube with eight points. Our handheld device with four points has a much greater error, giving the effect of a jittery object that constantly moves about the screen.

Final Approach
Instead of creating our own system, we shifted to extending a pre-existing project [5] to multiple Wii remotes, in place of the single one the original project was designed for. This allows for a much greater tracked area. Further, this system makes use of both infrared and accelerometer data to achieve a steadier tracking algorithm. Under this setup, a Wii remote with infrared LEDs in a non-coplanar arrangement on the front is tracked by at least one camera at any given time. Though only one camera is needed to compute the position and rotation of the remote, overlap between cameras provides a buffer so that all IR points are visible to at least one camera during any given pose calculation.

The program was further extended with a function that calculates the initial camera offsets, since the cameras cannot be assumed to point straight forward and level. The offsets are found by placing a rigid object on a level plane at the desired origin and finding the pose of the object; since the object's pose is already known, we are in fact finding the pose of the cameras themselves. The rotation and translation offset vectors are then written to file along with the corresponding Wii remote's unique MAC address, so that configuration and identification of the Wii remotes need only be rerun when camera positions change. Once calculated, these offsets are taken into account in all subsequent pose calculations.

References
[1] DarwiinRemote.
[2] Cwiid Library.
[3] Wiiuse Library, Michael Laforrest.
[4] DeMenthon, D. & Davis, L. S. Model-Based Object Pose in 25 Lines of Code. International Journal of Computer Vision, 15, 1995.
[5] Oliver Kreylos, Wiimote Hacking.
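The camera-offset bookkeeping in the Final Approach amounts to composing rigid transforms. Here is a minimal sketch, reduced to 2D (yaw plus translation) for brevity where the real system uses POSIT's full rotation matrix and 3D translation vector; function names are illustrative, not taken from the actual program.

```python
import math

# Poses are 2D rigid transforms (yaw, tx, ty) meaning p' = R(yaw) @ p + t.

def compose(a, b):
    """Apply transform b, then transform a."""
    th_a, x_a, y_a = a
    th_b, x_b, y_b = b
    c, s = math.cos(th_a), math.sin(th_a)
    return (th_a + th_b, c * x_b - s * y_b + x_a, s * x_b + c * y_b + y_a)

def invert(t):
    """Inverse transform: p = R(-yaw) @ (p' - t)."""
    th, x, y = t
    c, s = math.cos(th), math.sin(th)
    return (-th, -(c * x + s * y), -(-s * x + c * y))

def calibrate(measured_pose_of_origin_object):
    """A rigid object with known (identity) pose sits at the desired world
    origin, so its measured pose IS the camera's own offset transform
    (world -> camera); this is what gets written to file with the
    remote's MAC address."""
    return measured_pose_of_origin_object

def world_pose(camera_offset, measured):
    """Strip the stored camera offset from a later measured pose, so every
    camera reports poses in the same world frame."""
    return compose(invert(camera_offset), measured)
```

Because the calibration object's pose is the identity, no extra algebra is needed at calibration time; the inversion happens once per subsequent pose calculation.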