An Integrated System of 3D Motion Tracker and Spatialized Sound Synthesizer
John Thompson (Music), Mary Li (ECE), Michael Quinn (ECE)
University of California, Santa Barbara

Goal
Develop an interactive music synthesis system while exploring tracking and surveillance technologies, spatial music composition strategies, and sound synthesis techniques.

Hardware & Software
Unibrain Fire-i cameras
PC running Windows XP
Apple PowerBook and G5
Intel's OpenCV libraries
Max/MSP/Jitter
SuperCollider

Project Summary
2D tracking
Camera calibration
3D position calculation
Composition and sound synthesis

2D Tracking – Temporal Difference
Subtract the previous frame from the current frame to see what has changed.
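A minimal sketch of the differencing step, using OpenCV's Python bindings for illustration (the project itself used Intel's OpenCV C libraries); the camera index and threshold value are assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)               # illustrative camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that changed between consecutive frames show up in the difference.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray

cap.release()
```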

2D Tracking – Background Subtraction
Develop a background model.
Subtract the background from the current frame.
Objects not in the model show up in the difference.
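The slides don't name a specific background model; one common choice is a running average, sketched here (alpha is an illustrative learning rate):

```python
import cv2
import numpy as np

alpha = 0.02                      # learning rate for the background update
background = None

def subtract_background(gray):
    global background
    f = gray.astype(np.float32)
    if background is None:
        background = f.copy()
    # Update the model slowly so moving objects are not absorbed at once.
    cv2.accumulateWeighted(f, background, alpha)
    # Anything not in the model shows up in the absolute difference.
    return cv2.absdiff(f, background).astype(np.uint8)
```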

2D Tracking – Thresholding
Threshold values are chosen based on the variance of the background model.
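A sketch of what variance-based thresholding could look like, assuming a per-pixel background variance is available; the factor k is an illustrative choice, not from the slides:

```python
import numpy as np

def threshold_by_variance(diff, bg_var, k=2.5):
    """diff: |frame - background mean|; bg_var: per-pixel background variance."""
    sigma = np.sqrt(bg_var)
    # A pixel is foreground when it deviates by more than k standard deviations.
    return (diff > k * sigma).astype(np.uint8) * 255
```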

2D Tracking – Center of Mass
For now, we assume that only one object is being tracked, so the image center of mass approximates the object's center of mass. The center of mass is then sent to the 3D section.
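A sketch of the centroid computation via image moments, assuming a single tracked object as stated above; OpenCV's Python API is used for illustration:

```python
import cv2

def center_of_mass(motion_mask):
    m = cv2.moments(motion_mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                        # nothing moving in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])      # (x, y) centroid in pixels
```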

Camera Calibration
Purpose – preparation for 3D estimation from 2D images
Methods – MATLAB Camera Calibration Toolbox; Intel OpenCV calibration functions

Camera Calibration – Intrinsic Parameters
Focal lengths: fx, fy
Principal point: px, py
Distortions: radial and tangential distortion coefficients
DirectShow filter runs under MS Windows

Camera Calibration – Intrinsic Parameters
Defines pixel coordinate points with respect to the camera coordinate system: X_image = M_intr * X_camera
MATLAB Camera Calibration Toolbox

Camera Calibration – Extrinsic Parameters
Defines camera coordinate points with respect to the world coordinate system: X_camera = M_extr * X_world
OpenCV calibration routine (based on the intrinsic parameters)
[Figure: left, center, and right camera views]
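A hedged sketch of checkerboard calibration with OpenCV's Python API (the project used the OpenCV C routines and the MATLAB toolbox); the board dimensions and image filenames are hypothetical:

```python
import cv2
import numpy as np

pattern = (9, 6)                                    # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:   # hypothetical images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics: focal lengths and principal point in K, plus distortion coefficients.
# Extrinsics: one rotation/translation pair per view (camera w.r.t. the board).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```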

3D Tracking – Methods
Obtain 2D motion centroid information
Epipolar geometry
Least squares
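One standard way to combine the calibrated views with least squares is linear (DLT) triangulation, sketched below; the slides name the ingredients but not this exact routine. Each camera's 3x4 projection matrix P = K [R | t] comes from its intrinsic and extrinsic calibration:

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """points_2d: list of (x, y) centroids; proj_mats: list of 3x4 projection matrices."""
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        rows.append(x * P[2] - P[0])      # each view contributes two
        rows.append(y * P[2] - P[1])      # linear constraints on X
    A = np.stack(rows)
    # Homogeneous least squares: X is the right singular vector of A
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                   # back to Euclidean world coordinates
```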

3D Tracking – Results
[Figures: floor plan of the visible space; 18-point tracking example with X_world, Y_world, Z_world coordinates]

Tracking System Performance
Real-time average: 2.09 frames per second
System performance can be improved by:
1. Distributed computing: one PC for each camera
2. More cameras
3. Improved background segmentation

Trans-media Systems
Trans-media systems exist as independent engines behind artistic manifestations in diverse media.
Input: In our Motion Tracking System project, a motion-tracking algorithm that tracks objects within a sensor space serves as a principal component powering the trans-media system.
Transformation: In the middle layer, the data from the motion tracking system is interpreted and labeled. This data is then used to determine the activity and state of the sensor space.
Output: In the final stage of the trans-media system, specific media, such as sound, use the middle-layer data to inform their processes. The sound is projected into the sensor space. Interactivity is enhanced when the participants in the sensor space become aware of their relationship with the system.
Graphic notations and trans-media systems: John Cage, "Fontana Mix"

Spatial Composition Strategies – Sonic Nodes
A system of nodes is laid out in the virtual space, comprising Generative Nodes and Transformative Nodes.
Each node has an activation space surrounding it. Tracked objects activate nodes at various levels depending on the tracked object's measured distance from the node's center.
The nodes are in flux and adjust their positions over time to reflect the history of the space.

When a tracked object moves within the activation space of a particular node, the node executes its action; a sketch of this activation rule follows the table below.
Figure 1: Nodes with various musical functions are represented by colored circles. Different paths create unique realizations of phrase-level material in the mobile form.
Chord nodes: 34
Impulse nodes: 16
Sample playback nodes: 64
Convolution nodes: 360
Pitch-time shift nodes: 50
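A small illustrative sketch of that activation rule; the class name, linear falloff, and radius are assumptions, not the authors' implementation:

```python
import numpy as np

class SonicNode:
    def __init__(self, center, radius, action):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius          # extent of the activation space
        self.action = action          # e.g. play a chord, trigger a sample

    def update(self, obj_pos):
        # Activation level depends on the object's distance from the node center.
        d = np.linalg.norm(np.asarray(obj_pos) - self.center)
        if d < self.radius:
            level = 1.0 - d / self.radius   # full strength at the center, zero at the edge
            self.action(level)
```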

Pitch Sets in the Chord Nodes
[Table of four-note pitch sets; values not preserved in the transcript]
Thirty-four chordNodes are scattered in the virtual space. Seventeen of the chordNodes contain a unique four-note pitch set, and fourteen of the seventeen sets are unique in their normal order. Although the pitch sets are diverse, they are closely knit in their makeup. This lends a unified quality to the pitched verticalities of the sonic space.
As multiple users move throughout the space, the sonic material subtly shifts, melding the space into a cohesive flow. Tracked objects leave histories of their paths in the space, and the pitchSets transform in response. The space adapts its pitched contents to the actions of the users of the space.

Sound Spatialization
The system outputs quadraphonic audio distributed to speakers surrounding the sensor space. The position of the sounds within the sensor space is determined by the position of the tracked object (Figure 1).
Distance is simulated through the mixture of direct sound and reverberant sound. This ratio is dictated by the following formulas (SuperCollider notation):
Direct sound amplitude = 1 / zPosition.abs, i.e. 1 / |z|
Reverb sound amplitude = 1 / zPosition.abs.sqrt, i.e. 1 / sqrt(|z|)
[Figure 1: sensor space, speakers, object position]
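A sketch of this distance mix in Python; the formulas follow the slide, while the clamping epsilon is an added safeguard, not part of the original:

```python
import math

def distance_mix(z_position, eps=1e-3):
    z = max(abs(z_position), eps)       # avoid division by zero at the origin
    direct = 1.0 / z                    # direct sound falls off faster with distance
    reverb = 1.0 / math.sqrt(z)         # reverberant sound falls off more slowly
    return direct, reverb
```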

Future Work
Improve the system by enabling the tracking of multiple objects and by incorporating features such as shape, size, and color.
Improve integration with the musical synthesis system.

Special Thanks
Professor B.S. Manjunath
Professor G. Legrady
Professor J. Kuchera-Morin
NSF IGERT Program
Fellow IGERTers

Questions?