Cut-And-Stitch: Efficient Parallel Learning of Linear Dynamical Systems on SMPs. Lei Li, Computer Science Department, School of Computer Science, Carnegie Mellon University.

Presentation transcript:

Cut-And-Stitch: Efficient Parallel Learning of Linear Dynamical Systems on SMPs
Lei Li
Computer Science Department, School of Computer Science, Carnegie Mellon University

Motion stitching via effort minimization, with James McCann, Nancy Pollard and Christos Faloutsos [Eurographics 2008]
Parallel learning of linear dynamical systems, with Wenjie Fu, Fan Guo, Todd Mowry and Christos Faloutsos [KDD 2008]

Background: Motion Capture
Markers are placed on the human body; optical cameras capture the marker positions, which are translated into body-local coordinates.
Applications: movie, game, and medical industries

Outline
Background
Motivation: effortless motion stitching
Parallel learning with Cut-And-Stitch
Experiments and Results
Conclusion

Motivation
Given two human motion sequences, how do we stitch them together in a natural way (i.e., one that looks natural to the human eye)? E.g., walking to running.
Given a human motion sequence, how do we find the most natural stitchable motion in a motion-capture database?

Intuition
Laziness is a virtue: natural motion uses minimum energy.
Laziness-score (L-score) = energy used during stitching
Objective: minimize the L-score
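The L-score idea can be made concrete. Below is a minimal sketch that proxies the energy of a stitching path by its total squared acceleration (second differences); this is a plausible reading of "energy used during stitching", not necessarily the paper's exact definition.

```python
import numpy as np

def l_score(path):
    """Laziness score of a stitching path, proxied here by total squared
    acceleration (second differences of the positions). Illustrative only;
    the paper's exact energy definition may differ."""
    accel = np.diff(path, n=2, axis=0)   # discrete second derivative
    return float(np.sum(accel ** 2))

smooth = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # constant velocity
jerky = np.array([0.0, 2.0, 1.0, 3.0, 2.0])    # back-and-forth motion
```

A constant-velocity path has zero acceleration and hence zero L-score, while the jerky path costs strictly more energy, matching the "laziness" intuition.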

Example: taking off and landing

Example: a natural stitching (taking off to landing)

But how about this way? (taking off to landing)

Observations
Naturalness depends on smoothness.
Naturalness also depends on motion speed.

Proposed Method
Estimate the stitching path using Linear Dynamical Systems.

Proposed Method (cont'd)
Estimate the velocity and acceleration during the stitching, and compute the energy (defined as the L-score).

Proposed Method (cont'd)
Minimize the L-score over all candidate stitching hops (defined as the elastic L-score).

Example stitching
Link to video

Outline
Background
Motivation: effortless motion stitching
Parallel learning with Cut-And-Stitch
Experiments and Results
Conclusion

Parallel Learning for LDS
Challenge: learning a Linear Dynamical System is slow for long sequences.
Traditional method: Maximum Likelihood Estimation via the Expectation-Maximization (EM) algorithm.
Objective: parallelize the learning algorithm.
Assumption: shared-memory architecture.

Linear Dynamical System (aka Kalman filter)
Parameters: θ = (u0, V0, A, Γ, C, Σ)
Observations: y1 … yn
Hidden variables: z1 … zn
[Figure: chain graphical model, with z1 ~ N(u0, V0), transitions zi+1 ~ N(A·zi, Γ), and emissions yi ~ N(C·zi, Σ)]
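The generative model on this slide can be sketched as a toy simulator. The dimensions, dynamics, and noise levels below are illustrative choices, not values from the paper; only the parameter names θ = (u0, V0, A, Γ, C, Σ) follow the slide.

```python
import numpy as np

# Toy instantiation of theta = (u0, V0, A, Gamma, C, Sigma).
rng = np.random.default_rng(0)
u0, V0 = np.zeros(2), np.eye(2)          # prior on the first hidden state
A = np.array([[1.0, 1.0],                # transition: position += velocity
              [0.0, 1.0]])
Gamma = 0.01 * np.eye(2)                 # transition noise covariance
C = np.array([[1.0, 0.0]])               # observe the position component only
Sigma = 0.1 * np.eye(1)                  # observation noise covariance

def simulate(n):
    """Draw y_1 ... y_n from the generative model on the slide."""
    z = rng.multivariate_normal(u0, V0)
    ys = []
    for _ in range(n):
        ys.append(rng.multivariate_normal(C @ z, Sigma))
        z = rng.multivariate_normal(A @ z, Gamma)
    return np.array(ys)

y = simulate(100)   # a 100-frame, 1-D observed sequence
```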

Example
Given positions (e.g., of the left elbow over time), estimate the dynamics (i.e., the parameters).
[Figure: hidden states z1 … z6 emitting observations y1 … y6]

Traditional: How to learn LDS?

Sequential Learning (EM)
Compute P(z1 | y1)
[Plot: measured vs. estimated position of the left elbow over time]

Sequential Learning (EM)
From P(z1 | y1), compute P(z2 | y1, y2)
Intuition: z2 may be close to z1

Sequential Learning (EM)
From P(z2 | y1, y2), compute P(z3 | y1, y2, y3)

Sequential Learning (EM)
From P(z3 | y1, y2, y3), compute P(z4 | y1, …, y4)

Sequential Learning (EM)
From P(z4 | y1, …, y4), compute P(z5 | y1, …, y5)

Sequential Learning (EM)
From P(z5 | y1, …, y5), compute P(z6 | y1, …, y6)
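The forward sweep just walked through is the standard Kalman filter recursion. A minimal textbook-style sketch (not the paper's code):

```python
import numpy as np

def kalman_filter(ys, u0, V0, A, Gamma, C, Sigma):
    """Forward pass of the E-step: compute P(z_t | y_1, ..., y_t) for each t.
    Standard Kalman filter recursion; implementation details may differ
    from the paper's."""
    mus, Vs = [], []
    mu_pred, V_pred = u0, V0                      # prior for the first state
    for y in ys:
        # measurement update: fold in the new observation y_t
        S = C @ V_pred @ C.T + Sigma              # innovation covariance
        K = V_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)
        V = V_pred - K @ C @ V_pred
        mus.append(mu)
        Vs.append(V)
        # time update: predict the next hidden state
        mu_pred, V_pred = A @ mu, A @ V @ A.T + Gamma
    return np.array(mus), np.array(Vs)
```

Each iteration matches one slide above: the belief about z_t is computed from the belief about z_(t-1) plus the new observation, which is exactly why this pass is inherently sequential.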

Sequential Learning (EM)
From P(z6 | y1, …, y6), compute P(z5 | y1, …, y6)
Intuition: carry information from the future backward

Sequential Learning (EM)
From P(z5 | y1, …, y6), compute P(z4 | y1, …, y6)

Sequential Learning (EM)
From P(z4 | y1, …, y6), compute P(z3 | y1, …, y6)

Sequential Learning (EM)
From P(z3 | y1, …, y6), compute P(z2 | y1, …, y6)

Sequential Learning (EM)
From P(z2 | y1, …, y6), compute P(z1 | y1, …, y6)
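The backward sweep is the standard Rauch-Tung-Striebel (RTS) smoother, which refines each filtered estimate with information from the future. A sketch (again textbook form, not the paper's code), taking the filtered means and covariances from the forward pass as input:

```python
import numpy as np

def rts_smoother(mus, Vs, A, Gamma):
    """Backward pass of the E-step: given filtered estimates (mus, Vs),
    compute P(z_t | y_1, ..., y_n) for all t via the standard
    Rauch-Tung-Striebel recursion."""
    n = len(mus)
    mu_s, V_s = [None] * n, [None] * n
    mu_s[-1], V_s[-1] = mus[-1], Vs[-1]           # last state is already smoothed
    for t in range(n - 2, -1, -1):
        V_pred = A @ Vs[t] @ A.T + Gamma          # predicted covariance of z_{t+1}
        J = Vs[t] @ A.T @ np.linalg.inv(V_pred)   # smoother gain
        mu_s[t] = mus[t] + J @ (mu_s[t + 1] - A @ mus[t])
        V_s[t] = Vs[t] + J @ (V_s[t + 1] - V_pred) @ J.T
    return np.array(mu_s), np.array(V_s)
```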

Sequential Learning (EM)
From all the posteriors P(z1 | y1, …, y6), …, P(z6 | y1, …, y6), compute the sufficient statistics: E[zi], E[zi zi'], E[zi-1 zi']

Sequential Learning (EM)
With the sufficient statistics, compute θ ← argmaxθ likelihood(θ)
[Plot: reconstructed signal vs. measured position of the left elbow]
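As one concrete instance of the M-step, the transition matrix A has a closed-form update from the sufficient statistics listed above. The sketch below uses the statistic E[z_t z_(t-1)'], which is the transpose of the slide's E[z_(i-1) z_i']; it is the standard LDS-EM update for A only (the other parameters have analogous updates).

```python
import numpy as np

def m_step_A(Ezz, Ezz_lag):
    """M-step update for the transition matrix A.
    Ezz:     list of E[z_t z_t'] for t = 1..n
    Ezz_lag: list of E[z_{t+1} z_t'] for t = 1..n-1 (transpose of the
             slide's E[z_{i-1} z_i'])
    Standard LDS-EM update; a sketch of one of several parameter updates."""
    num = sum(Ezz_lag)          # sum over t of E[z_{t+1} z_t']
    den = sum(Ezz[:-1])         # sum over t of E[z_t z_t'], excluding the last
    return num @ np.linalg.inv(den)
```

For noise-free statistics generated by z_t = A z_(t-1), the update recovers A exactly, which is a quick sanity check on the formula.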


How to parallelize it?
Speed bottleneck: the sequential computation of the posteriors
[Figure: chain z1 … z6 with observations y1 … y6]

“Leap of faith”: start the computation without feedback from the previous node (cut), and reconcile later (stitch)

Proposed Method: Cut-And-Stitch
Start the computation without feedback from the previous node (cut); reconcile later (stitch)
[Figure: the chain z1 … z6 is cut into blocks, each with its own local parameters (υi, Φi, ηi, Ψi) and duplicated boundary states z'2, z'4]
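The cut step can be sketched as follows: partition the sequence into contiguous blocks and run the per-block E-step in parallel. The block partitioning is the slide's idea; `local_estep` below is a stand-in placeholder, not the paper's forward-backward code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def cut(ys, num_blocks):
    """Cut step (sketch): split the sequence into contiguous blocks so each
    processor can work on its block independently. The 'leap of faith' is
    that each block starts from a guessed prior rather than waiting for the
    exact message from its left neighbour."""
    return np.array_split(ys, num_blocks)

def local_estep(block):
    # placeholder for the per-block forward-backward E-step
    return block.mean(axis=0)

ys = np.arange(12.0).reshape(12, 1)   # toy 12-frame, 1-D sequence
blocks = cut(ys, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    local = list(pool.map(local_estep, blocks))
```

On a shared-memory machine (the paper's setting), each worker reads its block of observations directly; only the small boundary messages need reconciling afterwards.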

Cut-And-Stitch
Cut step: estimate the posteriors (E)
Intuition: compute P(z1 | y1), P(z3 | y3), P(z5 | y5), all blocks at once


Cut-And-Stitch
Cut step: estimate the posteriors (E)
Intuition: backward-adjust all blocks at once

Cut-And-Stitch
Stitch step: collect the sufficient statistics for each block (C): E[zi], E[zi zi'], E[zi-1 zi']

Cut-And-Stitch
Stitch step: collect the sufficient statistics (C), then maximize the parameters (M)

Cut-And-Stitch
Stitch step: re-estimate the block parameters (R)
Intuition: exchange messages across blocks
Iterate…
[Plot: reconstructed signal vs. measured position of the left elbow]
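The message exchange at a block boundary can be illustrated with a generic Gaussian reconciliation: two blocks hold Gaussian beliefs about the same boundary state, and a precision-weighted (product-of-Gaussians) combination fuses them. This conveys the stitch idea only; the paper's actual re-estimation equations for (υi, Φi, ηi, Ψi) differ in detail.

```python
import numpy as np

def stitch_boundary(mu_a, V_a, mu_b, V_b):
    """Illustrative stitch: fuse two Gaussian beliefs N(mu_a, V_a) and
    N(mu_b, V_b) about the same boundary state by precision weighting.
    Not the paper's exact equations."""
    P_a, P_b = np.linalg.inv(V_a), np.linalg.inv(V_b)
    V = np.linalg.inv(P_a + P_b)          # fused covariance
    mu = V @ (P_a @ mu_a + P_b @ mu_b)    # precision-weighted mean
    return mu, V
```

With equal covariances the fused mean is simply the average of the two beliefs, and the fused covariance shrinks, reflecting that the boundary state is now constrained from both sides.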

Outline
Background
Motivation: effortless motion stitching
Parallel learning with Cut-And-Stitch
Experiments and Results
Conclusion

Experiments
Q1: How much speedup can we get?
Q2: How good is the reconstruction accuracy?

Experiments
Dataset: 58 human motion sequences, 200 to 500 frames each; each frame has 93 bone positions in body-local coordinates
Setup:
Supercomputer: SGI Altix system, distributed shared-memory architecture
Multi-core desktop: 4 Intel Xeon cores, shared memory
Task: learn the dynamics and hidden variables, and reconstruct the motion

Q1: How much speedup?
[Plot: supercomputer results, speedup vs. number of processors, showing the ideal line and the average over the 58 sequences]

Q1: How much speedup?
[Plot: multi-core results, speedup vs. number of cores, showing the ideal line and the average over the 58 sequences]

Q2: How good?
Result: ~identical accuracy (to sequential learning)

Conclusion & Contributions
A distance function for motion stitching, based on a first principle: minimize effort
A general approximate parallel learning algorithm for LDS:
Near-linear speedup
Accuracy (NRE): ~identical to sequential learning
Easily extended to HMMs and other chain-structured Markovian models
Software (C++ with OpenMP) and datasets:

Promising Extensions
Extensions: HMMs; other Markov models with a similar (chain-structured) graphical model
Open problem: can we prove an error bound?

Thank you. Questions?


Approximate Parallel Learning
[Diagram: iteration timeline: iteration 0 is a warm-up; iteration 1 runs steps 1 and 2, iteration 2 runs steps 3 and 4, iteration 3 runs steps 5 and 6]