
1 Learning Dynamic Models from Unsequenced Data
Jeff Schneider
School of Computer Science, Carnegie Mellon University
joint work with Tzu-Kuo Huang and Le Song

2 Learning Dynamic Models
Hidden Markov Models, e.g. for speech recognition
Dynamic Bayesian Networks, e.g. for protein/gene interaction
System Identification, e.g. for control [Bagnell & Schneider, 2001]
[images: Hubble Ultra Deep Field; sources: Wikimedia Commons, SISL ARLUT, UAV ETHZ]
Key assumption: SEQUENCED observations. What if observations are NOT SEQUENCED?

3 When are Observations not Sequenced?
Galaxy evolution: dynamics are too slow to watch
Slow-developing diseases: Alzheimer's, Parkinson's
Biological processes: measurements are often destructive
[image sources: STAGES, Getty Images, Bryan Neff Lab, UWO]
How can we learn dynamic models for these?

4 Outline
Linear Models [Huang and Schneider, ICML 2009]
Nonlinear Models [Huang, Song, Schneider, AISTATS 2010]
Combining Sequenced and Unsequenced Data [Huang and Schneider, NIPS 2011]

5 Problem Description: estimate A from the sample of x_i's
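The transcript drops the slide's equations. A minimal statement of the linear-model setup, consistent with the surrounding slides (the noise form and the symbols σ and n are assumptions; only A and the x_i's survive in the transcript):

```latex
x_{t+1} = A\,x_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0,\,\sigma^2 I),
```

with the sample $\{x_i\}_{i=1}^{n}$ observed without its time indices.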

6 Doesn't seem impossible …

7 Identifiability Issues

8

9 A Maximum Likelihood Approach
suppose we knew the dynamic model and the predecessor of each point …

10 Likelihood (continued)

11 Likelihood (continued)
we don't know the time either, so also integrate out over time
then use the empirical density as an estimate for the resulting marginal distribution

12 Unordered Method (UM): Estimation

13 Expectation Maximization
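Slides 9-13 describe the pieces — soft predecessor assignments under the current model, then weighted regression — without pseudocode. A minimal numpy sketch of that EM loop for the linear case (the function name, the fixed noise scale `sigma`, and the exact soft-assignment form are assumptions, not the paper's estimator):

```python
import numpy as np

def em_unsequenced(X, A0, sigma=1.0, n_iter=10):
    """EM sketch for x' = A x + noise with unsequenced data X of shape (n, d).
    E-step: soft-assign a predecessor to each point under the current A;
    M-step: weighted least squares for A."""
    A = A0.copy()
    for _ in range(n_iter):
        # E-step: w[i, j] proportional to N(x_i | A x_j, sigma^2 I), j != i
        pred = X @ A.T                                  # row j holds A x_j
        d2 = ((X[:, None, :] - pred[None, :, :]) ** 2).sum(-1)
        logw = -d2 / (2.0 * sigma ** 2)
        np.fill_diagonal(logw, -np.inf)                 # no self-predecessor
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: argmin_A  sum_ij w_ij ||x_i - A x_j||^2
        S_xy = np.einsum('ij,ik,jl->kl', w, X, X)       # sum_ij w_ij x_i x_j^T
        S_yy = np.einsum('ij,jk,jl->kl', w, X, X)       # sum_ij w_ij x_j x_j^T
        A = S_xy @ np.linalg.pinv(S_yy)
    return A
```

The M-step is an ordinary weighted regression: every ordered pair (j → i) contributes a training example with weight w_ij.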

14 Sample Synthetic Result [figure: input vs. output]

15 Partial-order Method (PM)

16 Partial Order Approximation (PM)
Perform estimation by alternating maximization
Replace UM's E-step with a maximum spanning tree on the complete graph over data points
- the weight on each edge is the probability of one point being generated from the other, given A and the noise covariance Σ
- this enforces a global consistency on the solution
M-step is unchanged: weighted regression
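The E-step replacement described above can be sketched with scipy's spanning-tree routine (a maximum spanning tree is a minimum spanning tree on negated scores; the symmetric edge score below is an assumption about how the two generation directions are combined):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def pm_estep(X, A, sigma=1.0):
    """Sketch of PM's E-step: a maximum spanning tree over the complete graph
    on the data points, with edge scores from the generation likelihoods
    under the current A and noise scale sigma."""
    d2 = ((X[:, None, :] - (X @ A.T)[None, :, :]) ** 2).sum(-1)
    logw = -d2 / (2.0 * sigma ** 2)     # logw[i, j]: x_i generated from x_j
    score = np.maximum(logw, logw.T)    # undirected score: best direction
    cost = score.max() - score + 1.0    # strictly positive; min tree = max tree
    np.fill_diagonal(cost, 0.0)         # zero means "no edge" for csgraph
    mst = minimum_spanning_tree(cost)
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))  # the n-1 tree edges
```

Each tree edge can then be oriented by whichever direction has the larger log-weight, giving the partial order that the unchanged weighted-regression M-step consumes.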

17 Learning Nonlinear Dynamic Models [Huang, Song, Schneider, AISTATS 2010]

18 Learning Nonlinear Dynamic Models
An important issue:
- a linear model provides a severely restricted space of models; we know a model is wrong because the regression yields large residuals and low likelihoods
- nonlinear models are too powerful: they can fit anything!
Solution: restrict the space of nonlinear models
1. form the full kernel matrix
2. use a low-rank approximation of the kernel matrix
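Step 2 can be sketched with a truncated eigendecomposition (the RBF kernel and the `gamma` parameter are assumptions; the slide does not name the kernel):

```python
import numpy as np

def low_rank_kernel(X, rank, gamma=1.0):
    """Sketch: build the full RBF kernel matrix on X (n, d), then keep only
    its top `rank` eigenpairs to restrict the nonlinear model space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)             # full n x n kernel matrix (step 1)
    vals, vecs = np.linalg.eigh(K)      # eigenvalues in ascending order
    vals, vecs = vals[-rank:], vecs[:, -rank:]
    return vecs @ np.diag(vals) @ vecs.T  # rank-r approximation (step 2)
```

Keeping the top-r eigenpairs gives the best rank-r approximation of K in Frobenius norm, which is what caps the capacity of the kernelized model.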

19 Synthetic Nonlinear Data: Lorenz Attractor
Estimated gradients by kernel UM

20 Ordering by Temporal Smoothing

21 Ordering by Temporal Smoothing (continued)

22 Ordering by Temporal Smoothing (continued)

23 Evaluation Criteria

24 Results: 3D-1

25 Results: 3D-2

26 3D-1: Algorithm Comparison

27 3D-2: Algorithm Comparison

28 Methods for Real Data
1. Run k-means to cluster the data
2. Find an ordering of the cluster centers: TSP on pairwise L1 distances (TSP+L1), OR the Temporal Smoothing Method (TSM)
3. Learn a dynamic model for the cluster centers
4. Initialize UM/PM with the learned model
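Steps 1-2 of the pipeline above might look like this (the greedy nearest-neighbor tour is an assumption — the slide only says a TSP on L1 distances is solved, not how):

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial.distance import cdist

def order_centers_tsp_l1(X, k, seed=0):
    """Sketch of steps 1-2: k-means on the data X (n, d), then order the
    cluster centers with a greedy nearest-neighbor tour on L1 distances."""
    centers, _ = kmeans2(X, k, minit='++', seed=seed)   # step 1
    D = cdist(centers, centers, metric='cityblock')     # pairwise L1 distances
    order = [0]
    remaining = set(range(1, k))
    while remaining:                                    # greedy tour (step 2)
        last = order[-1]
        nxt = min(remaining, key=lambda j: D[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return centers, order
```

The dynamic model learned on the ordered centers (step 3) then initializes UM/PM on the full data (step 4).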

29 Gene Expression in Yeast Metabolic Cycle

30 Gene Expression in Yeast Metabolic Cycle

31 Results on Individual Genes

32 Results over the Whole Space

33 Cosine Score in High Dimensions
[figure: probability of a random direction achieving a cosine score > 0.5, as a function of dimension]

34 Suppose we have some sequenced data
- linear dynamic model; perform a standard regression
- what if the amount of data is not enough to regress reliably?

35 Regularization for Regression
add regularization to the regression: ridge regression, or the lasso
can the unsequenced data be used in regularization?
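For the ridge option, the closed-form estimate of A from a sequenced trajectory is standard (the function name and the `lam` default are illustrative):

```python
import numpy as np

def ridge_dynamics(X_seq, lam=0.1):
    """Ridge-regularized estimate of A from a sequenced trajectory X_seq (T, d),
    minimizing sum_t ||x_{t+1} - A x_t||^2 + lam * ||A||_F^2."""
    X0, X1 = X_seq[:-1], X_seq[1:]          # predecessor/successor pairs
    d = X_seq.shape[1]
    return X1.T @ X0 @ np.linalg.inv(X0.T @ X0 + lam * np.eye(d))
```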

36 Lyapunov Regularization
the Lyapunov equation relates the dynamic model to the steady-state distribution; Q is the covariance of the steady-state distribution
1. estimate Q from the unsequenced data!
2. optimize via gradient descent, using the unpenalized or the ridge regression solution as the initial point
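For a discrete-time model x_{t+1} = A x_t + ε with ε ~ N(0, Σ), the steady-state covariance Q satisfies the Lyapunov equation A Q Aᵀ + Σ = Q. A gradient-descent sketch of the scheme described above (the discrete-time form, the squared-Frobenius penalty, and all step-size defaults are assumptions):

```python
import numpy as np

def lyapunov_regularized_A(X_seq, X_unseq, Sigma, lam=0.1, lr=1e-3, n_iter=500):
    """Sketch of Lyapunov regularization: fit A to the short sequence while
    penalizing violation of A Q A^T + Sigma = Q, with Q estimated from the
    unsequenced data; A starts at the unpenalized regression solution."""
    X0, X1 = X_seq[:-1], X_seq[1:]
    Q = np.cov(X_unseq.T)                       # step 1: steady-state covariance
    A = X1.T @ X0 @ np.linalg.pinv(X0.T @ X0)   # unpenalized initial point
    for _ in range(n_iter):                     # step 2: gradient descent
        R = A @ Q @ A.T + Sigma - Q             # Lyapunov residual
        grad = -2.0 * (X1 - X0 @ A.T).T @ X0 + lam * 4.0 * R @ A @ Q
        A = A - lr * grad
    return A
```

The penalty gradient 4 R A Q follows from differentiating ||A Q Aᵀ + Σ − Q||_F² with Q and R symmetric.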

37 Lyapunov Regularization: Toy Example
2-d linear system with σ = 1 and
A = [ -0.428   0.572
      -1.043  -0.714 ]
the 2nd column of A is fixed at the correct value
given 4 sequenced points and 20 unsequenced points

38 Lyapunov Regularization: Toy Example (continued)

39 Results on Synthetic Data
random 200-dimensional sparse (1/8) stable system

40 Work in Progress …
cell cycle data from [Zhou, Li, Yan, Wong, IEEE Trans. on Inf. Tech. in Biomedicine, 2009]
- 49 features on protein subcellular location
- from a set of 100 sequenced images, a tracking algorithm identified 34 sequences having a full cycle and length at least 30; another 11,556 are unsequenced
- use the 34 sequences as ground truth and train on the unsequenced data

41 Preliminary Results: Protein Subcellular Location Dynamics
[figures: cosine score and normalized error]

42 Conclusions and Future Work
- demonstrated the ability to learn (non)linear dynamic models from unsequenced data
- demonstrated a method to use sequenced and unsequenced data together
- continuing efforts on real scientific data
- can we do this with hidden states?

43 EXTRA SLIDES

44 Real Data: Swinging Pendulum Video

45 Results: Swinging Pendulum Video

46

47

