
1
Directed Exploration of Complex Systems
International Conference on Machine Learning, Montreal, Canada, 2009/06/16
Michael C. Burl* and Esther Wang^
Collaborators: Brian Enke+ and William J. Merline+
* Jet Propulsion Laboratory
+ Southwest Research Institute
^ California Institute of Technology

2
Overview
Motivation: Physics-based simulations are widely used to model complex systems.
+ High-fidelity representation of actual system behavior
-- Cumbersome and computationally expensive
Goal: Build a simplified predictive model of system behavior and use this model to understand which input parameters produce a desired output (+1) from the simulator.
[Figure: input parameters θ = (θ1, θ2) feed the Simulator; a Grading Script maps the simulator output to a binary label q(θ) ∈ {+1, -1}.]
Assumptions: the simulator is deterministic (and non-chaotic), and the learner is only informed of a binary-valued outcome.
Really want a function q̂(θ) such that q̂ agrees with q over most of the domain.

3
Example: Asteroid Collisions
Movie courtesy B. Enke, D. Durda, SwRI.
Currently working with an asteroid collision simulator: SPH + N-body gravity code.
Various science questions can be studied with this simulator:
•Under what conditions is a collision "catastrophic"?
•How are asteroid satellites (e.g., Ida-Dactyl) generated?
•How were particular asteroid families (e.g., Karin) formed?
But... the input space is 5-dimensional:
•Target body size (km)
•Impactor velocity (km/s)
•Impactor angle (degrees)
•Impactor size (km)
•Target composition (rubble/solid)
And... each trial takes 1 CPU-day! Grid sampling scales as ~k^d, where k is the number of steps along each dimension and d is the number of dimensions.
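The k^d cost of grid sampling is easy to make concrete with a back-of-the-envelope calculation (a minimal sketch; the 10-steps-per-axis resolution is an assumed example, not a number from the slides):

```python
def grid_trials(k, d):
    """Number of simulator runs needed to grid-sample a d-dimensional
    input space with k steps along each axis."""
    return k ** d

# Even a coarse grid over the 5-dimensional asteroid-collision input
# space is infeasible at 1 CPU-day per trial.
trials = grid_trials(10, 5)   # 10 steps per axis (assumed), 5 dimensions
print(trials)                 # 100000 trials, i.e. ~274 CPU-years
```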

4
Directed Exploration Approach
Current Knowledge:
•Grey: points with unknown labels
•White: points with known +1 labels
•Black: points with known -1 labels
1. Learn a model from current knowledge.
2. Decide which unlabeled point would be most valuable to label.
3. Get the label for the selected point from the oracle (by running the simulator/grading script) and add that point and its label to "current knowledge".
4. Repeat until we have a good understanding of the oracle.
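The four steps above can be sketched as a generic loop (a minimal sketch: `learn_model` and `select_point` are placeholder functions I introduce for illustration, here a 1-nearest-neighbor learner and passive random selection; the real system plugs in SVM/KDE/GP models and the MCP/MIP/META selectors described later):

```python
import random

def directed_exploration(oracle, candidates, rounds, seed=0):
    """Generic active-learning loop: label one point per round.

    oracle(theta) runs the simulator + grading script and returns +1/-1.
    candidates is a discretization of the input space (list of points)."""
    rng = random.Random(seed)
    labeled = {}                   # "current knowledge": point -> label
    unlabeled = list(candidates)
    for _ in range(rounds):
        model = learn_model(labeled)                  # step 1
        theta = select_point(model, unlabeled, rng)   # step 2
        labeled[theta] = oracle(theta)                # step 3 (1 CPU-day!)
        unlabeled.remove(theta)                       # step 4: repeat
    return labeled

def learn_model(labeled):
    """Placeholder learner: 1-nearest-neighbor over labeled 1-D points."""
    def predict(theta):
        if not labeled:
            return +1
        nearest = min(labeled, key=lambda t: abs(t - theta))
        return labeled[nearest]
    return predict

def select_point(model, unlabeled, rng):
    """Placeholder passive strategy: pick an unlabeled point at random."""
    return rng.choice(unlabeled)
```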

5
Two Key Steps
1. Learn a model from current knowledge:
•Support Vector Machine (SVM)
•Kernel Density Estimation (KDE)
•Gaussian Process classifier (GP)
2. Decide which unlabeled point would be most valuable to label:
•Passive – randomly select an unlabeled point.
•Most-Confused Point (MCP) – select the unlabeled point whose label is most uncertain.
•Most-Informative Point (MIP) – select the unlabeled point such that the expected information gain about the entire set of unlabeled points is maximized. [Holub et al., 2008]
•Meta-strategy – treat the base active learner's valuation as expressing a preference over points; choose randomly while honoring the strength of the preference. (Similar to the epsilon-greedy approach used in RL to balance exploration and exploitation.) [Burl et al., 2006]
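Given a model that outputs P(y = +1) for each unlabeled point, the MCP rule and a randomized meta-strategy can be sketched as follows (a sketch only: the uncertainty-proportional weighting used by `meta_select` is an assumed illustrative choice, not the exact preference scheme of [Burl et al., 2006]):

```python
import random

def mcp_select(unlabeled, prob_plus):
    """Most-Confused Point: the point whose P(y=+1) is closest to 0.5."""
    return min(unlabeled, key=lambda u: abs(prob_plus[u] - 0.5))

def meta_select(unlabeled, prob_plus, rng=random):
    """Meta-strategy sketch: sample randomly, weighting each point by how
    uncertain the model is about it (uncertainty = 0.5 - |p - 0.5|)."""
    weights = [0.5 - abs(prob_plus[u] - 0.5) for u in unlabeled]
    if sum(weights) == 0.0:
        return rng.choice(unlabeled)   # all labels certain: fall back
    return rng.choices(unlabeled, weights=weights, k=1)[0]
```

MCP always takes its top choice; the meta-strategy keeps some probability on less-preferred points, trading exploitation for exploration.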

6
Active Learning
•Determine which new point(s) in input space, if labeled, would be most informative for refining the boundaries of the solution region.
•Imagine a discretized input space:
  L(n) = set of labeled instances at end of trial n (red or green)
  U(n) = set of unlabeled instances at end of trial n (black)
•Ideally, choose θ(n+1) from U(n) such that the classifier q̂ learned from L(n+1) = L(n) ∪ {θ(n+1)} will bring q̂ closer to q.

7
A Defect with MCP
•MCP would choose Point A, but knowing the label of this point would not reveal much about the labels of the other unlabeled points.
•On the other hand, knowing the label of Point B would probably reveal the labels of the other nearby unlabeled points.
[Figure from [Holub et al., 2008], showing Points A and B.]

8
Most Informative Point
Current Knowledge:
•Grey: points with unknown labels
•White: points with known +1 labels
•Black: points with known -1 labels
Prediction given current knowledge: for a candidate point h, compute the probability of each unlabeled point's label being +1 under both hypotheses (assuming y_h = +1, and assuming y_h = -1).
Expected information gain:
  I_gain = H_0 - (p_+ H_+ + p_- H_-)
  H(Y) = -Σ_i p(y_i) log p(y_i)
where H(Y) is the entropy (a measure of uncertainty), p(y_i) is the probability of label i being +1, H_0 is the entropy before labeling h, H_± is the entropy assuming y_h = ±1, and p_± is the predicted probability of each hypothesis.
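The expected-information-gain computation can be sketched directly from the formula above (a sketch under assumptions: `predict` is a hypothetical interface I introduce, standing in for retraining the classifier with hypothesized labels; binary entropy in bits is an assumed convention):

```python
import math

def entropy(probs):
    """Total binary entropy (in bits) over a set of P(y=+1) values."""
    h = 0.0
    for p in probs:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log2(q)
    return h

def most_informative_point(unlabeled, predict):
    """Select the unlabeled point h maximizing expected information gain.

    predict(extra) returns {point: P(y=+1)} for all unlabeled points,
    conditioning on extra hypothesized labels (hypothetical interface)."""
    p0 = predict({})
    h0 = entropy(p0.values())
    best, best_gain = None, -1.0
    for h in unlabeled:
        p_plus = p0[h]                 # P(y_h = +1) under current model
        others = [u for u in unlabeled if u != h]
        hp = predict({h: +1})          # retrain assuming y_h = +1
        hm = predict({h: -1})          # retrain assuming y_h = -1
        h_plus = entropy(hp[u] for u in others)
        h_minus = entropy(hm[u] for u in others)
        # I_gain = H_0 - (p_+ H_+ + p_- H_-)
        gain = h0 - (p_plus * h_plus + (1.0 - p_plus) * h_minus)
        if gain > best_gain:
            best, best_gain = h, gain
    return best
```

Note the cost: each candidate requires two retrain-and-predict passes over the whole unlabeled set, which is why MIP scales poorly compared with MCP.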

9
Experimental Setup
•Difficult to evaluate effectiveness with a real simulator since the true q() function is unknown.
•Use synthetic oracles instead.
•Added benefit: allows other researchers to replicate or improve upon the results.
Oracles (13): LevelSet 2D oracles (9), LevelSet 3D oracles (3), inverted pendulum on a cart (1)
Algorithms (9):
        PAS   MCP   MIP   META
  SVM    X     X    ---    X
  KDE    X     X     X    ---
  GP     X     X     X    ---
SVM = Support Vector Machine, KDE = Kernel Density Estimation, GP = Gaussian Process
PAS = Passive Learning (random), MCP = Most Confused Point, MIP = Most Informative Point, META = Randomize based on MCP

10
Set of Synthetic Oracles
[Figure: thumbnails of the 13 synthetic oracles, numbered 1-13.]

11
Results
[Figure: GP-MCP ROC curves on Oracle 7.]
•Run each algorithm-oracle combination for 400 rounds of active learning.
•Repeat seven times to get a handle on the variance.
•Use the classifier learned after r rounds to make a probabilistic prediction at each point in a fine discretization of the domain.
•Threshold the probabilities and compare against ground truth to generate ROC curves.
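The threshold-and-compare step for building an ROC curve can be sketched as follows (a minimal sketch of the standard construction, not the authors' exact evaluation code):

```python
def roc_points(probs, truth):
    """Sweep a threshold over predicted P(y=+1) values and return
    (P_fa, P_d) pairs against ground-truth +1/-1 labels.

    P_d  = detections / positives  (true-positive rate)
    P_fa = false alarms / negatives (false-positive rate)"""
    pos = sum(1 for t in truth if t == +1)
    neg = len(truth) - pos
    points = []
    for thr in sorted(set(probs), reverse=True):
        tp = sum(1 for p, t in zip(probs, truth) if p >= thr and t == +1)
        fp = sum(1 for p, t in zip(probs, truth) if p >= thr and t == -1)
        points.append((fp / neg, tp / pos))
    return points
```

Reading off P_d at a fixed P_fa (e.g., 0.05) from such curves gives the scalar scores reported in the later slides.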

12
Active vs. Passive
[Figure: Pd @ Pfa = 0.05 vs. round on Oracle 9, comparing active and passive learners; SVM-MCP highlighted.]
•Active outperforms passive.
•Exception: SVM-MCP.
•KDE-MCP reaches 90% at round 100; KDE-PAS doesn't until after round 300 => speedup over 3x.

13
Head-to-Head: KDE-MIP vs. KDE-PAS @ Round 50
•Active wins on 9 oracles.
•Passive wins on 2 oracles (#11, #12).
•Toss-up on 2 oracles (#9, #13).

14
Head-to-Head: KDE-MIP vs. KDE-PAS @ Round 100
•Active is equal or better on almost all oracles.

15
MCP vs. MIP?
•MCP appears to be better than the more sophisticated MIP.
•A similar result holds for GP-MCP vs. GP-MIP.
•Is MIP more sensitive to errors in the estimated probabilities?

Oracle | KDE-PAS         | KDE-MCP         | KDE-MIP
   1   | 0.880 +/- 0.030 | 0.988 +/- 0.006 | 0.962 +/- 0.021
   2   | 0.865 +/- 0.045 | 0.972 +/- 0.009 | 0.981 +/- 0.005
   3   | 0.772 +/- 0.043 | 0.947 +/- 0.018 | 0.929 +/- 0.019
   4   | 0.713 +/- 0.035 | 0.813 +/- 0.048 | 0.822 +/- 0.015
   5   | 0.642 +/- 0.047 | 0.795 +/- 0.040 | 0.832 +/- 0.033
   6   | 0.763 +/- 0.051 | 0.890 +/- 0.015 | 0.885 +/- 0.017
   7   | 0.588 +/- 0.052 | 0.731 +/- 0.023 | 0.634 +/- 0.028
   8   | 0.674 +/- 0.055 | 0.793 +/- 0.023 | 0.815 +/- 0.015
   9   | 0.622 +/- 0.038 | 0.725 +/- 0.035 | 0.615 +/- 0.035
  10   | 0.603 +/- 0.079 | 0.781 +/- 0.050 | 0.881 +/- 0.029
  11   | 0.264 +/- 0.039 | 0.235 +/- 0.057 | 0.204 +/- 0.026
  12   | 0.332 +/- 0.071 | 0.513 +/- 0.036 | 0.167 +/- 0.028
  13   | 0.246 +/- 0.079 | 0.140 +/- 0.054 | 0.235 +/- 0.114

16
Head-to-Head: GP-MCP vs. SVM-META @ Round 50
•GP-MCP and SVM-META perform similarly.

17
Decision Surface
Solution region obtained for a 3D version of the problem in prior work [Burl et al., 2006] using SVM-META.

18
Conclusion
•Directed exploration shows promise for efficiently understanding the behavior of complex systems modeled by numerical simulations:
  •Use the simulator as an oracle to sequentially generate labeled training data.
  •Learn a predictive model from the currently available training data.
  •Use active learning to choose which simulation trials to run next.
  •Get new labeled example(s) and repeat.
•Performance was systematically evaluated over synthetic oracles.
•Active learning yielded a significant improvement over passive learning:
  •In some cases, a 3x reduction in the number of trials needed to reach 90% Pd at Pfa = 0.05.
•Exploiting the time savings:
  •The same final result can be obtained with fewer simulation trials.
  •More simulation trials can be conducted in a given amount of time.
  •Higher-fidelity (e.g., finer spatio-temporal resolution) simulation trials can be used.
•Active learning concentrates trials at the boundary between interesting and non-interesting regions.
•There was no clear-cut winner among the different supervised learning techniques and active learning algorithms.
•MCP or SVM-META may be preferable to MIP due to lower computation; further study is needed.

19
Future Work
•Kernel contraction: shrink the kernel bandwidth parameter as more data is acquired.
•Hill-climbing for point selection.
•Maximize throughput on a computer cluster: choose new points while other runs are in progress.
•Scalability of MIP.
Acknowledgements
W.J. Merline, B. Enke, D. Nesvorný, D. Durda
A. Holub, P. Perona, D. DeCoste, D. Mazzoni, L. Scharenbroich
Erika C. Vote SURF Fellowship, The Housner Fund
NASA Applied Information Systems Research Program, J. Bedekamp
