© 2008 Warren B. Powell, Princeton University
Optimal Learning
Informs TutORials, October 2008
Warren Powell and Peter Frazier, Princeton University
Outline: Introduction

Applications — Sports
»Who should be in the batting lineup for a baseball team?
»What is the best group of five basketball players out of a team of 12 to be your starting lineup?
»Who are the best four people to man the four-person boat for crew racing?
»Who will perform the best in competition for your gymnastics team?

Applications — Figuring out Manhattan:
»Walking
»Subway/walking
»Taxi
»Street bus
»Driving

Applications — Biomedical research
»How do we find the best drug to cure cancer?
»There are millions of combinations, with laboratory budgets that cannot test everything.
»We need a method for sequencing experiments.

Applications — Biosurveillance
»What is the prevalence of drug-resistant TB, MRSA, HIV/AIDS, malaria, … in the population?
»How do we efficiently collect information about the state of disease around the world?
»What are the best strategies for minimizing transmission?
(Figure: deaths from vector-borne diseases)

Applications — High technology
»What is the best sensor to use to evaluate the status of optics for the National Ignition Facility?
»When should lenses be inspected?
»How often should an experiment be run to test a new hypothesis on the physics of fusion?
(Figure: National Ignition Facility)

Applications — Stochastic optimization
»Stochastic search over surfaces that can only be measured with uncertainty
»Simulation-optimization – What is the best set of parameters to produce the best manufacturing configuration?
»Active learning – How do we choose which samples to collect for machine learning applications?
»Exploration vs. exploitation in approximate dynamic programming – How do we decide which states to visit, balancing our need to estimate the value of being in a state against the reward from visiting a state?
Introduction — Deterministic optimization
»Find the choice with the highest reward (assumed known): the winner is simply the choice with the largest reward.

Introduction — Stochastic optimization
»Now assume the reward you will earn is stochastic, drawn from a normal distribution. The reward is revealed after the choice is made.

Introduction — Optimal learning
»Now you have a budget of 10 measurements to determine which of the 5 choices is best. You have an initial probability distribution for the reward that each will return, but you are willing to change your belief as you make choices. How should you sequence your measurements to produce the best answer in the end? We might keep trying the option we think is best … but what if the third or fourth choice is actually the best?

Introduction
Now assume we have five choices, with uncertainty in our belief about how well each one will perform. Imagine you can make a single measurement, after which you have to make a choice about which one is best. What would you do?
»One possible outcome: the measurement does not change which choice looks best (no improvement).
»Another possible outcome: the measurement reveals a new best choice (new solution). The value of learning is that it may change your decision.
Outline: Types of learning problems
Elements of a learning problem — Things we have to think about:
»How do we make measurements? What is the nature of the measurement decision?
»What is the effect of a measurement? How does it change our state of knowledge?
»What do we do with the results of what we learn from a measurement?
»How do we evaluate how well we have done with the results of our measurements?
»Do we learn as we go, or are we able to make a series of measurements before solving a problem?

Elements of a learning problem — Types of measurement decisions
»Stopping problems – observe until you have to make a decision, such as selling an asset.
»Finite (and not too big) sets of choices
»Subset selection
–What is the best group of people for a sports team?
–What is the best subset of energy-saving technologies for a building?
»Scalar choices – what is the best price, density, temperature, speed?
»Linear, nonlinear and integer programming

Elements of a learning problem — Optimal learning
»Now assume that you do not know the distribution of the reward, although you have an estimate (a “prior”).
»After you make your choice, you observe the actual reward, which changes your belief about the distribution of rewards.
Elements of a learning problem — Updating the distribution
»Frequentist view
Assume we start with n observations W^1, …, W^n.
Statistics: the sample mean is θ̄^n = (1/n) Σ_m W^m, and the sample variance is σ̂^{2,n} = (1/(n−1)) Σ_m (W^m − θ̄^n)².
Frequentist interpretation:
–θ̄^n and σ̂^{2,n} are random variables reflecting the randomness in the observations W^m.
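The frequentist statistics above can be sketched in a few lines; this is a generic illustration (the observation values are made up), not code from the tutorial:

```python
# Frequentist estimates of an unknown mean from noisy observations.
# After n observations W^1..W^n, the sample mean and (unbiased) sample
# variance are the standard random estimators of the mean and noise variance.
def frequentist_estimates(observations):
    n = len(observations)
    mean = sum(observations) / n
    # unbiased sample variance (requires n >= 2)
    var = sum((w - mean) ** 2 for w in observations) / (n - 1)
    return mean, var

obs = [22.0, 26.0, 24.0, 28.0]   # hypothetical travel-time observations
mean, var = frequentist_estimates(obs)
print(mean, var)  # 25.0 and 20/3 ≈ 6.667
```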
Elements of a learning problem — Updating the distribution
»Bayesian view
We assume we start with a distribution of belief about the true mean: μ ~ N(μ^n, σ^{2,n}).
Next we observe W^{n+1}, which we assume comes from a distribution with variance σ_W² (we assume the variance is known).
Using Bayes' theorem, we can show that our new distribution of belief about the true mean is normally distributed with mean μ^{n+1} and variance σ^{2,n+1}.
We first define the precision of a distribution as the inverse variance: β^n = 1/σ^{2,n} and β_W = 1/σ_W².
–The updating formulas are
  β^{n+1} = β^n + β_W
  μ^{n+1} = (β^n μ^n + β_W W^{n+1}) / β^{n+1}
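The Bayesian updating formulas can be sketched the same way; the prior and observation values below are hypothetical:

```python
# Bayesian updating with known measurement noise, written in terms of
# precisions (inverse variances). The prior is N(mu, 1/beta); the
# observation w has noise precision beta_w. The posterior mean is a
# precision-weighted average of the prior mean and the observation.
def bayes_update(mu, beta, w, beta_w):
    beta_new = beta + beta_w
    mu_new = (beta * mu + beta_w * w) / beta_new
    return mu_new, beta_new

# prior: mean 20, variance 25 (precision 0.04); observe 30 with noise variance 25
mu, beta = bayes_update(20.0, 1 / 25, 30.0, 1 / 25)
print(mu, 1 / beta)  # posterior mean 25.0, posterior variance 12.5
```

Because prior and noise precisions are equal here, the posterior mean lands halfway between prior and observation, and the posterior variance is halved.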
Elements of a learning problem — Frequentist vs. Bayesian
»For optimal learning applications, we are generally in the situation where we have some knowledge about our choices, and we have to decide which one to measure to improve our final decision.
»The state of knowledge:
Frequentist view: the estimates (θ̄^n, σ̂^{2,n}) and the number of observations n.
Bayesian view: the posterior parameters (μ^n, β^n).
»For the remainder of the talk we adopt the Bayesian view, since it allows us to introduce prior knowledge, a common property of learning problems.
Elements of a learning problem — Relationships between beliefs and measurements
»Beliefs
Uncorrelated – What we know about one choice tells us nothing about another choice.
Correlated – If our belief about one choice is high, our belief about a similar choice is likely to be high as well.
»Measurement noise
Uncorrelated – If we were to make two measurements at the same time, the measurements would be independent.
Correlated:
–At a point in time – simultaneous measurements are correlated.
–Over time – measurements of different choices may or may not be correlated, but measurements of the same choice at different points in time are correlated.
Elements of a learning problem — Types of learning problems
»On-line learning
Learn as you earn. Examples:
–Finding the best path to work
–What is the best set of energy-saving technologies to use for your building?
–What is the best medication to control your diabetes?
»Off-line learning
There is a phase of information collection with a finite (sometimes small) budget. You are allowed to make a series of measurements, after which you make an implementation decision. Examples:
–Finding the best drug compound through laboratory experiments
–Finding the best manufacturing configuration or engineering design, evaluated using an expensive simulation
–What is the best combination of designs for hydrogen production, storage and conversion?
Elements of a learning problem — Measuring the benefits of knowledge:
»Minimizing/maximizing a cost or reward
Minimizing expected cost / maximizing expected reward or utility
Minimizing expected opportunity cost (minimizing the gap from the best possible)
Collecting information to produce a better solution to an optimization problem
»Making the right choice
Maximizing the probability of making the correct selection
Indifference-zone selection – maximizing the probability of selecting a choice whose performance is within δ of the optimal
»Statistical measures
Minimizing a measure (square, absolute value) of the distance between observations and a predictive function (classical estimation)
Minimizing a metric (e.g. Kullback–Leibler divergence) measuring the distance between actual and predicted probability distributions
Minimizing entropy (or entropic loss)
Outline: Measurement policies
Measurement policies — What do we know?
»The real average path times (errors are +/- 10 minutes):
  Path 1: 20 minutes
  Path 2: 22 minutes
  Path 3: 24 minutes
  Path 4: 26 minutes
»What we think:
  Path 1: 25 minutes
  Path 2: 24 minutes
  Path 3: 22 minutes
  Path 4: 20 minutes
»We act by choosing the path that we “think” is the best. The only way we learn anything new is by choosing a path.
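As a hedged illustration of why acting only on current beliefs is risky, the sketch below simulates pure exploitation on the four paths above. The ±10 minute noise is modeled as uniform (an assumption), and a path we never try keeps its wrong prior estimate forever:

```python
import random

# Pure exploitation on the four-path example: always drive the path we
# currently believe is fastest, observe a noisy travel time, and fold it
# into a running average. True means and (wrong) priors are from the slide.
def pure_exploitation(true_means, priors, n_days, noise=10.0, seed=0):
    rng = random.Random(seed)
    beliefs = list(priors)
    counts = [0] * len(beliefs)
    for _ in range(n_days):
        x = min(range(len(beliefs)), key=lambda i: beliefs[i])  # believed fastest
        w = true_means[x] + rng.uniform(-noise, noise)          # observed time
        counts[x] += 1
        beliefs[x] += (w - beliefs[x]) / counts[x]              # running average
    return beliefs, counts

true_means = [20, 22, 24, 26]   # path 1 is really the best
priors = [25, 24, 22, 20]       # ...but we believe path 4 is the best
beliefs, counts = pure_exploitation(true_means, priors, n_days=100)
print(counts)  # any path we never try keeps its (wrong) prior estimate
```

Typically the policy settles on a mediocre path whose estimate has drifted below 25 minutes and never tries path 1 at all, which is exactly the trap the slide describes.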
Measurement policies — Illustration of calculations (worked figures)
Measurement policies — For problems with a finite number of alternatives
»On-line learning (learn as you earn)
This is known in the literature as the multi-armed bandit problem, where you are trying to find the slot machine with the highest payoff.
It is necessary to trade off what you think you will earn with each decision against the value of the information you will gain, which might improve decisions in the future.
»Off-line learning
You have a budget for taking measurements. After your budget is exhausted, you have to make a final choice. This is known as the ranking and selection problem.
Measurement policies — Elements of a measurement policy:
»Deterministic or sequential
Deterministic policy – you decide what you are going to measure in advance.
Sequential policy – future measurements depend on past observations.
»Designing a measurement policy
We have to strike a balance between the value of a good measurement policy and the cost of computing it.
If we are drilling oil exploration holes, we might be willing to spend a day on the computer deciding what to do next.
We may need a trivial calculation if we are guiding an algorithm that will perform thousands of iterations.
»Evaluating a policy
The goal is to find a policy that gets us close enough to the truth that we make optimal (or near-optimal) decisions.
To do this, we have to assume a truth, and then use the policy to try to discover that truth.
Measurement policies — Finding an optimal policy
»Dynamic programming formulation
Let S^n be the “state of knowledge.”
–E.g. if we have 10 choices, each with a mean and precision, our state would be S^n = (μ_1^n, …, μ_10^n, β_1^n, …, β_10^n).
An optimal learning policy is characterized by Bellman's equation:
  V(S^n) = max_x E[ C(S^n, x) + V(S^{n+1}) | S^n ]
»Computational challenges
The state variable has 20 dimensions, each continuous. Solving this is impossible (and this is a simple problem!)
Measurement policies — Special case: on-line learning with independent beliefs
»Multi-armed bandit problem – which slot machine should I try next to maximize total expected rewards?
»Breakthrough (Gittins and Jones, 1974)
Do not need to solve the high-dimensional dynamic program.
Compute a single index (the “Gittins index”) for each slot machine.
Try the slot machine with the largest index.
For normally distributed rewards, the index has the form
  ν_x = μ_x^n + σ_W · G(σ_x^{2,n}/σ_W², γ),
where μ_x^n is the current estimate of the reward from machine x, σ_W is the standard deviation of a measurement, and G is the Gittins index for a problem with mean zero and variance 1.
»Notes
Yao (2006) and Brezzi and Lai (2002) provide analytical approximations for G.
Despite an extensive literature on index policies, the range of applications is fairly limited.
Measurement policies — Heuristic measurement policies
»Pure exploitation – always make the choice that appears to be the best.
»Pure exploration – make choices at random so that you are always learning more, but without regard to the cost of the decision.
»Hybrid
Explore with probability ε and exploit with probability 1 − ε.
Epsilon-greedy exploration – explore with probability ε_n, where ε_n goes to zero as n → ∞, but not too quickly (e.g. ε_n = c/n).
»Boltzmann exploration
Explore choice x with probability P(x) = exp(θ μ_x^n) / Σ_{x'} exp(θ μ_{x'}^n).
»Interval estimation (upper confidence bounding)
Choose the x which maximizes μ_x^n + z_α σ_x^n.
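Minimal sketches of these heuristic policies, assuming current estimates mu[x] and standard deviations sigma[x] for each choice; the tuning constants c, temperature and z are illustrative, not values from the tutorial:

```python
import math
import random

# Heuristic measurement policies for a finite set of alternatives.
def epsilon_greedy(mu, n, c=1.0, rng=random):
    eps = min(1.0, c / n)                 # exploration rate -> 0 as n grows
    if rng.random() < eps:
        return rng.randrange(len(mu))     # explore: pick a choice at random
    return max(range(len(mu)), key=lambda x: mu[x])  # exploit: pick best estimate

def boltzmann(mu, temperature, rng=random):
    # sample x with probability proportional to exp(mu[x] / temperature)
    weights = [math.exp(m / temperature) for m in mu]
    r, acc = rng.random() * sum(weights), 0.0
    for x, w in enumerate(weights):
        acc += w
        if r < acc:
            return x
    return len(mu) - 1

def interval_estimation(mu, sigma, z=1.96):
    # choose the x maximizing mu[x] + z * sigma[x] (upper confidence bound)
    return max(range(len(mu)), key=lambda x: mu[x] + z * sigma[x])

# a highly uncertain choice can beat one with a better point estimate
print(interval_estimation([20.0, 24.0], [10.0, 1.0]))  # 0: 20+19.6 > 24+1.96
```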
Measurement policies — Approximate policies for off-line learning
»Optimal computing budget allocation (OCBA)
»LL(S) – batch linear loss (Chick et al.)
»Maximizing the expected value of a single measurement (R1, R1, …, R1)
Gupta and Miescke (1996)
EVI (Chick, Branke and Schmidt, under review)
“Knowledge gradient” (Frazier and Powell, 2008)
Measurement policies — Evaluating measurement policies
»How do we compare one measurement policy to another?
»One possibility: score each policy by the estimated value of its apparent best choice … but we would be wrong!
Measurement policies — Illustration
»Setup:
Option 1 is worth 15.
The remaining 999 options are worth 10.
The standard deviation of a measurement is 5.
»Policy 1: measure each option 10 times.
»Policy 2: measure the remaining 999 options once each; measure the first option 9,001 times.
»Which measurement policy produces the best result?
Measurement policies
(Figure: estimates from measuring each alternative 10 times – the apparent “best choice.”)

Measurement policies
(Figure: estimates from measuring option 1 9,001 times and everything else once – a “lucky choice” appears best.)
Measurement policies — What did we find?
»Although option 1 is best, we will almost always identify some other option as being better, just through randomness. This method rewards collecting too little information.
A better way:
»Assume a truth μ_x for each x. We do this by choosing a sample realization of the truth from a prior probability distribution for the mean.
»Given this truth, apply policy π to produce statistical estimates θ̄_x^π. Let x^π = arg max_x θ̄_x^π be the best solution based on these estimates.
»Repeat this n times and evaluate the policy using the average true value of the selected solutions:
  F^π = (1/n) Σ_{i=1}^n μ_{x^π(ω^i)}(ω^i)
»Note: this must be done with realistic (but not real) data.
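The evaluation recipe above can be sketched as follows; the prior N(10, 2²), the noise level, and the equal-allocation baseline policy are all illustrative assumptions:

```python
import random

# Evaluate a measurement policy by sampling a "truth" from the prior,
# running the policy to get estimates, picking the apparent best, and
# scoring the policy by the TRUE value of that choice (not its estimate),
# averaged over many sampled truths.
def evaluate_policy(policy, n_choices, n_budget, n_truths, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_truths):
        truth = [rng.gauss(10.0, 2.0) for _ in range(n_choices)]  # sampled truth
        estimates = policy(truth, n_budget, rng)
        x_best = max(range(n_choices), key=lambda x: estimates[x])
        total += truth[x_best]       # scored against the truth, not the estimate
    return total / n_truths

def equal_allocation(truth, n_budget, rng, noise=5.0):
    # baseline policy: measure each choice equally often; return sample means
    per = max(1, n_budget // len(truth))
    return [sum(t + rng.gauss(0, noise) for _ in range(per)) / per for t in truth]

score = evaluate_policy(equal_allocation, n_choices=5, n_budget=50, n_truths=200)
print(round(score, 2))   # average true value of the selected choice
```

Because the score is computed from the sampled truth, a policy cannot look good merely by collecting too little information, which fixes the flaw illustrated above.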
Outline: The knowledge gradient policy
The knowledge gradient — Basic principle:
»Assume you can make only one measurement, after which you have to make a final choice (the implementation decision).
»What choice would you make now to maximize the expected value of the implementation decision?
(Figure: a measurement changes the estimate of the value of option 5; a large enough change produces a change in the decision.)
The knowledge gradient — General model
»Off-line learning – we have a measurement budget of N observations. After we finish our measurements, we have to make an implementation decision.
»Notation: S^n = (μ^n, β^n) is the knowledge state after n measurements, where μ_x^n is our estimate of the value of choice x and β_x^n is its precision.
The knowledge gradient
»The knowledge gradient is the expected marginal value of a single measurement x, given by
  ν_x^{KG,n} = E[ max_{x'} μ_{x'}^{n+1}(x) − max_{x'} μ_{x'}^n | S^n ],
where S^n is the knowledge state, max_{x'} μ_{x'}^n is the value of the implementation decision now, μ^{n+1}(x) is the updated knowledge state given measurement x, and the expectation is over the different measurement outcomes.
»The challenge is a computational one: how do we compute the expectation?
The knowledge gradient — Derivation
»Notation: μ_x^n and β_x^n = 1/σ_x^{2,n} are the mean and precision of our belief about x; β_W = 1/σ_W² is the measurement precision.
»We update the precision using
  β_x^{n+1} = β_x^n + β_W  (if we measure x; otherwise it is unchanged).
»In terms of the variance, this is the same as
  σ_x^{2,n+1} = ( 1/σ_x^{2,n} + 1/σ_W² )^{−1}

The knowledge gradient — Derivation
»The change in variance can be found to be
  σ̃_x^{2,n} = σ_x^{2,n} − σ_x^{2,n+1}
»Next compute the normalized influence:
  ζ_x^n = −| μ_x^n − max_{x'≠x} μ_{x'}^n | / σ̃_x^n
»Let f(ζ) = ζ Φ(ζ) + φ(ζ), where Φ and φ are the standard normal cdf and pdf.
»The knowledge gradient is computed using
  ν_x^{KG,n} = σ̃_x^n f(ζ_x^n)
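A sketch of the knowledge-gradient computation for independent normal beliefs, following the formulas above; the numerical example at the end is made up:

```python
import math

# Knowledge gradient for independent normal beliefs: sigma_tilde is the
# predictive change in the standard deviation, zeta the normalized
# influence, and f(z) = z*Phi(z) + phi(z).
def knowledge_gradient(mu, var, var_w):
    def phi(z):   # standard normal pdf
        return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    def Phi(z):   # standard normal cdf
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))
    kg = []
    for x in range(len(mu)):
        var_next = 1.0 / (1.0 / var[x] + 1.0 / var_w)   # posterior variance
        sigma_tilde = math.sqrt(var[x] - var_next)      # change in std. dev.
        best_other = max(mu[i] for i in range(len(mu)) if i != x)
        zeta = -abs(mu[x] - best_other) / sigma_tilde
        kg.append(sigma_tilde * (zeta * Phi(zeta) + phi(zeta)))
    return kg

mu = [4.0, 5.0, 4.9]      # current estimates (hypothetical)
var = [4.0, 1.0, 9.0]     # current belief variances (hypothetical)
kg = knowledge_gradient(mu, var, var_w=4.0)
print(max(range(3), key=lambda x: kg[x]))  # measure the largest-KG alternative
```

Note how the third alternative wins: its estimate is nearly tied with the best, and its large uncertainty means a measurement is likely to change the decision.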
The knowledge gradient — Properties of the knowledge gradient policy
»Effectively a myopic policy, but also similar to steepest ascent for nonlinear programming.
»The best single measurement you can make (by construction).
»Asymptotically optimal (a more difficult proof): as the measurement budget grows, we get the optimal solution.
»The knowledge gradient policy is the only stationary policy with both properties. Many policies are asymptotically optimal (e.g. pure exploration, hybrid exploration/exploitation, epsilon-greedy), but are not myopically optimal.
The knowledge gradient
(Figure: the value of the knowledge gradient as a function of the current estimate of the value of a decision and the current estimate of its standard deviation.)
The knowledge gradient — Experimental comparisons:
»KG vs. Boltzmann exploration, interval estimation, equal allocation, OCBA, pure exploitation, and linear loss LL(S).
(Figures: pairwise comparisons of KG against each competing policy.)
The knowledge gradient — Notes:
»KG slightly outperforms interval estimation (IE), OCBA and LL(S), and is easier to compute than OCBA and LL(S).
»KG is fairly easy to compute for independent, normally distributed rewards.
»But KG is a general concept which generalizes to other important problem classes:
Correlated beliefs
Correlated measurements (e.g. common random numbers)
On-line applications
… more general optimization problems
Outline: The knowledge gradient with correlated beliefs
Correlated beliefs — Applications
»Measurements of continuous functions
»Subset selection
»Multiattribute problems
CKG technique
»Animations on a line
»Subset selection illustration (diabetes?)
»EGO technique?
»Contrast with CKG
»KG for on-line learning
KG for more general applications
»On a graph
»LPs???
»KG with a physical state
Solution methods
Dynamic programming for pure learning (a knowledge state without a physical state)
»On-line learning: Gittins indices and the uncertainty bonus
The Gittins “index” adds an uncertainty bonus to the estimated value of a decision:
  ν_x = μ_x^n + σ_W · G(σ_x^{2,n}/σ_W², γ),
where μ_x^n is the estimated value of the decision and the second term is the “uncertainty bonus.”
Optimal measuring – uncorrelated
(Diagram: the knowledge gradient policy cycles between the measurement decision (collecting information), the measurement itself, the updated knowledge, and the economic decision (using the information).)
Solution methods — Generalizations
»Measurements may be correlated.
We may measure an object with a multidimensional attribute vector a. Measuring a tells us about an object with attribute a' if the two share common attributes.
Events on a line – a sensor at location x may provide a rough measurement at nearby locations. We may assume that measurements at x and y are correlated inversely with their distance.
Solution methods — Knowledge gradient adapted to on-line learning
»Finite horizon problems
»Infinite horizon problems
Examples of learning:
»Transportation
You just took a new job and there are different paths you can take to get to work. You have an idea how long each path is, but you do not know anything about traffic delays, waiting for subways/commuter trains, missed connections, or late service.
Information acquisition — Finding the best path to work
»Four paths, but every time I drive on one, I sample a new travel time.
»I want to choose the path that is best on average.
Information acquisition — What do we know?
»What we think:
  Path 1: 25 minutes
  Path 2: 24 minutes
  Path 3: 22 minutes
  Path 4: 20 minutes
»We act by choosing the path that we “think” is the best. The only way we learn anything new is by choosing a path.
Information acquisition — The shortest path game (game 1)
»Starting with the estimates at the top, choose paths so that you discover the best path.