
# Ordinal Optimization

Copyright by Yu-Chi Ho

## Some Additional Issues

- Design-dependent estimation noise vs. observation noise
- Goal softening to increase the alignment probability
- Constrained ordinal optimization

## Goal Softening

- Performance distribution: N(0, σ_signal)
- Additive noise: N(0, σ_noise)
- Number of samples taken: N = 100
- The selected set S: the best of the N samples, i.e., |S| = 1
- The good enough set G: the top 5%
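The setup above can be checked by simulation. The sketch below is an illustration, not from the slides: it assumes smaller performance values are better, uses unit variances by default, and estimates the alignment probability P(the observed best lies in the true top 5%) by Monte Carlo. The function name `alignment_prob` is hypothetical.

```python
import random

def alignment_prob(n_samples=100, sigma_signal=1.0, sigma_noise=1.0,
                   top_frac=0.05, trials=10000, seed=0):
    """Monte Carlo estimate of P(observed best design is in the true
    top `top_frac` of the N sampled designs). Illustrative sketch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # true performances ~ N(0, sigma_signal); observations add N(0, sigma_noise)
        true_perf = [rng.gauss(0.0, sigma_signal) for _ in range(n_samples)]
        observed = [j + rng.gauss(0.0, sigma_noise) for j in true_perf]
        # selected set S: the single design with the best (smallest) observation
        b = min(range(n_samples), key=lambda i: observed[i])
        # good enough set G: the true top 5% of the N samples
        cutoff = sorted(true_perf)[max(1, int(top_frac * n_samples)) - 1]
        hits += true_perf[b] <= cutoff
    return hits / trials
```

With |S| = 1 and noise as large as the signal, the estimate still lands far above the 5% that blind guessing would give, which is the point of softening the goal to a top-5% set.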

## Goal Softening (contd.)

# Advanced Ordinal Optimization: Breadth vs. Depth in Searches

Y. C. Ho & X. C. Lin, October 2001, Tsinghua University, China

## Fundamental Difficulties in the Design of Complex Systems

1. Time-consuming performance evaluation via simulation (the 1/√T limit)
2. Combinatorially large search space (the NP-hard limit)
3. Lack of structure (the No Free Lunch Theorem limit)
4. Finite computational resources

## Premises

- It is a finite world!
- Computing resources are finite

## Breadth vs. Depth in Stochastic Optimization

- At any point in an optimization process, the question is: "How do I spend my next unit of computing resources?"
  1. Making sure that the current estimated best is indeed the current best (local, or depth)
  2. Exploring additional points to find a better solution (global, or breadth)
- Trade off between these two sub-goals to make the best use of finite resources

## The "Marriage" Metaphor

- You have limited resources in time, money, and energy
- You know a given set of possible candidates, with imperfect information on each candidate
- You also have a preference order on these candidates based on the imperfect information
- You can spend the next unit of resources to get better information (=> a better ordering) about the candidates
- Or you can spend the next unit of resources to get more (possibly better) samples, i.e., meet new people
- You can quantify this trade-off

## Definitions and Problem Statement

- {θ_1, ..., θ_M}: the M designs sampled
- θ_b: best observed design among the M designs
- θ*: true best design among the M designs
- N_i: computing resources to be spent on θ_i
- T: total computing resources available
- G: the good enough set
- Problem (P): max P(θ_b ∈ G) subject to Σ_i N_i = T

## Decomposition of (P)

P(θ_b ∈ G) ≥ P(θ_b = θ*) P(θ* ∈ G) + P(θ_b ≠ θ*, θ_b ∈ G)

- P(θ_b = θ*): the probability that the observed best is indeed the true best sampled (Depth)
- P(θ* ∈ G): the probability that the true best sampled is good enough (Breadth)

## "Depth" in Searches

- Each observed performance can be modeled as a Gaussian random variable with a mean and a variance
- The variance depends on the computing resources allocated
- P(J_obs(θ_b) < J_obs(θ_i)) can be easily calculated as a function of the resources allocated to each observation
- P(θ_b = θ*) can be easily lower bounded by Π_{i≠b} P(J_obs(θ_b) < J_obs(θ_i))
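A minimal sketch of the bound above, under the illustrative assumption (not stated on the slide) that the observed mean of design i is Gaussian with variance σ_i² / N_i after N_i units of computing, and that observations are independent. The names `depth_lower_bound` and the argument layout are hypothetical.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def depth_lower_bound(means, sigmas2, n_alloc, b):
    """Product of pairwise probabilities P(J_obs(theta_b) < J_obs(theta_i)),
    used on the slide as a lower bound on P(theta_b = theta*).
    means[i], sigmas2[i]: per-design mean and variance; n_alloc[i]: resources N_i."""
    var_b = sigmas2[b] / n_alloc[b]
    bound = 1.0
    for i, (mu, s2, n) in enumerate(zip(means, sigmas2, n_alloc)):
        if i == b:
            continue
        var_i = s2 / n
        # P(X_b < X_i) for independent Gaussians X_b, X_i
        bound *= normal_cdf((mu - means[b]) / math.sqrt(var_b + var_i))
    return bound
```

Adding resources to any N_i shrinks the corresponding variance, pushes each factor toward 1, and so tightens the depth bound, which is how depth spending is valued against breadth spending later in the deck.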

## "Breadth" in Searches

- Define g = |G|/|Θ|; e.g., g = 0.05 means G is the top 5%
- P(θ* ∈ G) = 1 − P(θ* ∉ G) = 1 − (1 − g)^M
- P(θ* ∈ G), as a function of the number of samples M taken, can be easily evaluated
- More advanced adaptive sampling is possible, e.g., nested partition search
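The breadth formula above is a one-liner; the function name here is hypothetical but the expression is exactly the slide's 1 − (1 − g)^M under uniform sampling.

```python
def breadth_prob(M, g=0.05):
    """P(theta* in G) = 1 - (1 - g)^M when M designs are sampled
    uniformly and G is the top-g fraction of the whole design space."""
    return 1.0 - (1.0 - g) ** M
```

For example, with g = 0.05, a hundred uniform samples already put the true best sample inside the good enough set with probability above 0.99, which is why breadth gains flatten out quickly as M grows.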

## Evaluating "g" More Generally

- Samples are taken uniformly within each partition but not uniformly over Θ
  - Let there be partitions R_i, i = 1, ..., k, with ∪_i R_i = Θ
  - g_i ≜ |G ∩ R_i| / |R_i|; M_i ≜ number of samples taken in R_i; L_i ≜ number of observed top designs in R_i
- P(θ* ∈ G) = 1 − Π_i (1 − g_i)^{M_i}
- ĝ_i = g · (L_i / |R_i|) / (L / |Θ|)
- One can prove that ĝ_i is unbiased and consistent
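The two formulas above can be sketched directly. This is an illustration with hypothetical function names; it assumes `g_est[i]` and `M[i]` hold the per-partition quantities ĝ_i and M_i, and that L is the total count of observed top designs over all partitions.

```python
def stratified_breadth_prob(g_est, M):
    """P(theta* in G) = 1 - prod_i (1 - g_i)^{M_i} under non-uniform
    sampling over partitions R_1..R_k."""
    p_miss = 1.0
    for gi, mi in zip(g_est, M):
        p_miss *= (1.0 - gi) ** mi
    return 1.0 - p_miss

def estimate_gi(g, L_i, size_Ri, L, size_Theta):
    """Sketch of the slide's estimator g_i ~= g * (L_i/|R_i|) / (L/|Theta|):
    scale the overall good-enough fraction g by how over- or under-represented
    observed top designs are in partition R_i relative to the whole space."""
    return g * (L_i / size_Ri) / (L / size_Theta)
```

When every partition has the same good-enough density (ĝ_i = g for all i), the stratified expression collapses back to the uniform formula 1 − (1 − g)^M with M = Σ_i M_i, which is a quick sanity check on the reconstruction.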

## Breadth vs. Depth (contd.)

- T = computing resources used so far
- Depth => P(observed best so far = actual best so far), i.e., P(θ_b = θ*) ≡ P_d(T)
- Breadth => P(actual best so far ∈ good enough set G), i.e., P(θ* ∈ G) ≡ P_b(T)
- Compare the marginal increases of depth and breadth

## General and Unified Framework

At the current state of the search, with M samples of observed performances J_obs(θ_i), i = 1, ..., M:

1. Calculate the marginal rates r_d and r_b
2. Decide between depth (spend the next unit on better estimates of the performances) and breadth (spend it on more samples)
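The decision step above reduces to comparing the two marginal rates. A minimal sketch, assuming r_d and r_b have already been estimated (e.g., as finite differences of P_d(T) and P_b(T)); the function name and the tie-breaking toward depth are hypothetical choices, not from the slides.

```python
def allocate_next_unit(r_d, r_b):
    """Spend the next unit of computing on whichever sub-goal has the larger
    marginal gain: 'depth' (refine current performance estimates) when
    r_d >= r_b, otherwise 'breadth' (draw more samples)."""
    return "depth" if r_d >= r_b else "breadth"
```

In a full search loop this comparison is re-evaluated after every unit spent, since taking a depth step changes r_d (variances shrink) and taking a breadth step changes r_b (the (1 − g)^M term decays).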

## Applications

- Test problems
- Apparel manufacturing problem
- Stock option problem
- See X. C. Lin, "A New Approach to Stochastic Optimization," Ph.D. thesis, Harvard University, Oct. 2000
