
Diversity Loss in General Estimation of Distribution Algorithms. J. L. Shapiro. PPSN (Parallel Problem Solving From Nature) '06. BISCuit 2nd EDA Seminar.


1 Diversity Loss in General Estimation of Distribution Algorithms. J. L. Shapiro. PPSN (Parallel Problem Solving From Nature) '06. BISCuit 2nd EDA Seminar.

2 Abstract SML-EDA: a class of EDAs for which universal results on the rate of diversity loss can be derived. The class requires two restrictions:  In each generation, the new probability model is built using only data sampled from the current probability model.  Maximum likelihood is used to set the model parameters. On the needle-in-a-haystack problem, an algorithm in this class will almost never find the optimum unless the population size grows exponentially with the number of variables.

3 EDA
Initialize the EDA probability model (start with a random population of M vectors).
Repeat:
  Select N vectors using a selection method.
  Learn a new probability model from the selected population.
  Sample M vectors from the probability model.
Until the stopping criteria are satisfied.
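The loop above can be sketched in Python. A univariate (UMDA-style) model and truncation selection are assumed here purely for illustration; any model class and selection method fit the same template.

```python
import numpy as np

def umda(fitness, L, M=100, N=50, generations=100, rng=None):
    """Generic EDA loop; a univariate ML model (UMDA) is assumed for illustration."""
    rng = np.random.default_rng(rng)
    # Start with a random population of M binary vectors of length L.
    pop = rng.integers(0, 2, size=(M, L))
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Select N vectors (truncation selection: keep the N fittest).
        selected = pop[np.argsort([fitness(x) for x in pop])[-N:]]
        # Learn a new probability model by maximum likelihood:
        # for a univariate model this is just the per-bit frequency.
        p = selected.mean(axis=0)
        # Sample M new vectors from the probability model.
        pop = (rng.random((M, L)) < p).astype(int)
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best
```

For example, `umda(lambda x: int(x.sum()), L=20)` runs the loop on the onemax function.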

4 SML-EDA The restricted class of EDAs is defined via three assumptions:  1. Data in generation t can affect data in generation t+1 only through the probability model.  2. The parameters of the estimated model are chosen by maximum likelihood.  3. The sample size M and the selected-population size N are in a constant ratio, independent of the number of variables L. SML-EDA (Simple, Maximum-Likelihood EDA)  The class of EDAs for which assumptions 1 and 2 hold are called SML-EDAs.  This class includes BOA, MIMIC, FDA, UMDA, etc.

5 Some definitions for diversity loss
Empirical frequency: the fraction of the population in which component i takes the value A,
  p̂_i(A) = (1/M) Σ_μ δ(x^μ_i, A),
where x^μ is population member μ and M is the population size.
One diversity measure is v, the trace of the empirical covariance matrix; its diagonal terms compare the probability that two population members both take the value A at component i against the probability that they would do so independently. For binary variables this reduces to
  v = Σ_i p̂_i (1 - p̂_i).
v = 0 when every component has fixated; v takes its maximum value (L/4 in expectation) for a uniformly random population.
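These quantities are straightforward to compute. The sketch below assumes binary variables, for which the trace of the empirical covariance matrix reduces to the sum of per-component variances p̂_i(1 - p̂_i).

```python
import numpy as np

def empirical_frequencies(pop):
    """p_hat[i]: fraction of the population whose component i equals 1."""
    return pop.mean(axis=0)

def diversity(pop):
    """Trace of the empirical covariance matrix for a binary population:
    v = sum_i p_hat_i * (1 - p_hat_i).
    v == 0 at fixation; maximal for a uniformly random population."""
    p = empirical_frequencies(pop)
    return float(np.sum(p * (1.0 - p)))
```

A fully fixated population gives v = 0, while a population split evenly on each bit gives v = L/4.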

6 Diversity Loss on a Flat Fitness Landscape Theorem 1  For EDAs in class SML-EDA on a flat landscape, the expected value of v is reduced in each generation by
  E[v_{t+1}] = (1 - 1/N) E[v_t].
 Because the parameters are set by ML, the empirical variance is reduced by a factor (1 - 1/N) relative to the parent population. This is a universal "expected diversity loss" for all SML-EDAs:  E[v_t] = (1 - 1/N)^t v_0 ≈ v_0 e^{-t/N}, so diversity decays with characteristic time approximately equal to the population size for large N.
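Theorem 1's decay factor can be observed numerically. The sketch below runs a univariate ML model on a flat landscape, where selection has no effect (so N = M), and pools the per-generation ratio of diversities; the measured ratio should be close to 1 - 1/M. The setup is an illustrative assumption, not the paper's experiment.

```python
import numpy as np

def flat_landscape_decay(L=20, M=50, generations=30, trials=200, seed=0):
    """Estimate the per-generation diversity decay factor for a
    univariate ML model (a member of SML-EDA) on a flat landscape."""
    rng = np.random.default_rng(seed)
    num = den = 0.0
    for _ in range(trials):
        pop = rng.integers(0, 2, size=(M, L))      # random initial population
        for _ in range(generations):
            p = pop.mean(axis=0)                   # ML model parameters
            den += np.sum(p * (1 - p))             # diversity v before resampling
            pop = (rng.random((M, L)) < p).astype(int)  # sample M new vectors
            q = pop.mean(axis=0)
            num += np.sum(q * (1 - q))             # diversity v after resampling
    return float(num / den)                        # expected to approach 1 - 1/M
```

With M = 50 the pooled ratio comes out near 1 - 1/50 = 0.98, matching the theorem.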

7 A Universal Bound for the Minimum Population Size in the Needle Problem Needle-in-a-haystack problem  There is one special state (the needle) with a high fitness value; all other states share the same low fitness value.
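A minimal encoding of this fitness function (the names and default values are illustrative):

```python
import numpy as np

def needle_fitness(needle, high=1.0, low=0.0):
    """Needle-in-a-haystack: the one special state (the needle) has high
    fitness; every other state shares the same low fitness."""
    needle = np.asarray(needle)
    def fitness(x):
        return high if np.array_equal(np.asarray(x), needle) else low
    return fitness
```

Since all non-needle states are indistinguishable, the search behaves exactly as on a flat landscape until the needle is first sampled; this is what makes Theorem 1 applicable to the analysis that follows.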

8 Theorem 2  Let P_ns be the probability that the needle is never sampled, and t_s the time at which the needle is first sampled.  In the limit N → ∞, L → ∞ such that N^2 / 2^L → 0, P_ns → 1 for any EDA in SML-EDA searching on the needle problem.  The population size must therefore grow at least as fast as 2^{L/2} for the optimum to be found.

9 Proof of Theorem 2  Lemma 1  Let t* be the time by which the expected diversity E[v_t] = (1 - 1/N)^t v_0 has decayed far enough that the population is almost surely fixated in all but one component. If the needle has not been found after a time t > t*, the probability that the needle will never be found is greater than 1 - ε, where ε vanishes in the limit of Theorem 2.

10  Proof of Lemma 1  Since E[v_t] decays geometrically, the probability that v_t is still large at time t can be bounded (Markov's inequality).  Choose t* large enough that, if v_t is below the corresponding threshold, there must be fixation at every component except possibly one. If L - 1 components are fixed, the needle can only be sampled if they are fixed at the values found in the needle. We are not certain that v_t is appropriately small, only that it is probable; this is where the ε comes from.  t* is then calculated by solving these two conditions simultaneously.

11  Lemma 2  The probability that the needle is not found after t* steps obeys
  P(t_s > t*) ≥ 1 - M t* / 2^L.
 Proof: until the needle is first sampled, the algorithm behaves as on a flat landscape, and by symmetry each of the M t* vectors sampled up to time t* coincides with the needle with probability 2^{-L}; a union bound over all samples gives the result. The bound used in Theorem 2 follows by substituting the value of t* into this expression.

12  So, the probability of never finding the needle can be decomposed as
  P(never found) ≥ P(t_s > t*) · P(never found | t_s > t*).
 Combining Lemmas 1 and 2 gives
  P(never found) ≥ (1 - ε)(1 - M t* / 2^L) ≥ 1 - ε - M t* / 2^L.
 If, in the limit, N grows sufficiently slowly that the third term vanishes, then the probability of never finding the needle goes to 1.  Thus, if N^2 / 2^L → 0 as L → ∞, the needle will almost surely never be found.

13 Expected Runtime for the Needle Problem and the Limits of Universality Corollary 1  If the needle is found, the time to find it is bounded above by t* with probability approaching 1 in the limit of Theorem 2. Convergence conditions (when the needle is found) will not be universal; they will be particular to the EDA.

