Experimental Algorithmics Reading Group, UBC CS. Presented paper: "Fine-tuning of Algorithms Using Fractional Experimental Designs and Local Search" by Belarmino Adenso-Díaz and Manuel Laguna.

Presentation transcript:

Slide 1: Experimental Algorithmics Reading Group, UBC CS
Presented paper: "Fine-tuning of Algorithms Using Fractional Experimental Designs and Local Search" by Belarmino Adenso-Díaz (Barcelona) and Manuel Laguna (Colorado), OR Journal 2006
Presenter: Frank Hutter, 23 Aug 2006

Slide 2: Motivation (Fine-tuning of algorithms, 23 Aug 2006)
Anecdotal evidence suggests that of the total time for designing and testing a new (meta-)heuristic:
- 10% is spent on development
- 90% is spent on fine-tuning parameters
- (In my opinion, 90% may be a little high)
- If you ever see real statistics about this, please let me know!

Slide 3: Motivation (2)
Barr et al. (1995) (we read this Nov 2004):
"The selection of parameter values that drive heuristics is itself a scientific endeavor and deserves more attention than it has received in the operations research literature. This is an area where the scientific method and statistical analysis could and should be employed."

Slide 4: Motivation (3)
Parameter tuning in the OR literature:
1) "Parameter values have been established experimentally" (without stating the procedure)
2) Parameters are just given without explanation, often differing across problem classes or even per instance
3) Parameter values are reused that were previously determined to be effective (simulated annealing, Guided Local Search for MPE)
4) Sometimes the employed experimental design is stated

Slide 5: Objective function to be minimized
- Runtime for solving a training set of decision problems (they only do one run per instance and, perhaps wrongly, refer to runs on different instances as replications)
- For optimization algorithms: average deviation from the optimal solution
- In general: some combination of speed and accuracy
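As a hypothetical sketch of "some combination of speed and accuracy" folded into a single objective to minimize: `run_algorithm`, the instance format, and the weighting scheme below are my assumptions for illustration, not from the paper.

```python
def objective(params, instances, run_algorithm, weight=0.5):
    """Weighted sum of mean runtime and mean relative deviation
    from the best known solution (lower is better)."""
    runtimes, deviations = [], []
    for inst in instances:
        result = run_algorithm(params, inst)  # one run per instance
        runtimes.append(result["runtime"])
        deviations.append(
            (result["cost"] - inst["best_known"]) / inst["best_known"]
        )
    return (weight * sum(runtimes) / len(runtimes)
            + (1 - weight) * sum(deviations) / len(deviations))
```

The weight would in practice be chosen per application; for pure decision problems the deviation term drops out.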

Slide 6: Design of experiments
Includes:
1) The set of treatments included in the study
2) The set of experimental units included in the study
3) The rules and procedures by which treatments are assigned to experimental units
4) Analysis (measurements made on the experimental units after the treatments have been applied)

Slide 7: Different designs
Full factorial experimental design:
- 2^k factorial: 2 levels (critical values) per variable
- 3^k factorial
Fractional factorial experiment:
- Orthogonal array with n=8 runs, k=5 factors, s=2 levels and strength t=2
- An n x k array with entries 0 to s-1 and the property that in any t columns the s^t possible level combinations appear equally often (the projections to lower dimensions are balanced)
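The balance property can be checked mechanically. The sketch below builds one standard n=8, k=5, s=2, t=2 orthogonal array (a 2^(5-2) fraction; this particular construction is my choice, the slide only states the parameters) and verifies the strength condition:

```python
from itertools import combinations, product

def check_strength(array, t, s=2):
    """True iff in every selection of t columns, each of the s**t
    level combinations appears equally often."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        counts = {}
        for row in array:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != s ** t or len(set(counts.values())) != 1:
            return False
    return True

# Full 2^3 factorial in (a, b, c) plus generator columns
# d = a XOR b and e = a XOR c gives an OA(8, 5, 2, 2).
oa = [(a, b, c, a ^ b, a ^ c) for a, b, c in product([0, 1], repeat=2 + 1)]
```

Note the array has strength 2 but not 3: column d is fully determined by a and b, so the three-column projection (a, b, d) is not balanced.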

Slide 8: Aside: Taguchi design of experiments
Genichi Taguchi:
- Robust parameter design
- Set controllable parameters to achieve maximum output with low variance
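The "maximum output with low variance" goal is usually operationalized through Taguchi's signal-to-noise (S/N) ratios. These are standard textbook formulas, not something stated in this talk; `y` is a list of replicate measurements for one parameter setting, and a larger S/N is better:

```python
import math

def sn_larger_the_better(y):
    # maximize output: -10 * log10( mean(1 / y_i^2) )
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

def sn_nominal_the_best(y):
    # hit a target with low variance: 10 * log10( mean^2 / variance )
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)
```

Settings are then ranked by S/N rather than by the raw mean, so a setting with a slightly worse mean but much smaller variance can win.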

Slide 9: Taguchi design applied here
L9(3^4) is a design with nine runs and 4 variables with 3 levels each, of strength 2 (for each pair of variables, each of the 9 level combinations occurs exactly once)
Based on this, you can estimate the "optimal condition", even if it is not one of the 9 runs performed (how? separate topic)
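The "how?" left open on the slide is usually answered with a main-effects analysis: because the array is balanced, averaging the objective over the three runs at each level isolates each factor's effect, and the predicted optimum is the combination of per-factor best levels. A sketch (my illustration of the standard technique, not the paper's code):

```python
# The L9(3^4) orthogonal array, levels coded 0-2: in any two columns,
# each of the 9 level pairs occurs exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def estimate_optimum(results):
    """results[i] is the objective (lower is better) of run L9[i].
    For each factor, average over the three runs at each level and
    keep the best level; the predicted 'optimal condition' need not
    be one of the nine runs actually performed."""
    best = []
    for factor in range(4):
        means = [
            sum(y for row, y in zip(L9, results) if row[factor] == lev) / 3
            for lev in range(3)
        ]
        best.append(min(range(3), key=means.__getitem__))
    return tuple(best)
```

If the true response is additive in the factors, the balance of the array makes these level means exact up to a shared constant, so the prediction recovers the true optimum; interactions between factors bias it.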

Slide 10: The CALIBRA software
Limited to 5 parameters
Starts with a full factorial two-level design (2^5 = 32 runs):
- Uses the 25% and 75% "quantiles" of each parameter as levels
- Fixes the parameter with the least significant main effect to its best value
From then on, performs a "local search":
- Choose 3 levels around the last best setting
- Run an L9(3^4) Taguchi design
- Narrow down the levels around the best predicted solution
When a local optimum is reached:
- Build a new starting point for the local search by combining previous local optima and previous worst solutions
- This is meant to trade off exploration and exploitation but seems fairly ad hoc
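The narrowing local-search loop can be sketched roughly as follows. This is my own stand-in, not CALIBRA's actual implementation: it enumerates all combinations of three levels per parameter instead of running an L9 design, and uses a simple geometric shrinking of the search box:

```python
import itertools

def narrowing_search(objective, lows, highs, iterations=10):
    """Repeatedly try three levels per parameter around the incumbent,
    then shrink the search box around the best point found."""
    best = [(lo + hi) / 2 for lo, hi in zip(lows, highs)]
    best_val = objective(best)
    for _ in range(iterations):
        half = [(hi - lo) / 4 for lo, hi in zip(lows, highs)]
        # three levels per parameter, centered on the incumbent
        levels = [(b - h, b, b + h) for b, h in zip(best, half)]
        for point in itertools.product(*levels):
            val = objective(list(point))
            if val < best_val:
                best, best_val = list(point), val
        # narrow the box around the (possibly new) incumbent
        lows = [b - h for b, h in zip(best, half)]
        highs = [b + h for b, h in zip(best, half)]
    return best, best_val
```

The full enumeration costs 3^k runs per iteration; the point of the L9 design in CALIBRA is to get the same kind of decision from only 9 runs for 4 parameters.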

Slide 11: The CALIBRA software (2)
- Windows only
- Requires the algorithm to be tuned as a .exe file (just write a .exe wrapper)

Slide 12: The CALIBRA software (3)
- The objective function can be based on multiple instances
- This has to be dealt with inside the algorithm (well, in the wrapper)

Slide 13: The CALIBRA software: live demo
- Let's hope it works...
- They do some caching that is not mentioned in the paper

Slide 14: Backup in case the demo doesn't work

Slide 15: Experimental analysis
- Pretty straightforward
- MAXEX is a parameter of major importance!
- They do a little better than the manually found parameter settings (or those found with Taguchi designs)
- For these domains, there is not too much promise in per-instance tuning (Table 5 compared to Table 2)
- Figure 9 vs. Figure 10 probably only shows that their performance metric means different things in different domains

Slide 16: Points for improvement
Objective function evaluation requires solving many instances (possibly many times):
- This takes a lot of time even if results are abysmal
- Evaluation could be stopped once it is (statistically) clear that the result won't beat the best one we already have
"CALIBRA should be more effective in situations when the interactions among parameters are negligible"
- But then you really don't need anything like this!
- Related work (DACE) builds a model of the whole response surface; I expect that to work better
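The early-stopping idea in the first bullet can be sketched as follows. This is a hypothetical illustration using a deterministic cutoff rather than the statistical test the slide suggests; it assumes per-run costs are nonnegative, so the running total can only grow:

```python
def evaluate_with_cutoff(run_cost, instances, incumbent_total):
    """run_cost(instance) -> nonnegative cost of one run.
    Returns (total, finished): finished is False if the evaluation
    was aborted because it could no longer beat the incumbent."""
    total = 0.0
    for inst in instances:
        total += run_cost(inst)
        if total >= incumbent_total:  # can only get worse: stop now
            return total, False
    return total, True
```

With noisy runs, the same idea becomes a statistical race (stop once a confidence bound excludes beating the incumbent), which is what later racing-style configurators do.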