Stochastic Linear Programming by Series of Monte-Carlo Estimators
Leonidas SAKALAUSKAS
Institute of Mathematics and Informatics, Vilnius, Lithuania

CONTENT
- Introduction
- Monte-Carlo estimators
- Stochastic differentiation
  - Dual solution approach (DS)
  - Finite difference approach (FD)
  - Simultaneous perturbation stochastic approximation (SPSA)
  - Likelihood ratio approach (LR)
- Numerical study of stochastic gradient estimators
- Stochastic optimization by series of Monte-Carlo estimators
- Numerical study of the stochastic optimization algorithm
- Conclusions

Introduction
We consider a stochastic approach to stochastic linear programming problems, distinguished by:
- adaptive regulation of the Monte-Carlo estimators
- a statistical termination procedure
- a stochastic ε-feasible direction approach to avoid "jamming" or "zigzagging" when solving a constrained problem

Two-stage stochastic programming problem with recourse
The expected objective is minimized subject to the feasible set, where W, T, h are in general random and defined by an absolutely continuous probability density.
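A standard statement of this problem, in notation assumed here rather than copied from the slide (cf. Shapiro and Homem-de-Mello (1998)):

\[
F(x) = c^{\top} x + \mathbf{E}\, Q(x, z) \;\to\; \min_{x \in D},
\]
\[
Q(x, z) = \min_{y \ge 0} \left\{ q^{\top} y : W y + T x = h \right\},
\qquad
D = \left\{ x \in \mathbb{R}^{n} : A x = b,\; x \ge 0 \right\},
\]

where z = (W, T, h, q) collects the random data.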

Monte-Carlo estimators of the objective function
Let a certain number N of scenarios be provided for some x, and the sampling estimator of the objective function, as well as the sampling variance, be computed as shown below.
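In the notation assumed here, with f(x, z) the sampled objective value under scenario z and z^1, ..., z^N the generated scenarios:

\[
\tilde{F}(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, z^{j}),
\qquad
\tilde{D}^{2}(x) = \frac{1}{N-1} \sum_{j=1}^{N} \left( f(x, z^{j}) - \tilde{F}(x) \right)^{2}.
\]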

The gradient
The Monte-Carlo estimator of the stochastic gradient, as well as the sampling covariance matrix, are evaluated using the same random sample.
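A standard form of these estimators (assumed notation; g(x, z) denotes a stochastic gradient, i.e. E g(x, z) = ∇F(x)):

\[
\tilde{G}(x) = \frac{1}{N} \sum_{j=1}^{N} g(x, z^{j}),
\qquad
A(x) = \frac{1}{N-1} \sum_{j=1}^{N} \left( g(x, z^{j}) - \tilde{G}(x) \right) \left( g(x, z^{j}) - \tilde{G}(x) \right)^{\top}.
\]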

Statistical testing of the optimality hypothesis under asymptotic normality
The optimality hypothesis is rejected if:
1) the statistical hypothesis that the gradient equals zero is rejected, or
2) the confidence interval of the objective function exceeds the admissible value.
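A plausible reconstruction of the test, following Sakalauskas (2002): by the multivariate-normal asymptotics of the gradient estimator, under the hypothesis ∇F(x) = 0 the Hotelling statistic is asymptotically Fisher distributed,

\[
T^{2} = N\, \tilde{G}^{\top} A^{-1} \tilde{G},
\qquad
\frac{N - n}{n\,(N - 1)}\; T^{2} \;\sim\; F(n,\, N - n),
\]

so optimality is rejected when this statistic exceeds the Fisher quantile F_α(n, N - n), or when the confidence interval of the objective, of order 2 t_β · D̃(x)/√N, exceeds the admissible accuracy ε.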

Stochastic differentiation
We examine several estimators of the stochastic gradient:
- Dual solution approach (DS)
- Finite difference approach (FD)
- Simultaneous perturbation stochastic approximation (SPSA)
- Likelihood ratio approach (LR)

Dual solution approach (DS)
The stochastic gradient is expressed through the set of solutions of the dual problem.
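For the two-stage formulation sketched above, the standard expression (assumed notation, not copied from the slide) uses the optimal dual solution u* of the second-stage problem:

\[
g(x, z) = c - T^{\top} u^{*}(x, z),
\qquad
u^{*}(x, z) \in \arg\max_{u} \left\{ u^{\top} (h - T x) : W^{\top} u \le q \right\}.
\]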

Finite difference (FD) approach
In this approach, each i-th component of the stochastic gradient is computed by a finite difference (see below), where e_i is the vector with all components zero except the i-th, which equals 1, and δ is a certain small value.
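In the notation above, the forward-difference estimator is:

\[
\tilde{G}_{i}(x) = \frac{\tilde{F}(x + \delta e_{i}) - \tilde{F}(x)}{\delta},
\qquad i = 1, \dots, n.
\]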

Simultaneous perturbation stochastic approximation (SPSA)
Here the perturbation (denoted Δ below) is a random vector whose components take the values 1 or -1 with probability p = 0.5 each, and δ is some small value (Spall (2003)).
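Spall's standard two-sided simultaneous-perturbation form, which estimates all n components from two evaluations of F̃:

\[
\tilde{G}_{i}(x) = \frac{\tilde{F}(x + \delta \Delta) - \tilde{F}(x - \delta \Delta)}{2\, \delta\, \Delta_{i}},
\qquad i = 1, \dots, n.
\]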

Likelihood ratio (LR) approach
(Rubinstein and Shapiro (1993); Sakalauskas (2002))
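The underlying identity, in the generic score-function form of Rubinstein and Shapiro (1993) (assuming the randomness can be expressed through a density p(x, ·) depending on the decision x): differentiation is moved from the integrand onto the density,

\[
\nabla_{x} F(x) = \nabla_{x} \int f(z)\, p(x, z)\, dz
= \mathbf{E} \left[ f(z)\, \nabla_{x} \ln p(x, z) \right].
\]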

Numerical study of stochastic gradient estimators (1)
The methods of stochastic differentiation listed above have been explored with test functions.

Numerical study of stochastic gradient estimators (2)
Stochastic gradient estimators from samples of size N (number of scenarios) were computed at the known optimum point x* (i.e., where the gradient vanishes) for test functions depending on n parameters. This was repeated 400 times, and the resulting sample of Hotelling statistics was analyzed according to two goodness-of-fit criteria (see the tables below).

Goodness-of-fit criterion by number of variables n and Monte-Carlo sample size N (critical value 0.46)

Goodness-of-fit criterion by number of variables n and Monte-Carlo sample size N (critical value 2.49)

Table: statistical criteria by Monte-Carlo sample size N for n = 40 variables (critical values 0.46 and 2.49)

Table: statistical criteria by Monte-Carlo sample size N for n = 60 variables (critical values 0.46 and 2.49)

Imties tūris, N ,5383, ,3927, ,793, ,271, ,130, ,070,39 Statistical criteria on Monte Carlo sample size N for number of variable n=80 (critical values 0,46 ir 2,49)

Numerical study of stochastic gradient estimators (8)
Conclusion: the distribution of the T²-statistic may be approximated by the Fisher law when the number of scenarios N exceeds a minimal sample size N_min that grows with the number of variables n (table: N_min by n).

Frequency of accepting the optimality hypothesis versus distance to the optimum (n=2)

Frequency of accepting the optimality hypothesis versus distance to the optimum (n=10)

Frequency of accepting the optimality hypothesis versus distance to the optimum (n=20)

Frequency of accepting the optimality hypothesis versus distance to the optimum (n=50)

Frequency of accepting the optimality hypothesis versus distance to the optimum (n=100)

Numerical study of stochastic gradient estimators (14)
Conclusion: stochastic differentiation by the Dual Solution and Finite Difference approaches enables us to reliably estimate the stochastic gradient; the SPSA and Likelihood Ratio approaches work under their own sample-size conditions.

Gradient search procedure
Let some initial point be chosen, a random sample of a certain initial size N^0 be generated at this point, and the Monte-Carlo estimators be computed. The iterative stochastic procedure of gradient search then projects each gradient step onto the ε-feasible set, as sketched below.
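In the notation used above, the iteration (a reconstruction following Sakalauskas (2002, 2004); ρ > 0 is a step length and π_{D_ε} denotes the projection onto the ε-feasible set D_ε) reads:

\[
x^{t+1} = \pi_{D_{\varepsilon}} \left( x^{t} - \rho\, \tilde{G}(x^{t}) \right),
\qquad t = 0, 1, \dots
\]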

The rule for choosing the number of scenarios
We propose a rule that regulates the number of scenarios adaptively (see the sketch below). The iterative stochastic search is thus performed until the statistical criteria no longer contradict the optimality conditions.
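A hedged reconstruction of the rule, following Sakalauskas (2002): the next sample size is taken roughly inversely proportional to the Hotelling quadratic form and clipped to [N_min, N_max], so that samples stay small far from the optimum and grow as the gradient estimate loses statistical significance:

\[
N^{t+1} = \min \left\{ \max \left\{ N_{\min},\;
\frac{n\, F_{\alpha}(n,\, N^{t} - n)}{\rho\, \tilde{G}^{\top} A^{-1} \tilde{G}} \right\},\; N_{\max} \right\}.
\]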

Linear convergence
Under certain conditions on the finiteness and smooth differentiability of the objective function, the proposed algorithm converges a.s. to a stationary point at a linear rate, where K, L, C, l are some constants (Sakalauskas (2002), (2004)).

Linear convergence (continued)
Since the Monte-Carlo sample size increases at the rate of a geometric progression, it follows that the total sampling effort is bounded. Conclusion: the proposed approach enables us to solve SP problems while computing the expected objective function only a finite number of times.
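To make the series-of-estimators scheme concrete, here is a minimal runnable sketch on a toy unconstrained problem (minimize E||x - ξ||² with ξ ~ N(0, I), whose per-scenario stochastic gradient is 2(x - ξ)); the step length, significance level, and sample-size rule are illustrative assumptions matching the hedged reconstructions above, not the author's exact implementation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5                                   # number of decision variables
x = rng.normal(size=n)                  # initial point x^0
N, N_min, N_max = 100, 100, 10000       # initial / minimal / maximal sample size
rho, alpha = 0.2, 0.05                  # step length and significance level

for t in range(200):
    xi = rng.normal(size=(N, n))        # N scenarios at the current point
    g = 2.0 * (x - xi)                  # per-scenario stochastic gradients
    G = g.mean(axis=0)                  # Monte-Carlo gradient estimator
    A = np.cov(g, rowvar=False)         # sampling covariance matrix
    quad = G @ np.linalg.solve(A, G)    # quadratic form G' A^{-1} G
    stat = (N - n) / (n * (N - 1)) * N * quad   # Fisher-scaled Hotelling statistic
    F_crit = stats.f.ppf(1 - alpha, n, N - n)
    if stat < F_crit:                   # optimality hypothesis not rejected: stop
        break
    x = x - rho * G                     # gradient step (no projection: unconstrained toy)
    # next sample size: inversely proportional to the quadratic form, clipped
    N = int(np.clip(n * F_crit / (rho * quad), N_min, N_max))

print(f"stopped at iteration {t} with N = {N}, x = {x}")

Coupling the stopping test and the sample-size rule to the same Hotelling quadratic form is the design choice that keeps the total number of simulated scenarios within a small multiple of the cost of one accurate evaluation of the expected objective.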

Numerical study of the stochastic optimization algorithm
Test problems from the database of two-stage stochastic linear optimization problems (l1/20x20.1/) have been solved. The dimensionality of the tasks ranges from n = 20 to n = 80 (30 to 120 variables at the second stage). All solutions given in the database were achieved, and in a number of cases we succeeded in improving the known solutions, especially for large numbers of variables.

Two-stage stochastic programming problem (n = 20)
The estimate of the optimal value of the objective function given in the database was improved. Initial data: N^0 = N_min = 100, N_max = 10000; a maximal number of iterations was fixed, and the generation of trials was terminated when the estimated confidence interval of the objective function no longer exceeded the admissible value. The solution was repeated 500 times.

Frequency of stopping versus number of iterations and admissible confidence interval

Change of the objective function versus number of iterations and admissible interval

Change of the confidence interval versus number of iterations and admissible interval

Change of the Hotelling statistic versus admissible interval

Change of the Monte-Carlo sample size versus number of iterations and admissible interval

Ratio versus admissible interval (1)

Ratio versus admissible interval (2). Table: accuracy and objective function.

Solving DB Test Problems (1)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. The solution given in the DB is compared with the solution by the developed algorithm.

Solving DB Test Problems (2)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. The solution given in the DB is compared with the solution by the developed algorithm.

Solving DB Test Problems (3)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. The solution given in the DB is compared with the solution by the developed algorithm.

Comparison with Benders decomposition

Conclusions
- A stochastic iterative method has been developed to solve SLP problems by a finite sequence of Monte-Carlo sampling estimators.
- The presented approach rests on the statistical termination procedure and the adaptive regulation of the Monte-Carlo sample size.
- The computational results show that the developed approach provides estimators for reliable solution and testing of the optimality hypothesis over a wide range of SLP problem dimensionalities (2 < n < 100).
- The developed approach enables us to generate an almost unbounded number of scenarios and to solve SLP problems to admissible accuracy.
- The total volume of computations for solving an SLP exceeds by only several times the volume of scenarios needed to evaluate one value of the expected objective function.

References
Rubinstein, R., and Shapiro, A. (1993). Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. Wiley & Sons, N.Y.
Shapiro, A., and Homem-de-Mello, T. (1998). A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming, 81.
Sakalauskas, L. (2002). Nonlinear stochastic programming by Monte-Carlo estimators. European Journal of Operational Research, 137.
Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. J. Wiley & Sons.
Sakalauskas, L. (2004). Application of the Monte-Carlo method to nonlinear stochastic optimization with linear constraints. Informatica, 15(2).
Sakalauskas, L. (2006). Towards implementable nonlinear stochastic programming. In K. Marti et al. (Eds.), Coping with Uncertainty. Springer Verlag.

Announcements
Welcome to the EURO Mini Conference "Continuous Optimization and Knowledge Based Technologies" (EUROPT-2008), May 20-23, 2008, Neringa, Lithuania.