1 Stochastic Linear Programming by Series of Monte-Carlo Estimators. Leonidas SAKALAUSKAS, Institute of Mathematics & Informatics, Vilnius, Lithuania. E-mail:

2 CONTENT
Introduction
Monte-Carlo estimators
Stochastic differentiation
 Dual solution approach (DS)
 Finite difference approach (FD)
 Simultaneous perturbation stochastic approximation (SPSA)
 Likelihood ratio approach (LR)
Numerical study of stochastic gradient estimators
Stochastic optimization by series of Monte-Carlo estimators
Numerical study of stochastic optimization algorithm
Conclusions

3 Introduction
We consider a stochastic approach to stochastic linear problems which is distinguished by
 adaptive regulation of the Monte-Carlo estimators
 a statistical termination procedure
 a stochastic ε-feasible direction approach to avoid "jamming" or "zigzagging" when solving a constrained problem

4 Two-stage stochastic programming problem with recourse
The expected objective is minimized subject to the feasible set, where W, T, h are in general random and defined by an absolutely continuous probability density.
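In standard notation this is (a reconstruction of the slide formulas; c, q, A, b denote the deterministic first-stage data assumed here):

    \min_{x \in D} F(x) = c^\top x + \mathbf{E}\, Q(x, \xi),
    \qquad D = \{\, x : A x = b,\ x \ge 0 \,\},

    Q(x, \xi) = \min_y \{\, q^\top y : W y = h - T x,\ y \ge 0 \,\}.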

5 Monte-Carlo estimators of the objective function
Suppose a certain number N of scenarios is provided at some point; then the sampling estimator of the objective function as well as the sampling variance are computed.
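With f(x, ξ) denoting the random objective value of one scenario, the standard sampling estimators (a reconstruction consistent with the text) are:

    \hat{F}(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, \xi^j),
    \qquad
    \hat{D}^2(x) = \frac{1}{N-1} \sum_{j=1}^{N} \bigl( f(x, \xi^j) - \hat{F}(x) \bigr)^2 .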

6 Monte-Carlo estimators of the stochastic gradient
The Monte-Carlo estimator of the stochastic gradient as well as the sampling covariance matrix are evaluated using the same random sample.
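Assuming g(x, ξ) denotes a per-scenario stochastic gradient with E g(x, ξ) = ∇F(x), the corresponding estimators read:

    \hat{g}(x) = \frac{1}{N} \sum_{j=1}^{N} g(x, \xi^j),
    \qquad
    A(x) = \frac{1}{N-1} \sum_{j=1}^{N} \bigl( g(x, \xi^j) - \hat{g}(x) \bigr) \bigl( g(x, \xi^j) - \hat{g}(x) \bigr)^\top .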

7 Statistical testing of the optimality hypothesis under asymptotic normality
The optimality hypothesis is rejected if
1) the statistical hypothesis of equality of the gradient to zero is rejected, or
2) the confidence interval of the objective function exceeds the admissible value.
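A minimal sketch of this two-part test in Python (numpy/scipy assumed; the function and argument names are hypothetical, and the exact quantile levels are assumptions):

    import numpy as np
    from scipy.stats import f as fisher_f, norm

    def optimality_rejected(grad_sample, obj_sample, eps, alpha=0.05, beta=0.05):
        """Reject optimality if (1) the Hotelling T^2 test rejects 'gradient = 0'
        or (2) the objective confidence interval exceeds the admissible value eps.
        grad_sample: (N, n) array of per-scenario gradients; requires N > n."""
        N, n = grad_sample.shape
        g = grad_sample.mean(axis=0)             # sampling gradient estimator
        S = np.cov(grad_sample, rowvar=False)    # sampling covariance matrix
        T2 = N * g @ np.linalg.solve(S, g)       # Hotelling T^2 statistic
        # Under H0 and asymptotic normality, (N-n)/(n(N-1)) * T^2 ~ F(n, N-n)
        grad_nonzero = (N - n) / (n * (N - 1)) * T2 > fisher_f.ppf(1 - alpha, n, N - n)
        half = norm.ppf(1 - beta / 2) * obj_sample.std(ddof=1) / np.sqrt(N)
        return grad_nonzero or (2 * half > eps)  # CI width vs admissible value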

8 Stochastic differentiation
We examine several estimators of the stochastic gradient:
 Dual solution approach (DS);
 Finite difference approach (FD);
 Simultaneous perturbation stochastic approximation (SPSA);
 Likelihood ratio approach (LR).

9 Dual solution approach (DS)
The stochastic gradient is expressed through the set of solutions of the dual second-stage problem.
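For the two-stage problem above this takes the standard form (a reconstruction; u* is an optimal dual solution of the second-stage LP):

    g(x, \xi) = c - T^\top u^*(x, \xi),
    \qquad
    u^*(x, \xi) \in \arg\max_u \{\, u^\top (h - T x) : W^\top u \le q \,\}.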

10 Finite difference (FD) approach
In this approach each i-th component of the stochastic gradient is computed as a difference quotient along e_i, the vector with zero components except the i-th one (equal to 1), with a certain small step δ.
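A sketch in Python (sample_objective and the step delta are hypothetical names; a forward difference is assumed, the slide may use a central one):

    import numpy as np

    def fd_gradient(x, xi, sample_objective, delta=1e-4):
        """Finite-difference stochastic gradient for a single scenario xi:
        the i-th component is a difference quotient along the unit vector e_i."""
        n = len(x)
        f0 = sample_objective(x, xi)
        g = np.zeros(n)
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0                 # zero components except the i-th one
            g[i] = (sample_objective(x + delta * e_i, xi) - f0) / delta
        return g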

11 Simultaneous perturbation stochastic approximation (SPSA)
The whole gradient is estimated from a single random perturbation Δ, a random vector whose components take the values 1 or -1 with probabilities p = 0.5, and some small step δ (Spall (2003)).
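A sketch (again with the hypothetical sample_objective; note that one perturbation yields all n components at the cost of two function evaluations):

    import numpy as np

    def spsa_gradient(x, xi, sample_objective, delta=1e-4, rng=None):
        """SPSA estimator: a single random perturbation Delta with components
        +1 or -1 (probability 0.5 each) replaces n one-at-a-time differences."""
        rng = rng or np.random.default_rng()
        Delta = rng.choice([-1.0, 1.0], size=len(x))
        f_plus = sample_objective(x + delta * Delta, xi)
        f_minus = sample_objective(x - delta * Delta, xi)
        return (f_plus - f_minus) / (2.0 * delta) / Delta  # elementwise 1/Delta_i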

12 Likelihood ratio (LR) approach
See Rubinstein and Shapiro (1993), Sakalauskas (2002).
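In the general score-function form (a reconstruction from the cited references, for the case where the density p(·; x) depends on the decision x):

    \nabla_x F(x) = \nabla_x \int f(x, y)\, p(y; x)\, dy
                  = \mathbf{E} \bigl[ \nabla_x f(x, \xi) + f(x, \xi)\, \nabla_x \ln p(\xi; x) \bigr].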

13 Numerical study of stochastic gradient estimators (1)
The methods for stochastic differentiation have been explored with test functions.

14 Numerical study of stochastic gradient estimators (2)
Stochastic gradient estimators from samples of size (number of scenarios) N were computed at the known optimum point X (i.e., where the gradient is zero) for test functions depending on n parameters. This was repeated 400 times, and the corresponding sample of Hotelling T² statistics was analyzed according to two goodness-of-fit criteria (critical values 0.46 and 2.49 in the tables below).

15 Goodness-of-fit criterion (critical value 0.46) by variable number n and Monte-Carlo sample size N

n \ N    50      100     200     500     1000
 2       0.30    0.24    0.10    0.08    0.04
 3       0.37    0.12    0.09    0.06    0.04
 4       0.19    —       0.13    0.08    0.04
 5       0.75    0.13    0.12    0.08    0.06
 6       1.53    0.34    0.10    —       0.08
 7       1.56    0.39    0.13    0.08    0.09
 8       1.81    0.42    0.27    0.18    0.10
 9       4.18    0.46    0.26    0.20    0.12
10       8.12    0.56    0.53    0.25    0.17

16 Goodness-of-fit criterion (critical value 2.49) by variable number n and Monte-Carlo sample size N

n \ N    50      100     200     500     1000
 2       2.57    1.14    0.66    0.65    0.42
 3       2.78    0.82    0.65    0.60    0.27
 4       3.75    1.17    0.79    0.53    0.31
 5       4.34    1.46    0.85    0.64    0.36
 6       8.31    2.34    0.79    —       0.76
 7       8.14    2.72    1.04    0.52    0.45
 8      10.22    2.55    1.87    0.89    0.52
 9      20.86    2.59    1.57    1.41    0.78
10      40.57    3.69    3.51    1.56    0.98

17 Statistical criteria on Monte-Carlo sample size N for number of variables n = 40 (critical values 0.46 and 2.49)

Sample size, N    Criterion 1    Criterion 2
1000              0.92           5.61
1500              0.76           4.15
2000              0.55           3.63
2100              0.68           2.84
2200              0.23           1.28
2500              0.19           1.14
3000              0.12           0.66

18 Statistical criteria on Monte-Carlo sample size N for number of variables n = 60 (critical values 0.46 and 2.49)

Sample size, N    Criterion 1    Criterion 2
1000              4.42           23.11
2000              1.31            6.46
3000              1.17            6.05
3300              0.46            2.42
3500              0.22            1.25
4000              0.09            0.56

19 Statistical criteria on Monte-Carlo sample size N for number of variables n = 80 (critical values 0.46 and 2.49)

Sample size, N    Criterion 1    Criterion 2
 1000             15.53          83.26
 2000              5.39          27.67
 5000              0.79           3.97
 6000              0.27           1.48
 7000              0.13           0.68
10000              0.07           0.39

20 Numerical study of stochastic gradient estimators (8)
Conclusion: the T²-statistic distribution may be approximated by the Fisher law when the number of scenarios reaches:

Variable number, n    Number of scenarios, N_min (Monte-Carlo sample size)
 20                   1000
 40                   2200
 60                   3300
100                   6000

21 Frequency of optimality hypothesis on the distance to optimum (n=2)

22 Frequency of optimality hypothesis on the distance to optimum (n=10)

23 Frequency of optimality hypothesis on the distance to optimum (n=20)

24 Frequency of optimality hypothesis on the distance to optimum (n=50)

25 Frequency of optimality hypothesis on the distance to optimum (n=100)

26 Numerical study of stochastic gradient estimators (14)
Conclusion: stochastic differentiation by the Dual Solution and Finite Difference approaches enables us to reliably estimate the stochastic gradient once the sample sizes above are reached; the SPSA and Likelihood Ratio approaches require larger sample sizes.

27 Gradient search procedure
Let some initial point be chosen, a random sample of a certain initial size N_0 be generated at this point, and the Monte-Carlo estimators be computed. The iterative stochastic procedure of gradient search then steps from the current point along the estimated gradient and projects the result onto the ε-feasible set, as in the sketch below.
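A sketch of the whole loop in Python (estimate, project and the constants are hypothetical; optimality_rejected and next_sample_size are the sketches given with slides 7 and 28):

    import numpy as np

    def stochastic_search(x0, estimate, project, rho, eps,
                          N0=100, Nmax=10000, max_iter=500):
        """estimate(x, N) returns (obj_sample, grad_sample) from N scenarios;
        project(y) is the projection onto the eps-feasible set."""
        x, N = np.asarray(x0, dtype=float), N0
        for _ in range(max_iter):
            obj_sample, grad_sample = estimate(x, N)
            if not optimality_rejected(grad_sample, obj_sample, eps):
                break                                    # statistical stopping rule
            x = project(x - rho * grad_sample.mean(axis=0))
            N = next_sample_size(grad_sample, N0, Nmax)  # rule on the next slide
        return x, N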

28 The rule to choose the number of scenarios
We propose a rule that regulates the number of scenarios adaptively; one plausible form is sketched below. Thus the iterative stochastic search is performed until the statistical criteria no longer contradict the optimality conditions.
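A hypothetical sketch of the rule, inversely proportional to the squared gradient norm in the covariance metric in the spirit of Sakalauskas (2002, 2004) (the constant C and the clipping bounds are assumptions):

    import numpy as np

    def next_sample_size(grad_sample, N_min=100, N_max=10000, C=50.0):
        """The smaller the estimated gradient, the more scenarios the next
        iteration receives, so Monte-Carlo effort grows near the optimum."""
        N, n = grad_sample.shape
        g = grad_sample.mean(axis=0)
        S = np.cov(grad_sample, rowvar=False)
        denom = max(float(g @ np.linalg.solve(S, g)), 1e-12)  # guard division
        return int(min(max(C * n / denom, N_min), N_max))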

29 Linear convergence
Under certain conditions on the finiteness and smooth differentiability of the objective function, the proposed algorithm converges a.s. to a stationary point with linear rate, where K, L, C, l are certain constants (Sakalauskas (2002), (2004)).

30 Linear convergence
Since the Monte-Carlo sample size increases at a geometric progression rate, it follows that the approach proposed enables us to solve SP problems at a total cost equivalent to computing the expected objective function only a finite number of times.
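For instance, if N_t ≈ N_0 q^t with some q > 1, the total number of scenarios generated over all iterations is bounded by a constant multiple of the final sample size:

    \sum_{t=0}^{T} N_t = N_0 \frac{q^{T+1} - 1}{q - 1} \le N_T \frac{q}{q - 1}.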

31 Numerical study of the stochastic optimization algorithm
Test problems have been solved from the database of two-stage stochastic linear optimisation problems: http://www.math.bme.hu/~deak/twostage/l1/20x20.1/. The dimensionality of the tasks ranges from n = 20 to n = 80 (from 30 to 120 at the second stage). All solutions given in the database were achieved, and in a number of cases we succeeded in improving the known solutions, especially for large numbers of variables.

32 Two-stage stochastic programming problem (n = 20)
The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066 (improved to 182.59248 ± 0.033).
Initial data: N_0 = N_min = 100, N_max = 10000; a maximal number of iterations was set, and the generation of trials was stopped when the estimated confidence interval of the objective function no longer exceeded the admissible value.
The solution was repeated 500 times.

33 Frequency of stopping under number of iterations and admissible confidence interval

34 Change of the objective function under number of iterations and admissible interval

35 Change of confidence interval under number of iterations and admissible interval

36 Change of the Hotelling statistics under admissible interval

37 Change of the Monte-Carlo sample size under number of iterations and admissible interval

38 Ratio under admissible interval (1)

39 Ratio under admissible interval (2)

Accuracy    Objective function    Ratio
0.1         182.6101              20.14
0.2         182.6248              19.73
0.5         182.7186              19.46
1           182.9475              19.43

40 Solving DB test problems (1)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 649.604 ± 0.053. Solution by the developed algorithm: 646.444 ± 0.999.

41 Solving DB test problems (2)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 6656.637 ± 0.814. Solution by the developed algorithm: 6648.548 ± 0.999.

42 Solving DB test problems (3)
Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 586.329 ± 0.327. Solution by the developed algorithm: 475.012 ± 0.999.

43 Comparison with Benders decomposition

44 Conclusions
A stochastic iterative method has been developed to solve SLP problems by a finite sequence of Monte-Carlo sampling estimators.
The approach presented rests on the statistical termination procedure and the adaptive regulation of the size of the Monte-Carlo samples.
The computational results show that the approach developed provides estimators for reliable solution and testing of the optimality hypothesis over a wide range of dimensionality of SLP problems (2 < n < 100).
The approach developed enables us to generate an almost unbounded number of scenarios and to solve SLP problems to admissible accuracy.
The total volume of computations for solving an SLP problem exceeds by only a few times the volume of scenarios needed to evaluate one value of the expected objective function.

45 References
Rubinstein, R., and Shapiro, A. (1993). Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. Wiley & Sons, N.Y.
Shapiro, A., and Homem-de-Mello, T. (1998). A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming, 81, 301-325.
Sakalauskas, L. (2002). Nonlinear stochastic programming by Monte-Carlo estimators. European Journal of Operational Research, 137, 558-573.
Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley & Sons.
Sakalauskas, L. (2004). Application of the Monte-Carlo method to nonlinear stochastic optimization with linear constraints. Informatica, 15(2), 271-282.
Sakalauskas, L. (2006). Towards implementable nonlinear stochastic programming. In K. Marti et al. (Eds.), Coping with Uncertainty. Springer Verlag.

46 Announcements Welcome to the EURO Mini Conference “Continuous Optimization and Knowledge Based Technologies (EUROPT-2008)” May 20-23, 2008, Neringa, Lithuania http://www.mii.lt/europt-2008
