
1 Scalable Stochastic Programming
Cosmin Petra and Mihai Anitescu
Mathematics and Computer Science Division, Argonne National Laboratory
INFORMS Computing Society Conference, Monterey, California, January 2011
petra@mcs.anl.gov

2 Motivation
• Sources of uncertainty in complex energy systems:
  – weather
  – consumer demand
  – market prices
• Applications @Argonne (Anitescu, Constantinescu, Zavala):
  – stochastic unit commitment with wind power generation
  – energy management of co-generation
  – economic optimization of a building energy system

3 Stochastic Unit Commitment with Wind Power
• Wind forecast – WRF (Weather Research and Forecasting) model:
  – real-time grid-nested 24h simulation
  – 30 samples require 1h on 500 CPUs (Jazz@Argonne)
Slide courtesy of V. Zavala & E. Constantinescu (Zavala's SA2 talk)

4 Optimization under Uncertainty
• Two-stage stochastic programming with recourse ("here-and-now"):
    min_x  c^T x + E[Q(x, ξ)]   subj. to  Ax = b, x ≥ 0  (first stage)
  where the recourse function is
    Q(x, ξ) = min_y  q(ξ)^T y   subj. to  T(ξ)x + W(ξ)y = h(ξ), y ≥ 0  (second stage)
  and the expectation is over a continuous distribution of ξ.
• Sampling (inference analysis, M samples) replaces the continuous expectation by a discrete sum – the sample average approximation (SAA):
    min  c^T x + (1/M) Σ_{i=1}^M q_i^T y_i   subj. to  Ax = b,  T_i x + W_i y_i = h_i,  x ≥ 0, y_i ≥ 0.

5 Linear Algebra of Primal-Dual Interior-Point Methods
• Convex quadratic problem:  min ½ x^T Q x + c^T x  subj. to  Ax = b, x ≥ 0.
• Each IPM iteration solves a linear system with the saddle-point matrix
    H = [ Q + D   A^T ;  A   0 ],
  where D is a diagonal barrier term that changes at every iteration.
• Two-stage SP → arrow-shaped linear system (via a permutation); multi-stage SP → nested arrow structure.

6 The Direct Schur Complement Method (DSC)
• Uses the arrow shape of H:
  1. Implicit factorization
  2. Solving Hz = r:
     2.1. Back substitution
     2.2. Diagonal solve
     2.3. Forward substitution
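The steps above fit in a few lines of linear algebra. Below is a minimal serial sketch in Python; the names (dsc_solve, Hs, Bs) are illustrative, and the blocks are assumed symmetric positive definite so plain Cholesky applies, whereas the actual IPM blocks are symmetric indefinite:

```python
# Serial sketch of the Direct Schur Complement (DSC) solve for an
# arrow-shaped symmetric system; assumes SPD blocks for simplicity.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def dsc_solve(H0, Hs, Bs, r0, rs):
    """Solve the arrow-shaped system
        [ H_1          B_1^T ] [z_1]   [r_1]
        [      ...      ...  ] [...] = [...]
        [ B_1  ...  H_0      ] [z_0]   [r_0]."""
    # 1. Implicit factorization: factor each scenario block, then form and
    #    factor the 1st-stage Schur complement C = H_0 - sum_i B_i H_i^{-1} B_i^T.
    facts = [cho_factor(Hi) for Hi in Hs]
    C = H0.copy()
    for fi, Bi in zip(facts, Bs):
        C -= Bi @ cho_solve(fi, Bi.T)
    fC = cho_factor(C)
    # 2.1 Back substitution: eliminate scenario variables from the rhs.
    r0t = r0 - sum(Bi @ cho_solve(fi, ri) for fi, Bi, ri in zip(facts, Bs, rs))
    # 2.2 Diagonal solve: 1st-stage variables via the Schur complement.
    z0 = cho_solve(fC, r0t)
    # 2.3 Forward substitution: recover the scenario variables.
    zs = [cho_solve(fi, ri - Bi.T @ z0) for fi, Bi, ri in zip(facts, Bs, rs)]
    return z0, zs
```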

7 Parallelizing DSC – 1. Factorization phase
• Scenario factorizations and Schur complement contributions are computed in parallel on processes 1, 2, ..., p.
• Factorization of the 1st-stage Schur complement matrix on process 1 = BOTTLENECK.

8 Parallelizing DSC – 2. Backsolve
• Scenario back and forward substitutions run in parallel on processes 1, 2, ..., p.
• 1st-stage backsolve on process 1 = BOTTLENECK.
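A hypothetical mpi4py sketch of the parallel pattern on slides 7–8, with toy random data; the sizes and names are illustrative and the blocks are again assumed SPD, so this shows only the communication structure, not PIPS's implementation:

```python
# Run with e.g.: mpiexec -n 4 python parallel_dsc.py
from mpi4py import MPI
import numpy as np
from scipy.linalg import cho_factor, cho_solve

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def spd(k, rng):                      # random SPD block for the toy problem
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

n0, n, per_proc = 4, 6, 2             # 1st-stage size, scenario size, scenarios/rank
rng = np.random.default_rng(rank)     # per-rank scenario data
Hs = [spd(n, rng) for _ in range(per_proc)]
Bs = [rng.standard_normal((n0, n)) for _ in range(per_proc)]
rs = [rng.standard_normal(n) for _ in range(per_proc)]
rng0 = np.random.default_rng(12345)   # 1st-stage data, identical on all ranks
H0, r0 = spd(n0, rng0), rng0.standard_normal(n0)

# Factorization phase: scenario factorizations proceed in parallel...
facts = [cho_factor(Hi) for Hi in Hs]
loc = sum(Bi @ cho_solve(fi, Bi.T) for fi, Bi in zip(facts, Bs))
glob = np.zeros_like(loc)
comm.Reduce(loc, glob, op=MPI.SUM, root=0)
# ...but the 1st-stage Schur complement is factored on one process: BOTTLENECK.
fC = cho_factor(H0 - glob) if rank == 0 else None

# Backsolve phase: parallel back substitutions, then a serial 1st-stage solve.
loc_r = sum(Bi @ cho_solve(fi, ri) for fi, Bi, ri in zip(facts, Bs, rs))
glob_r = np.zeros_like(loc_r)
comm.Reduce(loc_r, glob_r, op=MPI.SUM, root=0)
z0 = cho_solve(fC, r0 - glob_r) if rank == 0 else np.empty(n0)  # BOTTLENECK
comm.Bcast(z0, root=0)                # broadcast the 1st-stage solution
# Forward substitutions recover the scenario solutions in parallel.
zs = [cho_solve(fi, ri - Bi.T @ z0) for fi, Bi, ri in zip(facts, Bs, rs)]
```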

9 Scalability of DSC
• Unit commitment: 76.7% efficiency – but not always the case.
• Large number of 1st-stage variables: 38.6% efficiency on Fusion@Argonne.

10 BOTTLENECK SOLUTION 1: STOCHASTIC PRECONDITIONER

11 Preconditioned Schur Complement (PSC)
• The Schur complement system is solved iteratively, with the preconditioner built and factored on a separate process.
• REMOVES the factorization bottleneck.
• Slightly larger backsolve bottleneck.

12 The Stochastic Preconditioner
• The exact structure of C: a sum of independent scenario contributions.
• Draw an IID subset of n scenarios.
• The stochastic preconditioner (Petra & Anitescu, 2010): the appropriately scaled sum over the sampled scenarios.
• For C, use the constraint preconditioner (Keller et al., 2000).
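A hedged LaTeX sketch of the construction just described; the additive splitting into scenario contributions C_i is an assumption standing in for the exact structure shown on the slide:

\[
C \;=\; C_0 + \sum_{i=1}^{N} C_i, \qquad
\widetilde{C} \;=\; C_0 + \frac{N}{n} \sum_{i \in S} C_i, \qquad
S \subset \{1,\dots,N\},\ |S| = n \text{ drawn i.i.d.}
\]

Because the scenario terms are i.i.d. samples, the scaled subset sum concentrates around the full sum, so \( \widetilde{C} \approx C \) already for n much smaller than N.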

13 The “Ugly” Unit Commitment Problem (120 scenarios)
• DSC on P processes vs. PSC on P+1 processes.
• Optimal use of PSC – linear scaling.
• Factorization of the preconditioner cannot be hidden anymore.

14 Quality of the Stochastic Preconditioner
• "Exponentially" better preconditioning (Petra & Anitescu, 2010).
• Proof: Hoeffding inequality.
• Assumptions on the problem's random data (not restrictive):
  1. Boundedness
  2. Uniform full rank of the scenario matrices
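For reference, the concentration inequality behind the proof, in its standard scalar form (the slide's exact matrix version did not survive the transcript): for i.i.d. bounded variables,

\[
\Pr\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - \mathbb{E}[X_1] \right| \ge t \right)
\;\le\; 2 \exp\!\left( -\frac{2 n t^2}{(b-a)^2} \right),
\qquad a \le X_i \le b.
\]

Boundedness of the random data (assumption 1) supplies the interval [a, b]; the deviation of the subsampled preconditioner then decays exponentially in the subset size n – the sense in which the preconditioning is "exponentially" good.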

15 Quality of the Constraint Preconditioner
• The preconditioned matrix has the eigenvalue 1 with a known order of multiplicity.
• The remaining eigenvalues satisfy explicit cluster bounds.
• Proof: based on Bergamaschi et al., 2004.

16 Performance of the preconditioner
• Eigenvalue clustering & Krylov iterations.
• Affected by the well-known ill-conditioning of IPMs.

17 SOLUTION 2: PARALLELIZATION OF STAGE 1 LINEAR ALGEBRA

18 Parallelizing the 1st-stage linear algebra
• We distribute the 1st-stage Schur complement system (dense symmetric positive definite block, sparse full-rank block).
• C is treated as dense.
• Alternative to PSC for problems with a large number of 1st-stage variables.
• Removes the memory bottleneck of PSC and DSC.
• We investigated ScaLAPACK and Elemental (successor of PLAPACK):
  – neither has a solver for symmetric indefinite matrices (Bunch-Kaufman);
  – LU or Cholesky only;
  – so we had to modify one of them.

19 ScaLAPACK (ORNL)
• Classical block distribution of the matrix.
• Blocked "down-looking" Cholesky – algorithmic blocks.
• Size of algorithmic block = size of distribution block!
• For cache performance – large algorithmic blocks.
• For good load balancing – small distribution blocks.
• Must trade off cache performance for load balancing.
• Communication: basic MPI calls.
• Inflexible in working with sub-blocks.

20 Elemental (UT Austin)
• Unconventional "elemental" distribution: blocks of size 1.
• Size of algorithmic block ≠ size of distribution block.
• Both cache performance (large algorithmic blocks) and load balancing (distribution blocks of size 1).
• Communication:
  – more sophisticated MPI calls;
  – overhead O(log(sqrt(p))), where p is the number of processors.
• Sub-block friendly.
• Better performance than ScaLAPACK in a hybrid MPI+SMP approach.

21 Cholesky-based LDL^T-like factorization
• Can be viewed as an "implicit" normal-equations approach.
• In-place implementation inside Elemental: no extra memory needed.
• Idea: modify the Cholesky factorization by changing the sign after processing p columns.
• Much easier to do in Elemental, since it distributes elements, not blocks.
• Twice as fast as LU.
• Works for more general saddle-point linear systems, i.e., with a positive semidefinite (2,2) block.
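A minimal dense sketch of the sign-modified factorization: K = L D L^T with D = diag(+I_p, -I_{n-p}), i.e. ordinary Cholesky whose pivot sign flips after the first p columns. It assumes K is symmetric quasi-definite (SPD (1,1) block, SPD negated (2,2) block) so every pivot keeps its expected sign; an illustration only, not Elemental's in-place distributed implementation:

```python
import numpy as np

def signed_cholesky(K, p):
    n = K.shape[0]
    L = np.zeros((n, n))
    s = np.where(np.arange(n) < p, 1.0, -1.0)   # +1 first p columns, -1 after
    for j in range(n):
        t = K[j, j] - np.sum(s[:j] * L[j, :j] ** 2)
        L[j, j] = np.sqrt(s[j] * t)             # sign change at column p
        L[j + 1:, j] = (K[j + 1:, j]
                        - L[j + 1:, :j] @ (s[:j] * L[j, :j])) / (s[j] * L[j, j])
    return L, s

# Tiny saddle-point test: K = [[Q, A^T], [A, -C]] with Q, C SPD.
rng = np.random.default_rng(0)
Q, C = 4.0 * np.eye(3), np.eye(2)
A = rng.standard_normal((2, 3))
K = np.block([[Q, A.T], [A, -C]])
L, s = signed_cholesky(K, p=3)
assert np.allclose(L @ np.diag(s) @ L.T, K)
```

Because the sign pattern is fixed in advance, no pivoting is needed, which is why the factorization can run in place at Cholesky speed – roughly twice as fast as LU.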

22 Distributing the 1st-stage Schur complement matrix
• All processors contribute to all of the elements of the (1,1) dense block.
• A large amount of inter-process communication occurs – possibly more costly than the factorization itself.
• Solution: use a buffer to reduce the number of messages when doing a Reduce_scatter.
• The symmetric (LDL^T) approach also reduces the communication by half – only the lower triangle needs to be sent.

23 Reduce operations
• Streamlined copying procedure – Lubin and Petra (2010):
  – loop over contiguous memory and copy elements into the send buffer;
  – avoids the division and modulus ops needed to compute the positions.
• "Symmetric" reduce for the Schur complement matrix:
  – only the lower triangle is reduced;
  – fixed buffer size, a variable number of columns reduced;
  – effectively halves the communication (both data and # of MPI calls).
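A hypothetical mpi4py sketch of the fixed-buffer "symmetric" reduce: only the lower triangle is accumulated, packed column by column into a fixed-size send buffer, so a variable number of columns travels per message. Sizes are toy values, and Allreduce stands in for the Reduce_scatter that leaves the result distributed in PIPS; the sketch assumes buf_len >= n:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
n, buf_len = 8, 20                          # 1st-stage size, buffer capacity
rng = np.random.default_rng(comm.Get_rank())
M = rng.standard_normal((n, n))
local_C = np.asfortranarray(M + M.T)        # column-major: columns contiguous

total = np.zeros((n, n))
send, recv = np.empty(buf_len), np.empty(buf_len)
j = 0
while j < n:
    cols, used = [], 0
    # Pack as many lower-triangle columns as fit in the fixed buffer;
    # each copy is a contiguous memory run (no per-element index math).
    while j < n and used + (n - j) <= buf_len:
        send[used:used + n - j] = local_C[j:, j]
        cols.append(j)
        used += n - j
        j += 1
    comm.Allreduce(send[:used], recv[:used], op=MPI.SUM)
    off = 0
    for c in cols:                          # unpack the reduced columns
        total[c:, c] = recv[off:off + n - c]
        off += n - c
# Mirror the lower triangle to recover the full symmetric matrix.
total = np.tril(total) + np.tril(total, -1).T
```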

24 Large-scale performance
• First-stage linear algebra: ScaLAPACK (LU), Elemental (LU), and the LDL^T-like factorization.
• Strong scaling of PIPS:
  – 90.1% efficiency from 64 to 1024 cores;
  – 75.4% efficiency from 64 to 2048 cores;
  – > 4,000 scenarios.
• SAA problem: 82,000 1st-stage variables; 189 million variables in total; 1,000 thermal units; 1,200 wind farms.

25 Concluding remarks
• PIPS – parallel interior-point solver for stochastic SAA problems.
  – Largest SAA problem: 189 million variables = 82k 1st-stage vars + 4k scenarios × 47k 2nd-stage vars, solved on 2048 cores.
• Specialized linear algebra layer:
  – small 1st stage → DSC;
  – medium 1st stage → PSC;
  – large 1st stage → distributed Schur complement.
• Current work: scenario parallelization in a hybrid MPI+SMP programming model.

26 Thank you for your attention! Questions?

