Scalable Stochastic Programming. Cosmin Petra and Mihai Anitescu, Mathematics and Computer Science Division, Argonne National Laboratory. INFORMS Computing Society Conference, Monterey, California, January 2011.

Motivation
– Sources of uncertainty in complex energy systems:
  – Weather
  – Consumer demand
  – Market prices
– Applications (Anitescu, Constantinescu, Zavala):
  – Stochastic unit commitment with wind power generation
  – Energy management of co-generation
  – Economic optimization of a building energy system

Stochastic Unit Commitment with Wind Power
– Wind forecast – WRF (Weather Research and Forecasting) model
  – Real-time grid-nested 24 h simulation
  – 30 samples require 1 h on 500 CPUs
(Slide courtesy of V. Zavala & E. Constantinescu; see Zavala's SA2 talk.)

Optimization under Uncertainty
– Two-stage stochastic programming with recourse ("here-and-now" first-stage decision; continuous and discrete variables)
– Sampling, inference, analysis: M samples, sample average approximation (SAA)
– The two-stage formulation and its SAA counterpart appear on the slide (see the sketch below).
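The slide's formulas were images and did not survive the transcript. As a hedged reconstruction with generic symbols (x is the first-stage decision, y the recourse, xi the random data; the notation is assumed, not taken from the slide), a standard two-stage program with recourse and its SAA read:

```latex
% Generic two-stage stochastic program with recourse (symbols assumed):
\min_{x} \; c^T x + \mathbb{E}_{\xi}\!\left[ Q(x,\xi) \right]
\quad \text{subj. to } Ax = b,\; x \ge 0,
\qquad
Q(x,\xi) = \min_{y} \; q(\xi)^T y
\quad \text{subj. to } W(\xi)\, y = h(\xi) - T(\xi)\, x,\; y \ge 0.

% Sample average approximation (SAA) with M samples \xi_1,\dots,\xi_M:
\min_{x,\,y_1,\dots,y_M} \; c^T x + \frac{1}{M} \sum_{i=1}^{M} q_i^T y_i
\quad \text{subj. to } Ax = b,\quad T_i x + W_i y_i = h_i,\quad x \ge 0,\; y_i \ge 0.
```

The sketch shows only the continuous case; the slide also allows discrete decisions.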

Linear Algebra of Primal-Dual Interior-Point Methods
– Convex quadratic problem (minimize subject to constraints)
– IPM: a linear system is solved at each interior-point iteration
– Two-stage SP: arrow-shaped linear system (via a permutation)
– Multi-stage SP: nested arrow-shaped systems
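For concreteness, a hedged sketch of the arrow (bordered block-diagonal) shape referred to above, with assumed notation: K_i are the scenario KKT blocks, B_i the coupling blocks, and K_0 the first-stage block.

```latex
\begin{bmatrix}
K_1    &        &       & B_1    \\
       & \ddots &       & \vdots \\
       &        & K_N   & B_N    \\
B_1^T  & \cdots & B_N^T & K_0
\end{bmatrix}
\begin{bmatrix} z_1 \\ \vdots \\ z_N \\ z_0 \end{bmatrix}
=
\begin{bmatrix} r_1 \\ \vdots \\ r_N \\ r_0 \end{bmatrix}
```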

The Direct Schur Complement Method (DSC)
– Uses the arrow shape of H (see the sketch below)
  1. Implicit factorization
  2. Solving Hz = r
     2.1. Back substitution
     2.2. Diagonal solve
     2.3. Forward substitution
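A minimal dense sketch of the DSC steps above, using the arrow-system notation assumed earlier (K_i, B_i, K_0 are illustrative names; the actual solver works with sparse factorizations and never forms explicit inverses this way):

```python
import numpy as np

def dsc_solve(K0, Ks, Bs, r0, rs):
    """Direct Schur complement solve for an arrow-shaped system (dense sketch)."""
    # 1. Implicit factorization: form the first-stage Schur complement
    S = K0.copy()
    for Ki, Bi in zip(Ks, Bs):
        S -= Bi.T @ np.linalg.solve(Ki, Bi)
    # 2.1 Back substitution: accumulate the reduced first-stage right-hand side
    r = r0.copy()
    for Ki, Bi, ri in zip(Ks, Bs, rs):
        r -= Bi.T @ np.linalg.solve(Ki, ri)
    # 2.2 Diagonal solve with the dense Schur complement
    z0 = np.linalg.solve(S, r)
    # 2.3 Forward substitution: recover the scenario unknowns
    zs = [np.linalg.solve(Ki, ri - Bi @ z0) for Ki, Bi, ri in zip(Ks, Bs, rs)]
    return z0, zs
```

The scenario loops are embarrassingly parallel, which is exactly what the next slides exploit.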

Parallelizing DSC – 1. Factorization phase
[Diagram: scenario factorizations are spread over Process 1, Process 2, …, Process p; factorization of the 1st-stage Schur complement matrix = BOTTLENECK]

Parallelizing DSC – 2. Backsolve
[Diagram: scenario backsolves are spread over Process 1, Process 2, …, Process p; the 1st-stage backsolve = BOTTLENECK]

Scalability of DSC
– Unit commitment: 76.7% parallel efficiency, but this is not always the case.
– Large number of 1st-stage variables: 38.6% efficiency at Argonne.

BOTTLENECK SOLUTION 1: STOCHASTIC PRECONDITIONER

Preconditioned Schur Complement (PSC)
[Diagram: the preconditioner is handled on a separate process]
– REMOVES the factorization bottleneck
– Slightly larger backsolve bottleneck

The Stochastic Preconditioner
– The exact structure of C [formula lost in the transcript]
– IID subset of n scenarios: the stochastic preconditioner (Petra & Anitescu, 2010); see the sketch below
– For C, use the constraint preconditioner (Keller et al., 2000)
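The formulas for C and the preconditioner were images and are not recoverable here. As a loose, hedged sketch only (notation carried over from the arrow-system sketch above; the exact definition is in Petra & Anitescu, 2010), the idea is to replace the full scenario sum in the first-stage Schur complement by a scaled sum over an IID subset S of n scenarios:

```latex
% Hedged sketch, not the authors' exact formulas:
C = K_0 - \sum_{i=1}^{N} B_i^T K_i^{-1} B_i,
\qquad
M \;=\; K_0 - \frac{N}{n} \sum_{i \in S} B_i^T K_i^{-1} B_i, \quad |S| = n .
```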

The "Ugly" Unit Commitment Problem (120 scenarios)
– DSC on P processes vs. PSC on P+1 processes
– Optimal use of PSC – linear scaling
– The factorization of the preconditioner cannot be hidden anymore.

Quality of the Stochastic Preconditioner
– "Exponentially" better preconditioning (Petra & Anitescu, 2010)
– Proof: Hoeffding's inequality
– Assumptions on the problem's random data (not restrictive):
  1. Boundedness
  2. Uniform full rank of the constraint matrices

Quality of the Constraint Preconditioner
– The preconditioned matrix has the eigenvalue 1 with a known order of multiplicity.
– The remaining eigenvalues satisfy explicit bounds [formulas lost in the transcript].
– Proof: based on Bergamaschi et al.

Performance of the Preconditioner
– Eigenvalue clustering & Krylov iterations
– Affected by the well-known ill-conditioning of IPMs.

SOLUTION 2: PARALLELIZATION OF STAGE 1 LINEAR ALGEBRA

Parallelizing the 1st-stage linear algebra
– We distribute the 1st-stage Schur complement system.
– C is treated as dense (its blocks: dense symmetric positive definite, sparse full rank).
– Alternative to PSC for problems with a large number of 1st-stage variables.
– Removes the memory bottleneck of PSC and DSC.
– We investigated ScaLapack and Elemental (the successor of PLAPACK):
  – neither has a solver for symmetric indefinite matrices (Bunch-Kaufman);
  – LU or Cholesky only;
  – so we had to consider modifying one of them.

ScaLapack (ORNL)
– Classical block distribution of the matrix
– Blocked "down-looking" Cholesky – algorithmic blocks
– Size of algorithmic block = size of distribution block!
– For cache performance – large algorithmic blocks
– For good load balancing – small distribution blocks
– Must trade off cache performance for load balancing
– Communication: basic MPI calls
– Inflexible in working with sub-blocks

Elemental (UT Austin)
– Unconventional "elemental" distribution: blocks of size 1
– Size of algorithmic block is independent of the size of the distribution block
– Both cache performance (large algorithmic blocks) and load balancing (distribution blocks of size 1)
– Communication:
  – more sophisticated MPI calls
  – overhead O(log(sqrt(p))), where p is the number of processors
– Sub-block friendly
– Better performance than ScaLapack in a hybrid MPI+SMP approach
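A toy illustration of the difference between the two data distributions (hypothetical helper functions; the block size, grid shape, and zero-based indexing are assumptions, not either library's API):

```python
def scalapack_owner(i, j, nb, pr, pc):
    """2D block-cyclic distribution (ScaLapack-style): entry (i, j) lives with
    block (i // nb, j // nb), assigned round-robin to a pr x pc process grid."""
    return ((i // nb) % pr, (j // nb) % pc)

def elemental_owner(i, j, pr, pc):
    """Elemental-style distribution: blocks of size 1, so individual entries
    cycle over the process grid."""
    return (i % pr, j % pc)

# With block size 4 on a 2 x 3 grid, the neighbors (0, 0) and (0, 1) share a
# process under the block distribution but not under the elemental one, which
# is what gives Elemental its fine-grained load balance.
print(scalapack_owner(0, 0, nb=4, pr=2, pc=3), scalapack_owner(0, 1, nb=4, pr=2, pc=3))  # (0, 0) (0, 0)
print(elemental_owner(0, 0, pr=2, pc=3), elemental_owner(0, 1, pr=2, pc=3))              # (0, 0) (0, 1)
```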

Cholesky-based LDL^T-like factorization
– Can be viewed as an "implicit" normal equations approach.
– In-place implementation inside Elemental: no extra memory needed.
– Idea: modify the Cholesky factorization by changing the sign after processing p columns (see the sketch below).
– Much easier to do in Elemental, since it distributes elements, not blocks.
– Twice as fast as LU.
– Works for more general saddle-point linear systems, i.e., positive semidefinite (2,2) block.
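A minimal dense sketch of the sign-flipping idea (illustrative only; the production code is an in-place, distributed, blocked implementation inside Elemental). It assumes the leading p x p block of K is positive definite and the trailing Schur complement is negative definite, so that K = L * diag(+1,...,+1,-1,...,-1) * L^T:

```python
import numpy as np

def signed_cholesky(K, p):
    """Cholesky-like factorization that flips the pivot sign after p columns,
    returning L (lower triangular) and s with K = L @ diag(s) @ L.T."""
    n = K.shape[0]
    s = np.concatenate([np.ones(p), -np.ones(n - p)])
    L = np.zeros((n, n))
    M = np.array(K, dtype=float)                  # working copy, updated in place
    for j in range(n):
        L[j, j] = np.sqrt(M[j, j] * s[j])         # signed pivot
        L[j+1:, j] = M[j+1:, j] / (L[j, j] * s[j])
        # rank-1 update of the trailing submatrix
        M[j+1:, j+1:] -= s[j] * np.outer(L[j+1:, j], L[j+1:, j])
    return L, s

# Tiny usage check on a saddle-point matrix (illustrative sizes only):
H = np.array([[4.0, 1.0], [1.0, 3.0]])            # dense s.p.d. (1,1) block
A = np.array([[1.0, 2.0]])                        # full-rank constraint block
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
L, s = signed_cholesky(K, p=2)
assert np.allclose(L @ np.diag(s) @ L.T, K)
```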

Distributing the 1st-stage Schur complement matrix
– All processors contribute to all of the elements of the (1,1) dense block.
– A large amount of inter-process communication occurs, possibly more costly than the factorization itself.
– Solution: use a buffer to reduce the number of messages when doing a Reduce_scatter.
– The symmetric approach also reduces the communication by half: only the lower triangle needs to be sent.

Reduce operations
– Streamlined copying procedure – Lubin and Petra (2010)
  – Loop over contiguous memory and copy elements into the send buffer
  – Avoids the division and modulus operations needed to compute the positions
– "Symmetric" reduce (see the sketch below)
  – Only the lower triangle is reduced
  – Fixed buffer size, a variable number of columns reduced
  – Effectively halves the communication (both data and number of MPI calls)
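A hedged mpi4py sketch of the lower-triangle reduction idea. The real code streams fixed-size column buffers and uses Reduce_scatter so the result ends up distributed; here a single Reduce to one root is shown for brevity, and all names are illustrative:

```python
from mpi4py import MPI
import numpy as np

def reduce_schur_lower_triangle(S_local, comm, root=0):
    """Sum per-process contributions to the dense first-stage Schur complement,
    sending only the packed lower triangle (roughly half the data)."""
    n = S_local.shape[0]
    il = np.tril_indices(n)                       # positions of the lower triangle
    sendbuf = np.ascontiguousarray(S_local[il])   # packed send buffer
    recvbuf = np.empty_like(sendbuf) if comm.rank == root else None
    comm.Reduce(sendbuf, recvbuf, op=MPI.SUM, root=root)
    if comm.rank == root:
        S = np.zeros((n, n))
        S[il] = recvbuf
        return S + np.tril(S, -1).T               # mirror to restore symmetry
    return None
```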

Large-scale performance
– First-stage linear algebra: ScaLapack (LU), Elemental (LU), and the Cholesky-based LDL^T-like factorization
– Strong scaling of PIPS:
  – 90.1% efficiency from 64 to 1,024 cores
  – 75.4% efficiency from 64 to 2,048 cores
– SAA problem: > 4,000 scenarios; 82,000 1st-stage variables; 189 million variables in total; 1,000 thermal units; 1,200 wind farms

Concluding remarks
– PIPS – parallel interior-point solver for stochastic SAA problems
  – Largest SAA problem: 189 million variables = 82k 1st-stage variables + 4k scenarios × 47k 2nd-stage variables, solved on 2,048 cores
– Specialized linear algebra layer
  – Small 1st stage: DSC
  – Medium 1st stage: PSC
  – Large 1st stage: distributed Schur complement
– Current work: scenario parallelization in a hybrid MPI+SMP programming model

Thank you for your attention! Questions?