Data-Driven Time-Parallelization in the AFM Simulation of Proteins
L. Ji, H. Nymeyer, A. Srinivasan, and Y. Yu, Florida State University


1 Data-Driven Time-Parallelization in the AFM Simulation of Proteins
L. Ji, H. Nymeyer, A. Srinivasan, and Y. Yu, Florida State University
http://www.cs.fsu.edu/~asriniva
Aim: simulate for long time spans
Solution features: use data from prior simulations to parallelize the time domain
Acknowledgments: NSF, ORNL, NERSC, NCSA

2 Outline
Background
– Limitations of Conventional Parallelization
Time Parallelization
– Other Time-Parallelization Approaches
– Data-Driven Time-Parallelization
Nano-Mechanics Application
Time Parallelization of AFM Simulation of Proteins
– Prediction
– Experimental Results: scaled to an order of magnitude larger number of processors when combined with conventional parallelization
Conclusions and Future Work

3 Background
Molecular dynamics
– In each time step, the forces of atoms on each other are modeled using some potential
– After the forces are computed, the positions are updated
– Repeat for the desired number of time steps
Time step size is ~10^-15 s, due to physical and numerical considerations
– The desired time range is much larger: a million time steps are required just to reach 10^-9 s, which is ~500 hours of computing for ~40K atoms using GROMACS
MD therefore uses unrealistically large pulling speeds
– 1 to 10 m/s instead of 10^-7 to 10^-5 m/s
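The slide above describes the basic MD loop: compute forces from a potential, update positions, and repeat. Below is a minimal, self-contained sketch of that loop (velocity-Verlet on toy particles with a harmonic pair potential); the potential, masses, and step count are illustrative placeholders, not the GROMACS/titin setup used in the talk.

```python
import numpy as np

def forces(x, k=1.0):
    """Toy harmonic pair potential: F_i = -k * sum_j (x_i - x_j)."""
    diff = x[:, None, :] - x[None, :, :]      # pairwise displacement vectors
    return -k * diff.sum(axis=1)

def md_run(x, v, m=1.0, dt=1e-15, n_steps=1000):
    """Velocity-Verlet integration: the 'compute forces, update positions, repeat' loop.
    dt echoes the ~1 fs step on the slide; units here are arbitrary toy units."""
    f = forces(x)
    for _ in range(n_steps):
        v += 0.5 * dt * f / m                 # half-step velocity update
        x += dt * v                           # position update
        f = forces(x)                         # recompute forces from the potential
        v += 0.5 * dt * f / m                 # second half-step velocity update
    return x, v

x0 = np.random.randn(10, 3)                   # 10 toy "atoms" in 3D
v0 = np.zeros_like(x0)
x_final, v_final = md_run(x0, v0)
```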

4 Limitations of Conventional Parallelization
Results on IBM Blue Gene
– Does not scale efficiently beyond ~10 ms per iteration
If we want to simulate to a ms
– Time step of 1 fs → 10^12 iterations → 10^10 s ≈ 300 years
– Even if we scaled to 10 μs per iteration: ~4 months of computing time
Benchmarks: NAMD, 327K-atom ATPase with PME (IPDPS 2006); NAMD, 92K-atom ApoA1 with PME (IPDPS 2006); IBM Blue Matter, 43K-atom Rhodopsin (Tech Report 2005); Desmond, 92K-atom ApoA1 (SC 2006)
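A quick back-of-the-envelope check of the slide's numbers, using the stated 1 fs time step and the 10 ms and 10 μs per-iteration figures:

```python
# Wall-clock estimates for simulating 1 ms of physical time at a 1 fs time step.
iterations = 1e-3 / 1e-15                # 10^12 time steps

wall_at_10ms = iterations * 10e-3        # seconds, at 10 ms per iteration
wall_at_10us = iterations * 10e-6        # seconds, at 10 microseconds per iteration

print(wall_at_10ms / (3600 * 24 * 365))  # ~317 years
print(wall_at_10us / (3600 * 24 * 30))   # ~3.9 months
```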

5 Time Parallelization
Other time-parallelization approaches
– Dynamic iterations / waveform relaxation: slow convergence
– Parareal method: related to shooting methods; not shown effective in realistic settings
Data-driven time-parallelization
– Nano-mechanics application: tensile test on a carbon nanotube
– Achieved a granularity of 13.6 μs per iteration in one simulation

6 Other Time Parallelization Approaches
Special case: Picard iterations
– Example: dy/dt = y, y(0) = 1 becomes dy_{n+1}/dt = y_n(t), with y_0(t) = 1
In general
– dy/dt = f(y, t), y(0) = y_0 becomes dy_{n+1}/dt = g(y_n, y_{n+1}, t), y_0(t) = y_0, where g(u, u, t) = f(u, t)
– g(y_n, y_{n+1}, t) = f(y_n, t): Picard
– g(y_n, y_{n+1}, t) = f(y_{n+1}, t): converges in one iteration
– Jacobi, Gauss-Seidel, and SOR versions of g are defined
Many improvements
– Example: DIRM combines the above with reduced-order modeling
[Figure: waveform relaxation variants, showing the exact solution and iterates N = 1 through 4]
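To make the special case on this slide concrete, here is a small sketch of Picard iteration for dy/dt = y, y(0) = 1, where each iterate is y_{n+1}(t) = 1 + ∫_0^t y_n(s) ds; the grid resolution and iteration count are arbitrary choices for illustration.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
y = np.ones_like(t)                        # y_0(t) = 1

for n in range(1, 6):                      # iterates N = 1..5
    # cumulative trapezoidal integral of the previous iterate
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    y = 1.0 + integral                     # Picard update: y_{n+1} = 1 + integral of y_n
    print(n, np.max(np.abs(y - np.exp(t))))   # error vs. the exact solution e^t
```

Each iteration adds roughly one more Taylor term of e^t, which illustrates the slow convergence noted on the previous slide.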

7 Data-Driven Time Parallelization
– Each processor simulates a different time interval
– The initial state is obtained by prediction, using prior data (except for processor 0)
– Verify whether the prediction of the end state is close to that computed by MD
– Prediction is based on dynamically determining a relationship between the current simulation and those in a database of prior results
– If the time interval is sufficiently large, the communication overhead is small
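The control flow on this slide can be sketched serially as below. This is only an illustration of the predict/verify/restart logic, not the actual parallel implementation; predict, run_md_interval, and equivalent are hypothetical helpers standing in for the real prediction scheme, the MD engine, and the equivalence test described on a later slide.

```python
def time_parallel(initial_state, n_intervals, predict, run_md_interval, equivalent):
    """Serial emulation of data-driven time parallelization.

    predict(i, history): guess the start state of interval i from prior data
    run_md_interval(state): advance one interval with accurate MD
    equivalent(a, b): are two states within the normal range of fluctuations?
    """
    verified = initial_state            # last state known to be exact MD
    history = []                        # accepted interval end states
    for i in range(n_intervals):
        # Processor i starts from a predicted state (processor 0 starts exactly).
        start = verified if i == 0 else predict(i, history)
        end = run_md_interval(start)    # accurate MD over this interval
        if i > 0 and not equivalent(start, verified):
            # Prediction failed verification: redo the interval from the last
            # verified state (in the parallel code this speculative work is wasted).
            end = run_md_interval(verified)
        history.append(end)
        verified = end
    return verified
```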

8 Nano-Mechanics Application: Carbon Nanotube Tensile Test
– Pull the CNT
– Determine the stress-strain response and yield strain (when the CNT starts breaking) using MD
– Use dimensionality reduction for prediction
[Figures: basis vectors u_1 (blue) and u_2 (red) for z; u_1 (green) for x is not "significant". Speedup plot: red line = ideal speedup, blue = v = 0.1 m/s, green = v = 1 m/s using v = 10 m/s data. Stress-strain plot: blue = exact, 450 K; red = 200 processors]
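As a rough illustration of "dimensionality reduction for prediction", the sketch below projects prior trajectory snapshots onto a few dominant modes obtained from an SVD and reconstructs a state from those modes. The data shapes, the number of retained modes, and the reconstruction step are illustrative assumptions, not the exact procedure used for the CNT simulations.

```python
import numpy as np

prior = np.random.randn(200, 3000)      # placeholder: 200 snapshots x 3000 coordinates
mean = prior.mean(axis=0)
_, s, vt = np.linalg.svd(prior - mean, full_matrices=False)

k = 2                                   # keep the dominant modes (u_1, u_2 on the slide)
basis = vt[:k]                          # (k, 3000) reduced basis

def reduce_and_reconstruct(state):
    """Project a state onto the reduced basis and rebuild it from the dominant modes."""
    coeffs = basis @ (state - mean)
    return mean + basis.T @ coeffs

approx = reduce_and_reconstruct(prior[-1])
print(np.linalg.norm(approx - prior[-1]))
```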

9 Problems with Multiple Time Scales
– Fine-scale computations (such as MD) are more accurate, but more time-consuming
– Much of the detail at the finer scale is unimportant, but some of it matters
[Figure: a simple schematic of multiple time scales]

10 Time-Parallelization of AFM Simulation of Proteins
Example system: muscle protein titin
– Around 40K atoms, mostly water
– Na+ and Cl- added for charge neutrality
– NVT conditions, Langevin thermostat, 400 K
– Force constant on the springs: 400 kJ/(mol nm^2)
– GROMACS used for the MD simulations

11 Verification of Prediction
Definition of equivalence of two states
– Atoms vibrate around their mean positions
– Consider states equivalent if the differences are within the normal range of fluctuations
[Figures: mean position; displacement from the mean; differences between trajectories that differ only in the random number sequence]
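A minimal sketch of such an equivalence test is shown below: estimate each atom's normal fluctuation scale from a reference trajectory, then accept a predicted state if its per-atom differences from the MD state stay within that range. The fluctuation estimator and the tolerance factor are illustrative assumptions.

```python
import numpy as np

def fluctuation_scale(reference_trajectory):
    """Per-atom scale of displacements from the mean position.

    reference_trajectory: array of shape (n_frames, n_atoms, 3)
    """
    mean_pos = reference_trajectory.mean(axis=0)
    disp = np.sqrt(((reference_trajectory - mean_pos) ** 2).sum(axis=-1))
    return disp.std(axis=0)                       # one scale per atom

def equivalent(state_a, state_b, sigma, tol=3.0):
    """Treat two (n_atoms, 3) states as equivalent if every atom differs by < tol * sigma."""
    diff = np.sqrt(((state_a - state_b) ** 2).sum(axis=-1))
    return bool(np.all(diff < tol * sigma))
```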

12 Prediction
Use prior results with higher velocity
– Trajectories with different random number sequences
– Predict based on the prior result closest to the current state
  - Use only the last verified state, or
  - Use several recent verified states
Fit the parameters to the log-Weibull distribution: f(x) = (1/b) exp((a - x)/b - exp((a - x)/b))
– Location: a = 0.159
– Scale: b = 0.0242
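Written out as code, the fitted log-Weibull (Gumbel-type) density from this slide, with the quoted location a = 0.159 and scale b = 0.0242, is:

```python
import numpy as np

def log_weibull_pdf(x, a=0.159, b=0.0242):
    """f(x) = (1/b) * exp((a - x)/b - exp((a - x)/b))"""
    z = (a - x) / b
    return np.exp(z - np.exp(z)) / b

x = np.linspace(0.0, 0.4, 9)
print(np.round(log_weibull_pdf(x), 4))   # the density peaks near x = a
```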

13 Speedup
[Figures: speedup on the Xeon/Myrinet cluster at NCSA; speedup with combined space (8-way) and time parallelization]
– One time interval is 10K time steps, about 5 hours of sequential time
– The parallel overheads, excluding prediction errors, are relatively insignificant
– The above results use the last verified state to choose the prior run
– Using several verified states parallelized almost perfectly on 32 processors

14 Validation
[Figure, legend entries: spatially parallel; time parallel; mean (spatial), time parallel; experimental data]

15 Typical Differences
[Figures: RMSD (solid: between the exact and a time-parallel run; dashed: between conventional runs using different random number sequences); Force (dashed: time-parallel runs; solid: conventional runs)]
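For reference, the RMSD quantity plotted here can be computed as below; this minimal version omits any structural alignment or fitting step that may precede the comparison.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum() / len(coords_a))

a = np.random.randn(100, 3)
b = a + 0.01 * np.random.randn(100, 3)
print(rmsd(a, b))
```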

16 Conclusions and Future Work
Conclusions
– Data-driven time parallelization promises an order of magnitude improvement in speed when combined with conventional parallelization
Future work
– Better prediction
– Satisfying detailed balance

