
1 The Performance Analysis of Molecular Dynamics of RAD GTPase with the AMBER Application on a Cluster Computing Environment
Universitas Indonesia
Heru Suhartanto, Arry Yanuar, Toni Dermawan

2 Molecular Dynamics Simulation
[Figure: MD simulation of the H5N1 virus [3]; molecular dynamics simulation as one of the computer simulation techniques]

3 “MD simulation: computational tools used to describe the position, speed, and orientation of molecules at a certain time” – Ashlie Martini [4]

4 MD simulation purposes/benefits:
Studying the structure and properties of molecules
Protein folding
Drug design
Image sources: [5], [6], [7]

5 Challenges in MD simulation
O(N²) time complexity of the pairwise force evaluation
Number of timesteps (simulation length)
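To make the O(N²) cost concrete, the sketch below (an illustration only, not AMBER's force code) evaluates a Lennard-Jones-style pairwise force by looping over all atom pairs, so the work grows as N(N-1)/2 pair evaluations per timestep.

import numpy as np

def pairwise_forces(positions, epsilon=1.0, sigma=1.0):
    # Naive O(N^2) Lennard-Jones force evaluation (illustrative only).
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):              # N(N-1)/2 pair evaluations
            rij = positions[i] - positions[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            # Force on atom i from atom j, as a factor times the separation vector
            f = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

# Example: 100 atoms placed at random in a 10 x 10 x 10 box
forces = pairwise_forces(np.random.rand(100, 3) * 10.0)

Doubling N roughly quadruples the cost of this loop, which is why long simulations of large systems need cutoffs, fast summation methods, and parallel execution.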

6 Focus of the experiment
Study the effect of the MD simulation timestep count on the execution / processing time;
Study the effect of the in-vacuum and implicit-solvent (generalized Born, GB) techniques on the execution / processing time;
Study scalability: how the number of processors improves the execution / processing time;
Study how the output file grows as the number of timesteps increases.
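For the scalability item, the usual metrics are speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A small helper, shown with hypothetical timings rather than the measured results of this study:

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, num_procs):
    return speedup(t_serial, t_parallel) / num_procs

# Hypothetical example: 1000 s on 1 processor, 300 s on 4 processors
print(speedup(1000.0, 300.0))        # ~3.33x speedup
print(efficiency(1000.0, 300.0, 4))  # ~0.83 efficiency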

7 Scope of the experiments
Preparation and simulation with the AMBER packages
Performance is measured by the execution time of the MD simulation
No parameter optimization for the MD simulation

8 Molecular Dynamics basic process [4]

9 Flow of data in AMBER [8]

10 Flows in AMBER [8]
Preparatory programs
LEaP is the primary program used to create a new system in Amber or to modify existing systems. It combines the functionality of prep, link, edit, and parm from earlier versions.
ANTECHAMBER is the main program of the Antechamber suite. If your system contains more than just standard nucleic acids or proteins, it can help you prepare the input for LEaP.
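As a concrete illustration of the preparation step, the sketch below writes a minimal LEaP input for a protein PDB file and runs tleap from Python. The force field (ff99SB) and the file names are assumptions made for the sketch, not the parameters used in this study, and AMBER must already be installed.

import subprocess

# Minimal LEaP script: load a force field, read the PDB file, and write the
# topology (prmtop) and coordinate (inpcrd) files. Names are illustrative.
leap_script = """\
source leaprc.ff99SB
mol = loadPdb "rad_gtpase.pdb"
saveAmberParm mol rad_gtpase.prmtop rad_gtpase.inpcrd
quit
"""

with open("leap.in", "w") as f:
    f.write(leap_script)

subprocess.run(["tleap", "-f", "leap.in"], check=True)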

11 Flows in AMBER [8]
Simulation
SANDER is the basic energy-minimization and molecular dynamics program. It relaxes the structure by iteratively moving the atoms down the energy gradient until a sufficiently low average gradient is obtained.
PMEMD is a version of sander that is optimized for speed and parallel scaling. The name stands for "Particle Mesh Ewald Molecular Dynamics," but the code can now also carry out generalized Born simulations.
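A sketch of how such a run might be launched on several processors. The input values (a 2 ps in-vacuum run), file names, and the mpirun launcher are assumptions for illustration, not the settings used in this study; switching igb=0 to igb=1 would select the generalized Born implicit-solvent model instead of vacuum.

import subprocess

# Minimal MD input: 1000 steps of 2 fs (2 ps) in vacuum, no periodic box (ntb=0),
# energies printed every 100 steps and coordinates saved every 500 steps.
mdin = """\
2 ps in-vacuum MD (illustrative)
 &cntrl
  imin=0, nstlim=1000, dt=0.002,
  ntb=0, igb=0, cut=12.0,
  ntpr=100, ntwx=500,
 /
"""
with open("md.in", "w") as f:
    f.write(mdin)

# Parallel run on 4 processors with the MPI build of sander.
subprocess.run(
    ["mpirun", "-np", "4", "sander.MPI", "-O",
     "-i", "md.in", "-o", "md.out",
     "-p", "rad_gtpase.prmtop", "-c", "rad_gtpase.inpcrd",
     "-r", "md.rst", "-x", "md.mdcrd"],
    check=True,
)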

12 Flows in AMBER [8]
Analysis
PTRAJ is a general-purpose utility for analyzing and processing trajectory or coordinate files created by MD simulations.
MM-PBSA is a script that automates energy analysis of snapshots from a molecular dynamics simulation using ideas generated from continuum solvent models.
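As an example of the kind of per-frame analysis that PTRAJ automates, the snippet below computes the RMSD of each trajectory frame against a reference structure from plain coordinate arrays; it is a generic, simplified illustration (no fitting or alignment), not PTRAJ itself, and the trajectory is synthetic.

import numpy as np

def rmsd(frame, reference):
    # Root-mean-square deviation between two (N, 3) coordinate arrays.
    diff = frame - reference
    return np.sqrt((diff * diff).sum() / len(frame))

# Synthetic trajectory: 10 frames of 2529 atoms (the RAD GTPase atom count).
reference = np.random.rand(2529, 3)
trajectory = reference + 0.1 * np.random.randn(10, 2529, 3)
for t, frame in enumerate(trajectory):
    print(t, rmsd(frame, reference))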

13 The RAD GTPase Protein
RAD (Ras Associated with Diabetes) is a member of the RGK family of small GTPases found in the human body and associated with type 2 diabetes. The crystal form of RAD GTPase has a resolution of 1.8 angstrom and is stored as a Protein Data Bank (PDB) file.
Ref: A. Yanuar, S. Sakurai, K. Kitano, and T. Hakoshima, "Crystal structure of human Rad GTPase of the RGK-family," Genes to Cells, vol. 11, no. 8, August 2006.

14 RAD GTPase Protein
Reading from the PDB file with NOC [figure]
The leap.log file reports the number of atoms: 2529

15 Parallel approaches in MD simulation
Algorithms for the force function:
data replication
data distribution
Data decomposition:
particle decomposition
force decomposition
domain decomposition
interaction decomposition
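As a rough illustration of the simplest of these strategies, particle decomposition statically assigns a contiguous block of atoms to each processor, which then computes the forces acting on its own atoms. A toy sketch, not AMBER's actual scheme:

def particle_decomposition(num_atoms, num_procs):
    # Assign contiguous blocks of atom indices to processors (toy example).
    block = (num_atoms + num_procs - 1) // num_procs     # ceiling division
    return [range(p * block, min((p + 1) * block, num_atoms))
            for p in range(num_procs)]

# 2529 atoms (the RAD GTPase system) divided over 4 processors
for p, atoms in enumerate(particle_decomposition(2529, 4)):
    print(f"processor {p}: atoms {atoms.start}..{atoms.stop - 1}")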

16 Parallel implementation in AMBER
Atoms are distributed among the available processors (Np)
Each execution node / processor computes its part of the force function
Positions are updated, partial forces are computed, etc.
Results are written to the output files
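A minimal sketch of this pattern using mpi4py (an assumption chosen for illustration; AMBER's own parallel code is written in Fortran and is considerably more sophisticated). Each rank computes the forces for its own block of atoms, the partial forces are summed across ranks, and only one rank writes the output file.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

num_atoms, dt, nsteps = 2529, 0.002, 10                # illustrative values only
positions = np.empty((num_atoms, 3))
if rank == 0:
    positions[:] = np.random.rand(num_atoms, 3) * 50.0
comm.Bcast(positions, root=0)                          # all ranks share the coordinates
velocities = np.zeros((num_atoms, 3))

# Each rank is responsible for a contiguous block of atoms.
lo = rank * num_atoms // nprocs
hi = (rank + 1) * num_atoms // nprocs

def my_partial_forces(pos, lo, hi):
    # Placeholder: a real code would evaluate the force field for atoms lo..hi-1.
    return np.zeros((hi - lo, 3))

for step in range(nsteps):
    partial = np.zeros((num_atoms, 3))
    partial[lo:hi] = my_partial_forces(positions, lo, hi)
    forces = np.zeros_like(partial)
    comm.Allreduce(partial, forces, op=MPI.SUM)        # combine partial forces on every rank
    velocities += forces * dt                          # toy integrator with unit masses
    positions += velocities * dt

if rank == 0:
    np.savetxt("final_positions.txt", positions)       # one rank writes the output

Such a script would be launched the same way as the AMBER binaries, e.g. mpirun -np 4 python md_sketch.py.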

17 Hastinapura Cluster
Node name:    Head Node / Worker Nodes / Storage Node
Architecture: Sun Fire X
Processor:    AMD Opteron 2.2 GHz (Dual Core); Dual Intel Xeon 2.8 GHz (HT)
RAM:          2 GB / 1 GB / 2 GB
Hard disk:    80 GB / 3 x 320 GB

18 Software on the Hastinapura Cluster
1. Compilers: gcc (3.3.5); g++ (3.3.5, GCC); g77 (3.3.5, GNU Fortran); g95 (0.91, GCC 4.0.3)
2. MPI application: MPICH (1.2.7p1, release date: 2005/11/04)
3. Operating system: Debian/Linux (3.1 "Sarge")
4. Resource management: Globus Toolkit [2] (4.0.3)
5. Job scheduler: Sun Grid Engine (SGE) (6.1u2)

19 Experiment results

20 Execution time with In Vacuum
[Table: execution time for each simulation time (ps) and number of processors; the numeric values were not preserved in the transcript]

21 Execution time for In Vacuum

22 Execution time for Implicit Solvent with GB Model
[Table: execution time for each simulation time (ps) and number of processors; the numeric values were not preserved in the transcript]

23 Execution time for Implicit Solvent with GB Model

24 Execution time comparison between In Vacuum and Implicit Solvent with GB model

25 The effect of the number of processors on the MD simulation with In Vacuum

26 The effect of the number of processors on the MD simulation with Implicit Solvent with GB Model

27 Output file sizes as the simulation time grows – in vacuum
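The growth comes mostly from the coordinate trajectory: every saved frame adds a roughly fixed number of bytes, so the file size grows linearly with the number of frames written. A rough estimator, assuming the plain ASCII mdcrd format (about 8 characters per coordinate, 10 coordinates per line); the step count and write interval below are illustrative, not the values of this study:

def mdcrd_size_bytes(num_atoms, nsteps, ntwx):
    # Rough size of an ASCII AMBER trajectory: 8 characters per coordinate,
    # 10 coordinates per line, one frame written every ntwx steps.
    values_per_frame = 3 * num_atoms
    lines_per_frame = -(-values_per_frame // 10)              # ceiling division
    bytes_per_frame = values_per_frame * 8 + lines_per_frame  # one newline per line
    frames = nsteps // ntwx
    return frames * bytes_per_frame

# 2529 atoms, 50000 steps of 2 fs (100 ps), one frame every 500 steps -> about 6 MB
print(mdcrd_size_bytes(2529, 50000, 500) / 1e6, "MB")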

28 Output file sizes as the simulation time grows – Implicit solvent with GB model

29 Problems encountered
Instabilities in the electrical supply.
Some nodes were not functioning during one or two experiments.
On another cluster, where the head node also serves as a worker node, some nodes went down during some experiments.

30 References
[1]
[4] A. Martini, "Lecture 2: Potential Energy Functions", 2010. [Online]. [Accessed 18 June 2010].
[5]
[6] in_folding(1).jpg
[7] 56/ /ncontent
[8] D. A. Case et al., "AMBER 10", University of California, San Francisco, 2008. [Online]. Available at: book/amber-10-users-manual/ [Accessed 11 June 2010].