Porting, Benchmarking, and Optimizing Computational Material Science Packages on TeraGrid Resources
Dodi Heryadi, Advanced Application Support Group
National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

Top 15 applications at NCSA by number of users (data collected from Oct 1, 2006 to Dec 31, 2006):
Gaussian, NAMD, CHARMM, VASP, ABAQUS, AMBER, ENZO, FLUENT, MRBAYES, GROMACS, MOLPRO, CACTUS, GAMESS, FLASH, ANSYS

Software packages commonly used in bio/molecular science and engineering:
Gaussian, GAMESS, NWChem, Molpro, ADF, Amber, Gromacs, CHARMM, NAMD, DL_POLY, LAMMPS, CPMD, VASP, Wien2k, SIESTA, Abinit, CASTEP, DMol3

Some of the packages used in the materials science and engineering community:
VASP, CPMD, Wien2k, SIESTA, Abinit, CASTEP, DMol3

Porting, benchmarking, and optimizing computational material science packages on TeraGrid resources has two goals:
- To assist users in selecting the best resources when applying for allocations
- To assist users in increasing their productivity using TeraGrid resources

First package: VASP (Vienna Ab initio Simulation Package)
- Performs ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set
- Large user base (over 20 research groups/PIs at NCSA alone)
- Can scale to 1,024 processors
- Restriction: licensed to individual research groups
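For context, a standard textbook statement of the plane-wave expansion (added here; not on the original slide): each orbital is written as a sum of plane waves up to a kinetic-energy cutoff, so the basis size, and hence the cost, is controlled by the single parameter E_cut:

\psi_{n\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{G}} c_{n,\mathbf{k}+\mathbf{G}}\, e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}},
\qquad \frac{\hbar^{2}\,\lvert\mathbf{k}+\mathbf{G}\rvert^{2}}{2m_{e}} \le E_{\mathrm{cut}}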

(Old) VASP benchmarks on NCSA platforms
Test case: pure Ni, 3x3x3 supercell, 2x2x2 k-points, 10 electrons, GGA pseudopotential. Wall-clock time in seconds (… marks values lost in the transcript):

CPUs   IBM p690 (copper)   Xeon Cluster (tungsten)   IA-64 Linux Cluster (mercury)
1      7,840               6,168                     3,939
2      4,026               3,936                     2,496
4      1,951               2,077                     1,588
8      1,039               …                         …
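To make the scaling behavior explicit, here is a minimal Python sketch (assuming the table reconstruction above; the two missing 8-CPU entries are omitted) that converts the wall-clock times into strong-scaling speedup and parallel efficiency:

```python
# Minimal sketch: strong-scaling speedup and parallel efficiency from the
# (reconstructed) VASP wall-clock times above, in seconds.
times = {
    "IBM p690 (copper)": {1: 7840, 2: 4026, 4: 1951, 8: 1039},
    "Xeon Cluster (tungsten)": {1: 6168, 2: 3936, 4: 2077},
    "IA-64 Linux Cluster (mercury)": {1: 3939, 2: 2496, 4: 1588},
}

for system, t in times.items():
    base = t[1]  # serial (1-CPU) baseline
    for cpus in sorted(t):
        speedup = base / t[cpus]      # T(1) / T(p)
        efficiency = speedup / cpus   # speedup per CPU
        print(f"{system}: {cpus} CPUs, speedup {speedup:.2f}, "
              f"efficiency {efficiency:.0%}")
```

On copper, for example, the 8-CPU run is about 7.5x faster than the serial run, i.e., roughly 94% parallel efficiency.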

Porting and benchmarking VASP on Abe and Ranger:
- Compilers: Intel 10.1
- BLAS and LAPACK libraries: Intel MKL
- MPI: MVAPICH
- Internal FFTW libraries

Preliminary results
Mn12-acetate, wall-clock time in seconds (… marks digits lost in the transcript):

# of cores   Abe (ppn=8)                Ranger (ppn=16)
16           35,836                     19,…
…            …,547                      11,…
…            …,653                      7,…
…            …,688                      7,…
…            (job still in the queue)   7,997

Work to do
- Port and benchmark VASP (and other widely used computational material science packages) on other TeraGrid resources; next: Kraken
- Optimize VASP on Abe, Ranger, Kraken, and other TeraGrid resources:
  - Performance analysis/tools to identify performance bottlenecks
  - Selecting appropriate compiler options for optimal performance
  - Using optimized math libraries (e.g., Intel FFTW, ScaLAPACK, AMD math libraries, etc.)
- Lonnie Crosby (NICS) and Yang Wang (PSC) will be involved in this effort

Preliminary source-level profiling of VASP on Abe with PerfSuite

Module summary (Total % is cumulative; sample counts and self percentages did not survive the transcript):

Samples   Self %   Total %   Module
…         …        79.42%    /cfs/scratch/users/dodi/vaspbench/perf/vasp
…         …        99.25%    /usr/local/mvapich p2patched-intel-ofed-1.2/lib/libmpich.so
…         …        99.85%    /usr/local/lib64/tls/libpthread.so
…         …        100.00%   /usr/local/lib64/tls/libc.so

Preliminary source-level profiling of VASP on Abe with PerfSuite

File summary:

Samples   Self %   Total %   File
…         …        58.69%    ??
…         …        75.87%    /u/ncsa/dodi/vaspnew/vasp.4.6/fft3dlib.f
…         …        81.12%    /u/ncsa/dodi/vaspnew/vasp.4.6/rmm-diis.f
…         …        85.41%    /u/ncsa/dodi/vaspnew/vasp.4.6/nonlr.f
…         …        88.51%    /u/ncsa/dodi/vaspnew/vasp.4.lib/dlexlib.f
…         …        91.56%    /u/ncsa/dodi/vaspnew/vasp.4.6/hamil.f
…         …        93.86%    /u/ncsa/dodi/vaspnew/vasp.4.6/fftmpi.f
…         …        95.90%    /u/ncsa/dodi/vaspnew/vasp.4.6/fftmpi_map.f
…         …        96.65%    /u/ncsa/dodi/vaspnew/vasp.4.6/dfast.f
…         …        97.30%    /u/ncsa/dodi/vaspnew/vasp.4.6/wave.f
…         …        97.90%    /u/ncsa/dodi/vaspnew/vasp.4.6/mpi.f
…         …        98.50%    /u/ncsa/dodi/vaspnew/vasp.4.6/subrot.f
…         …        98.85%    /u/ncsa/dodi/vaspnew/vasp.4.6/us.f90

Function summary:

Samples   Self %   Total %   Function
…         …        10.39%    fpassm
…         …        20.08%    M_LOOP
…         …        26.77%    ipassm
…         …        33.22%    __intel_new_memcpy
…         …        38.46%    eddrmm
…         …        42.96%    MPIDI_CH3I_MRAILI_Get_next_vbuf
…         …        47.30%    MPIDI_CH3I_SMP_pull_header
…         …        51.50%    MPIDI_CH3I_SMP_read_progress
…         …        55.00%    mkl_lapack_dlaebz
…         …        58.09%    length
…         …        60.14%    raccmu
…         …        62.19%    fftwav
…         …        64.04%    hamiltmu
…         …        65.88%    mkl_blas_mc_zhemv_nb
…         …        67.68%    mkl_blas_mc_zgemm_copyac
…         …        69.43%    MPIDI_CH3I_SMP_write_progress
…         …        70.98%    rpromu
…         …        72.48%    map_forward
…         …        73.88%    AY16_Loop_M
…         …        75.07%    A16X8_N4_Loop_M16
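Because PerfSuite's Total % column is cumulative, each entry's self percentage can be recovered by differencing successive rows. A minimal Python sketch, using the module-summary values above (the final 100.00% is an assumption, since cumulative totals must end at 100%):

```python
# Minimal sketch: recover per-entry "Self %" from PerfSuite's cumulative
# "Total %" column, the only column that survived in the tables above.
module_totals = [
    ("vasp", 79.42),
    ("libmpich.so", 99.25),
    ("libpthread.so", 99.85),
    ("libc.so", 100.00),  # assumed: cumulative totals end at 100%
]

def self_percentages(cumulative):
    """Difference successive cumulative percentages to get self %."""
    prev, out = 0.0, []
    for name, total in cumulative:
        out.append((name, total - prev))
        prev = total
    return out

for name, pct in self_percentages(module_totals):
    print(f"{name:15s} self {pct:6.2f}%")
```

This puts libmpich.so at 99.25 - 79.42 = 19.83% of samples, consistent with the roughly 20% of time spent in MPI cited on the next slide.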

Next: more detailed performance analysis
- mpiP (http://mpip.sourceforge.net/), a lightweight, scalable MPI profiler: about 20% of the time is spent in MPI
- TAU (Tuning and Analysis Utilities)

Acknowledgements
- Rick Kufrin and Rui Liu, NCSA
- Dave McWilliams, NICS