HackLatt 2008: MILC Code Basics. Carleton DeTar. First presented at Edinburgh EPCC HackLatt 2008. Updated 2013.

Presentation transcript:


MILC Code Capabilities

Molecular dynamics evolution
–Staggered fermion actions (Asqtad, Fat7, HISQ, etc.)
–Clover fermion action
–Pure gauge
–Schroedinger functional

Hadron spectroscopy
–Staggered mesons and baryons
–Clover mesons and baryons
–Mixed staggered/clover mesons
–Static/light spectroscopy
–Quarkonium spectroscopy (S and P wave)

Current matrix elements
–Leptonic decay (fpi, fB, fD)
–Semileptonic decay (heavy-light)

Miscellaneous
–Topological charge
–Dirac matrix eigenvectors and eigenvalues
–Nonperturbative renormalization of currents

Two views of the code. Powerful and capable.

[Image: Matthias Grünewald, The Temptation of St Anthony (1515)] Or a path to confusion and misery.

Supported File Formats

Gauge configuration file formats
–MILC, SciDAC (ILDG), NERSC, Fermilab
Dirac propagator file formats
–USQCD, Fermilab
Staggered propagator file formats
–USQCD, MILC, Fermilab

Supported SciDAC C-Coded Packages

QIO (I/O)
QMP (message passing)
QLA (linear algebra, single processor)
QDP/C (linear algebra, data parallel)
QOPQDP ("Level 3" optimized)
(More in the next session)

Precision
–Global single or double precision
–Mixed precision in some applications

Portability
–Any scalar machine
–Any MPP machine with MPI
–GPU, Intel Xeon Phi (somewhat)

MILC Code Organization

Application directories
–With compilation targets
Library directory
–Linear algebra routines
Shared procedures ("generic") directories
–Shared across applications

MILC Code Organization: application directory examples

cd ks_imp_dyn (application)
  make su3_rmd (target: Asqtad R algorithm)
  make su3_spectrum (another target: staggered spectroscopy)
cd ks_imp_rhmc (application)
  make su3_rhmc (Asqtad RHMC algorithm)
  make su3_rhmc_hisq (target: HISQ algorithm)
cd clover_invert2 (application)
  make su3_clov (clover spectroscopy, etc.)

MILC Code Organization: shared procedures directory examples

–generic (common to all applications)
–generic_ks (common to staggered fermion applications)
–generic_wilson (common to clover and Wilson fermion applications)

Building the MILC Code
–Download source
–Unpack
–Configure
–Build
–Check

Building the MILC Code

Unpack
–tar -xvzf milc_qcd…a7.tar.gz

Building the MILC Code

Configure (crude and old-fashioned!)
–Copy the default Makefile to the application directory:
  cd ks_imp_rhmc
  cp ../Makefile .
–Edit (example on the next slide):
  Makefile
  ../libraries/Make_vanilla
  ../include/config.h

Editing the Makefile: examples

#
# 2. Architecture
# Compiling for a parallel machine? Uncomment this line.
#MPP = true
#
# 3. Precision
# 1 = single precision; 2 = double
PRECISION = 1
#
# 4. Compiler
# Choices include mpicc cc gcc pgcc g++
ifeq ($(strip ${MPP}),true)
  CC = /usr/local/mvapich/bin/mpicc
else
  CC = gcc
endif

Optimization Possibilities

–Build the plain MILC version
–Build with SciDAC optimized QOP support
  Requires installing the SciDAC packages (see the second tutorial and the code home page)
–Build with GPU QUDA support
  Requires installing the QUDA and QUDA-MILC packages (see the code home page)
–Build with OpenMP directives on loops
  Very rudimentary and spotty; you may need to edit some code by hand to insert OMP macros

Optimization choices controlled in the Makefile: examples

#
# 5. Compiler optimization level
OPT = -O3
#
# 6. Other compiler optimizations (depending on compiler)
OCFLAGS =
# Compiling with OpenMP?
OMP = true
#
# 10. SciDAC package options
WANTQOP = true         # turns on all optimized QOPQDP modules
#
# 14. GPU/QUDA options
WANTQUDA = true        # turns on possible QUDA selections
WANT_FN_CG_GPU = true  # turns on the QUDA CG inverter
WANT_FL_GPU = true     # QUDA link fattening
WANT_FF_GPU = true     # QUDA fermion force term
WANT_GF_GPU = true     # QUDA gauge force
ifeq ($(strip ${MPP}),true)
  CC = /usr/local/mvapich/bin/mpicc
else
  CC = gcc
endif

Building and Checking the Code

–Build (for example): make su3_rmd
–Check single-precision su3_rmd: make check "PROJS=su3_rmd" "PRECLIST=1"
–Check all targets in this directory: make check

Running the Code

su3_rhmc < inputfile > outputfile
su3_rhmc inputfile > outputfile
su3_rhmc inputfile outputfile

Sample parameter input (su3_rmd)

prompt 0
nflavors1 2
nflavors2 1
nx 16
ny 16
nz 16
nt 64
iseed …
warms 0
trajecs 2
traj_between_meas 1
beta 6.76
mass …
mass2 0.5
u0 …
microcanonical_time_step 0.02
steps_per_trajectory 4
max_cg_iterations 300
max_cg_restarts 5
error_per_site …
error_for_propagator …
npbp_reps 1
prec_pbp 1
reload_serial ../../binary_samples/lat.sample.l4444
save_serial_scidac lat.test.scidac

(Callouts on the slide label the lattice-dimension, molecular-dynamics, and measurement parameter groups.)

Summary

–The MILC code is versatile and portable
–I have given a brief overview of the code structure
–I have touched on the procedures for building and running the code

Tutorial 1 Goals

–Run precompiled code
–Modify the input parameters
–Build a different target
–Modify the Makefile and build
–Download and unpack the code (if time permits)