Simulating Quarks and Gluons with Quantum Chromodynamics
February 10, 2005. CS635 Parallel Computer Architecture. Mahantesh Halappanavar.

Impact on Science
LQCD will impact science at all scales. Major goals:
- Verify the Standard Model and discover its limits.
- Determine the properties of interacting matter under extreme conditions.
- Understand the internal structure of nucleons and other strongly interacting particles.
Lattice QCD simulations are essential to research in all of these areas. They are possible only through computation, and their results are needed urgently to support experimental work such as the Relativistic Heavy Ion Collider (RHIC) at BNL.

Scientific Opportunities
With sustained computational power of 100 Tflop/s (currently ~1 Tflop/s) and improved lattice formulations, major advances can be made in our understanding of the internal structure of nucleons. Pflop/s resources would enable study of the gluon structure of the nucleon in addition to its quark structure. These calculations would significantly deepen our understanding of the Standard Model and therefore of the basic laws of physics.

Research Issues
- QCD is formulated in the four-dimensional space-time continuum, and lattice simulations involve hundreds of millions of variables.
- Simulations must be carried out at small lattice spacings, and the computational cost grows approximately as the seventh power of the inverse of the lattice spacing.
- Up and down quarks have very small masses and therefore cannot yet be simulated accurately; doing so requires Pflop/s computational power.
- Inverting the Dirac operator accounts for 70-90% of the computation; it is a large sparse-matrix problem attacked with iterative techniques (see the sketch below).
- Standard multilevel solver techniques for accelerating the inversion cannot be used because of the random nature of the nonzero elements of the Dirac operator.
- New algorithms are needed for QCD at large densities and for time-dependent problems.
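To make the "sparse matrix & iterative techniques" point concrete, here is a minimal sketch of a conjugate-gradient inversion of a Dirac-like linear system D x = b via its normal equations. The matrix below is a random sparse stand-in for illustration only, not a real Wilson-Dirac operator, and the sizes are toy values.

```python
# Minimal sketch (not the project's actual solver): iterative inversion of a
# sparse, Dirac-like system D x = b using conjugate gradient on the normal
# equations (D^H D) x = D^H b. D here is a random sparse stand-in.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 4096                                     # toy number of lattice degrees of freedom

# Random sparse complex matrix playing the role of the Dirac operator D.
D = sp.random(n, n, density=0.002, random_state=0, format="csr")
D = D + 1j * sp.random(n, n, density=0.002, random_state=1, format="csr")
D = D + sp.identity(n, format="csr")         # keep the toy system well conditioned

b = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # source vector

# CG needs a Hermitian positive-definite operator, so apply D^H D matrix-free.
normal_op = spla.LinearOperator(
    (n, n),
    matvec=lambda v: D.conj().T @ (D @ v),
    dtype=complex,
)
x, info = spla.cg(normal_op, D.conj().T @ b, maxiter=2000)
print("cg info =", info, " residual =", np.linalg.norm(D @ x - b))
```

In production lattice codes the operator is never stored as a generic sparse matrix; its regular nearest-neighbor stencil structure is exploited directly, which is exactly why the data-movement issues discussed on the next slide dominate.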

Resources Required
- QCD needs a special type of machine: commercial cache-based machines sustain roughly 10-15% of peak on these calculations, while specially designed machines sustain 35-50%.
- The basic operation is the multiplication of a three-component vector of complex numbers by a 3 x 3 matrix of complex numbers (see the sketch below).
- The critical issue is the relationship between data movement and floating-point operations; regular architectures prove insufficient.
- Special-purpose machines are deployed at FNAL and JLab.
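The sketch below illustrates the basic kernel named above: a 3 x 3 complex (gauge-link) matrix times a 3-component complex (color) vector at every lattice site, plus a back-of-the-envelope arithmetic-intensity estimate showing why data movement, not peak flops, is the bottleneck. The array names, shapes, and lattice size are illustrative assumptions, not taken from any particular LQCD code.

```python
# Minimal sketch of the basic LQCD kernel: per-site 3x3 complex matrix times
# 3-component complex vector. Shapes and names are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
num_sites = 16**3 * 32                       # toy lattice volume (16^3 x 32)

# One 3x3 complex matrix (gauge link) and one 3-component complex vector per site.
links = rng.standard_normal((num_sites, 3, 3)) + 1j * rng.standard_normal((num_sites, 3, 3))
vecs = rng.standard_normal((num_sites, 3)) + 1j * rng.standard_normal((num_sites, 3))

# Site-by-site matrix-vector product: result[s] = links[s] @ vecs[s].
result = np.einsum("sij,sj->si", links, vecs)

# Arithmetic intensity: 9 complex multiplies + 6 complex adds = 66 real flops
# per site, while touching 9 + 3 + 3 = 15 complex numbers (~240 bytes in double
# precision) -- well under 1 flop per byte, so bandwidth dominates.
flops_per_site = 66
bytes_per_site = 15 * 16
print("flops per byte =", flops_per_site / bytes_per_site)
```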

"More Science Per Dollar"
- First production three-dimensional mesh computer system in the world: a prototype 256-node 3-D Gigabit Ethernet mesh Linux cluster arranged in a 4 x 8 x 8 torus, using Intel PRO/1000 MT Dual Port Server Adapters as the interconnect.
- Each Intel Xeon based node is wired point-to-point to its six adjacent nodes using three Intel cards, eliminating the need for a switch (see the neighbor-mapping sketch below).
- Roughly 0.7 Tflop/s sustained performance and data rates approaching 500 MB/s per node have been achieved.
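The sketch below shows how a node in a 4 x 8 x 8 torus can identify the six mesh neighbors it is wired to point-to-point. The coordinate-to-rank mapping is an assumption for illustration; the actual cluster's wiring tables are not described in the slides.

```python
# Minimal sketch: six nearest neighbors of a node in a 4x8x8 torus, with
# periodic wrap-around. Rank mapping is an illustrative assumption.
DIMS = (4, 8, 8)                      # torus extent in (x, y, z)

def rank(x, y, z):
    """Linear rank of node (x, y, z), row-major over (x, y, z)."""
    return (x * DIMS[1] + y) * DIMS[2] + z

def neighbors(x, y, z):
    """Ranks of the six adjacent nodes this node is wired to."""
    result = []
    for axis in range(3):
        for step in (-1, +1):
            coord = [x, y, z]
            coord[axis] = (coord[axis] + step) % DIMS[axis]  # torus wrap-around
            result.append(rank(*coord))
    return result

# Example: node (0, 0, 0) is wired to these six ranks.
print(neighbors(0, 0, 0))
```

Because lattice QCD communication is dominated by nearest-neighbor halo exchanges, this fixed point-to-point wiring matches the application's communication pattern and removes the cost of a central switch.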

Metrics of Success
True success will be the generation of new results with accuracies sufficient to advance our current understanding of the fundamental theory: precise tests of the Standard Model and the groundwork for a theory more encompassing than the Standard Model. All of this becomes possible once the required computational resources are available.

THANKS !!