Scalable Computational Methods in Quantum Field Theory Advisors: Hemmendinger, Reich, Hiller (UMD) Jason Slaunwhite Computer Science and Physics Senior Project.

Presentation transcript:

Scalable Computational Methods in Quantum Field Theory Advisors: Hemmendinger, Reich, Hiller (UMD) Jason Slaunwhite Computer Science and Physics Senior Project

Outline
- Context / Background
- Design
- Optimization
  - Compiler
  - Data Structures
- Parallel
- Summary

Context (1)
Physical Model
- Strong Force: Yukawa Theory
- Quantum field theory: interactions = particle exchanges (gauge bosons)
Eigenvalue Problem
- Common example: rotation
- Form: Ax = λx (A: matrix, x: vector, λ: scalar)
[Figures: a particle-exchange diagram (QED picture, not QCD) and a rotation of the xy-plane about the z-axis, whose eigenvector lies along z]
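The rotation example on this slide can be checked numerically: for a rotation about the z-axis, the z-axis itself satisfies Ax = λx with λ = 1. A minimal sketch with NumPy (the angle is arbitrary):

```python
import numpy as np

# Rotation by angle theta about the z-axis. The z-axis is an eigenvector
# with eigenvalue 1: rotating about z leaves vectors along z unchanged.
theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

z = np.array([0.0, 0.0, 1.0])
print(np.allclose(A @ z, 1.0 * z))  # A z = 1 * z, so z is an eigenvector
```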

Context (2)
Formulation of the Eigenvalue Problem Ax = λx
- Discrete (Hiller)
- Basis Function Expansion (Slaunwhite): f(x) = a·G_n(x) + b·G_m(x) + …
[Figures: sketches of y = f(x) being built up from basis functions y = G_n(x) and y = G_m(x)]
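The expansion f(x) = a·G_n(x) + b·G_m(x) + … can be illustrated numerically. The sketch below is not the project's actual basis: the G_n here are Legendre polynomials standing in for whatever basis functions the calculation uses, and the coefficients are found by least squares.

```python
import numpy as np

# Illustrative only: Legendre polynomials stand in for the project's G_n.
x = np.linspace(-1.0, 1.0, 200)
f = np.exp(-x**2)                        # some smooth target function f(x)

n_basis = 6
G = np.polynomial.legendre.legvander(x, n_basis - 1)  # column n holds G_n(x)
coeffs, *_ = np.linalg.lstsq(G, f, rcond=None)

# f(x) ~ a*G_n(x) + b*G_m(x) + ... with the fitted coefficients
approx = G @ coeffs
print(np.max(np.abs(approx - f)))        # residual shrinks as n_basis grows
```

Growing `n_basis` and watching the residual fall is exactly the convergence question the next slide raises.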

Context (3)
Is BFE a good method for solving the eigenvalue problem? Is it scalable?
- Convergence of the eigenvalues with an increasing number of basis functions
- Time dependence of the computational methods' convergence

Design (1)
What does the program do?
- Input parameters
- Calculate each independent matrix element
- Solve (diagonalize the matrix): easy via libraries
The structure reflects the mathematics: Input → Calc Matrix → Solve (Diagonalize)
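The three stages above can be sketched as a small program. All names and the element formula here are illustrative stand-ins (the real elements come from the integrals on the next slide); the point is the shape: parameters in, independent element fills, then a library diagonalization.

```python
import numpy as np

def matrix_element(i, j, coupling):
    # Stand-in for the real integral-based kernel; symmetric in (i, j).
    return coupling / (1.0 + abs(i - j))

def build_matrix(n, coupling):
    H = np.empty((n, n))
    for i in range(n):              # each element is independent of the rest
        for j in range(n):
            H[i, j] = matrix_element(i, j, coupling)
    return H

# Input -> Calc Matrix -> Solve (Diagonalize)
H = build_matrix(8, coupling=0.5)
eigenvalues, eigenvectors = np.linalg.eigh(H)   # "easy via libraries"
print(eigenvalues[0])               # lowest eigenvalue of the model matrix
```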

Design (2)
Call structure: Input → Calc Matrix → Diagonalize (solve)
- Level 1: Integrate
- Level 2: Integrate
- Level 3: Kernel

Review
- Quantum field theory: model of the strong force
- Eigenvalue problem: Ax = λx (A: matrix, x: vector, λ: scalar)
- Programming work: the program calculates the matrix elements
- How did I optimize it? Can it run in parallel?

Optimization – Compiler
- Simple
- Adds compile time
- Very effective!
[Figure: run times of the unoptimized vs. optimized builds]

Optimization – Data Structures
Naive approach:
  for each row
    for each col
      compute library values (naive)
      calculate element
Storage vs. time trade-off (naive → smart):
- Precompute the library values outside of the element iteration
- Need an organized way to index the values
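The trade-off on this slide can be shown in a few lines. The function names and the "expensive" stand-in are illustrative, not the project's actual library calls: the naive version recomputes values inside the element loop, while the smart version pays storage once for an indexed table.

```python
import math

def expensive(k):
    # Stand-in for a costly library value (e.g. a special function).
    return math.gamma(k + 0.5)

def naive(n):
    # Recomputes expensive(k) inside the element iteration: O(n^2) calls.
    return [[expensive(i) * expensive(j) for j in range(n)] for i in range(n)]

def precomputed(n):
    # Storage traded for time: n calls up front, table lookups in the loop.
    table = [expensive(k) for k in range(n)]
    return [[table[i] * table[j] for j in range(n)] for i in range(n)]

assert naive(6) == precomputed(6)   # same matrix, far fewer library calls
```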

Optimization Results
Slopes (key: red = naive, yellow = data structure, green = data structure + compiler):
- red/yellow = 2.56
- yellow/green = 2.28
- red/green = 5.84 (consistent: 2.56 × 2.28 ≈ 5.84)

Parallel Design
- The matrix elements are independent
- Split the computation across many processors
- Ax = λx
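Because every element is independent, rows of the matrix can be farmed out to separate workers with no communication between them. The project uses OpenMP (shared-memory threads); the sketch below is a minimal Python analogue of the same idea, with an illustrative element formula.

```python
from concurrent.futures import ThreadPoolExecutor

def element(i, j):
    # Stand-in for the real matrix-element calculation.
    return 1.0 / (1 + i + j)

def compute_row(args):
    i, n = args
    return [element(i, j) for j in range(n)]

def parallel_matrix(n, workers=4):
    # Each row is an independent task, so they can run concurrently,
    # mirroring an OpenMP parallel loop over rows.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_row, [(i, n) for i in range(n)]))

M = parallel_matrix(8)
serial = [[element(i, j) for j in range(8)] for i in range(8)]
assert M == serial      # identical matrix, computed in independent chunks
```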

Work in Progress – Parallelization
- OpenMP libraries
- IBM SP at MSI: slower processors, but more of them and more memory
- From the system description: "The IBM SP consists of 96 shared-memory nodes with a total of 376 processors and 616 GB of memory"

Summary
- The program: solves Ax = λx
- Optimization: compiler flags and data structures
- Parallel? Work in progress