SciDAC Software Infrastructure for Lattice Gauge Theory. DOE Grant ’01 -- ’03 (-- ’05?). All Hands Meeting: FNAL, Feb. 21, 2003. Richard C. Brower. Quick Overview.


Goal: Create a unified software environment that will enable the US lattice community to achieve very high efficiency on diverse multi-terascale hardware.

TASKS / LIBRARIES:
I. QCD Data Parallel API → QDP
II. Optimize Message Passing → QMP
III. Optimize QCD Linear Algebra → QLA
IV. I/O, Data Files and Data Grid → QIO
V. Opt. Physics Codes → CPS/MILC/LHPC/etc.
VI. Execution Environment → unify BNL/FNAL/JLab
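For orientation, the fragment below is a minimal sketch of how an application is expected to sit on top of these libraries, written against the QDP++ (C++) interface that Edwards describes later. The calls shown (QDP_initialize, Layout::setLattSize, Layout::create, QDP_finalize) are taken from that interface as currently understood; exact names and signatures may differ between releases.

    // Minimal start-up/shut-down sketch against the QDP++ interface
    // (illustrative; exact calls may differ across releases).
    #include "qdp.h"

    using namespace QDP;

    int main(int argc, char **argv)
    {
      // Brings up the data-parallel layer and, beneath it, QMP message passing.
      QDP_initialize(&argc, &argv);

      // Declare an 8^4 test lattice and distribute it over the machine grid.
      multi1d<int> nrow(Nd);            // Nd = number of space-time dimensions
      for (int i = 0; i < Nd; ++i)
        nrow[i] = 8;
      Layout::setLattSize(nrow);
      Layout::create();

      // ... lattice-wide computation goes here (see the Level 2 example below) ...

      QDP_finalize();                   // shuts down QDP and the messaging layer
      return 0;
    }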

Participants in Software Project (partial list) * Software Coordinating Committee

QCD-API Level Structure

Level 3: Dirac operators, CG routines, etc. (optimized plug-ins for critical sections)

Level 2 (QDP_XXX): Data Parallel API -- QCD lattice-wide operations with overlapping algebra and messaging, e.g. A = SHIFT(B, mu) * C; global sums, etc.; lattice-wide QCD types (gauge matrix, fermion vector, ...); I/O and data objects (runtime system / execution environment)

Level 1 (QLA_XXX): Linear Algebra API -- SU(3), gamma algebra, etc.
Level 1 (QMP_XXX): Message Passing API -- maps the QCD lattice onto the network
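To make the Level 2 line above concrete, here is the same lattice-wide operation written as a sketch in QDP++ notation. The types and functions used (LatticeColorMatrix, gaussian, shift, norm2) come from the QDP++ interface covered in Edwards' talk; the FORWARD shift convention is an assumption for illustration.

    // Level 2 data-parallel operation in QDP++ form (illustrative sketch).
    LatticeColorMatrix A, B, C;   // one SU(3)-valued matrix per lattice site

    gaussian(B);                  // fill with random test data
    gaussian(C);

    int mu = 0;                   // shift direction
    // Lattice-wide: A(x) = B(x + mu) * C(x); the nearest-neighbor
    // communication implied by the shift is handled by QMP underneath.
    A = shift(B, FORWARD, mu) * C;

    // Global reduction over all sites (and all nodes): sum over x of |A(x)|^2.
    Double s = norm2(A);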

Overview of Talks to Follow

DeTar & Osborn
– Level 1 Message Passing: QMP
– Level 1 Linear Algebra: QLA
– Level 2 Data Parallel Interface in C: QDP

Mawhinney
– Performance of C over QMP
– Level 3 inverters

Edwards
– Level 2 Data Parallel Interface in C++: QDP++

Brower
– Near future: I/O and data handling software (QIO)
– Schedule, milestones, and tests in ’03 and beyond