© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009. ECE 498AL Programming Massively Parallel Processors, University of Illinois, Urbana-Champaign.

Presentation transcript:

Slide 1: ECE 498AL Programming Massively Parallel Processors
Lecture 14: Final Project Kickoff

Slide 2: Objective
Building up your ability to translate parallel computing power into science and engineering breakthroughs
–Identify applications whose computing structures are suitable for massively parallel computing
–These applications can be revolutionized by 100X more computing power
–You have access to expertise needed to tackle these applications
Develop algorithm patterns that can result in both better efficiency and better HW utilization
–To share with the community of application developers

Slide 3: Future Science and Engineering Breakthroughs Hinge on Computing
Computational modeling, computational chemistry, computational medicine, computational physics, computational biology, computational finance, computational geoscience, image processing

Slide 4: A Major Shift of Paradigm
In the 20th century, we were able to understand, design, and manufacture what we can measure
–Physical instruments and computing systems allowed us to see farther, capture more, communicate better, understand natural processes, and control artificial processes
In the 21st century, we are able to understand, design, and create what we can compute
–Computational models are allowing us to see even farther, go back and forth in time, relate better, test hypotheses that cannot be verified any other way, and create safe artificial processes

Slide 5: Examples of Paradigm Shift (20th century → 21st century)
–Small mask patterns and short light waves → computational optical proximity correction
–Electron microscopy and crystallography with computational image processing → computational microscope with initial conditions from crystallography
–Anatomic imaging with computational image processing → metabolic imaging that sees disease before visible anatomic change
–Teleconference → tele-immersion

Slide 6: Faster is Not “Just Faster”
2-3X faster is “just faster”
–Do a little more, wait a little less
–Doesn’t change how you work
5-10X faster is “significant”
–Worth upgrading
–Worth re-writing (parts of) the application
100X+ faster is “fundamentally different”
–Worth considering a new platform
–Worth re-architecting the application
–Makes new applications possible
–Drives “time to discovery” and creates fundamental changes in science

Slide 7: How Much Computing Power is Enough?
Each jump in computing power motivates new ways of computing
–Many apps have approximations or omissions that arose from limitations in computing power
–Every 100X jump in performance allows app developers to innovate
–Examples: graphics, medical imaging, physics simulation, etc.
Application developers do not take us seriously until they see real results.

Slide 8: Need For Speed Seminars
Science, engineering, and consumer application domains
Slides and video of each talk

Slide 9: Why Didn’t This Happen Earlier?
Computational experimentation is just reaching critical mass
–Simulate large enough systems
–Simulate long enough system time
–Simulate enough detail
Computational instrumentation is also just reaching critical mass
–Reaching high enough accuracy
–Covering enough observations

Slide 10: A Great Opportunity for Many
New massively parallel computing is enabling
–Drastic reduction in “time to discovery”
–First-principles-based simulation at meaningful scale
–A new, third paradigm for research: computational experimentation
The “democratization” of the power to discover
–$2,000/teraflop (SPFP) in personal computers today
–$5,000,000/petaflop (DPFP) in clusters in two years
–HW cost will no longer be the main barrier for big science
This is a once-in-a-career opportunity for many!

Slide 11: The Pyramid of Parallel Programming
–Thousand-node systems with MPI-style programming, >100 TFLOPS, $M cost, allocated machine time (programmers number in the hundreds)
–Hundred-core systems with CUDA-style programming, 1-5 TFLOPS, $K cost, machines widely available (programmers number in the tens of thousands)
–Hundred-core systems with MATLAB-style programming, GFLOPS, $K cost, machines widely available (programmers number in the millions)

Slide 12: VMD/NAMD Molecular Dynamics
240X speedup
Computational biology

Slide 13: 15X with MATLAB CPU+GPU
Pseudo-spectral simulation of 2D isotropic turbulence
MATLAB: language of science

Slide 14: Spring 2007 Projects
Application | Description | Source | Kernel | % time
H.264 | SPEC ’06 version, change in guess vector | 34,… | … | …%
LBM | SPEC ’06 version, change to single precision and print fewer reports | 1,481 | 285 | >99%
RC5-72 | Distributed.net RC5-72 challenge client code | 1,979 | 218 | >99%
FEM | Finite element modeling, simulation of 3D graded materials | 1,… | … | …%
RPES | Rye Polynomial Equation Solver, quantum chem, 2-electron repulsion | 1,… | … | …%
PNS | Petri Net simulation of a distributed system | … | … | >99%
SAXPY | Single-precision implementation of saxpy, used in Linpack’s Gaussian elim. routine | 952 | 31 | >99%
TRACF | Two Point Angular Correlation Function | … | … | …%
FDTD | Finite-Difference Time Domain analysis of 2D electromagnetic wave propagation | 1,… | … | …%
MRI-Q | Computing a matrix Q, a scanner’s configuration in MRI reconstruction | 490 | 33 | >99%
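The SAXPY entry above is the simplest of these kernels. As a minimal illustration, here is a plain-C CPU sketch of the single-precision y = a·x + y computation that the CUDA kernel parallelizes (names and signature are ours, not from the course code; a CUDA version would assign one loop iteration to each thread):

```c
#include <stddef.h>

/* Single-precision a*x + y, the computation behind the SAXPY project.
   Each iteration is independent, so a GPU maps one thread per element. */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

Because every element is independent and each is touched exactly once, SAXPY has essentially no data reuse; as slide 23 notes, such kernels saturate global memory bandwidth rather than compute.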

Slide 15: Speedup of GPU-Accelerated Functions
GeForce 8800 GTX vs. 2.2 GHz Opteron
–…× speedup in a kernel is typical, as long as the kernel can occupy enough parallel threads
–25× to 400× speedup if the function’s data requirements and control flow suit the GPU and the application is optimized
–Keep in mind that the speedup also reflects how suitable the CPU is for executing the kernel

Slide 16: Student Parallel Programming Results
App. | Archit. bottleneck | Simult. T | Kernel X | App X
LBM | Shared memory capacity | 3,… | … | …
RC5-72 | Registers | 3,… | … | …
FEM | Global memory bandwidth | 4,… | … | …
RPES | Instruction issue rate | 4,… | … | …
PNS | Global memory capacity | 2,… | … | …
LINPACK | Global memory bandwidth, CPU-GPU data transfer | 12,… | … | …
TRACF | Shared memory capacity | 4,… | … | …
FDTD | Global memory bandwidth | 1,… | … | …
MRI-Q | Instruction issue rate | 8,… | … | …
[HKR HotChips-2007]

Slide 17: Magnetic Resonance Imaging
3D MRI image reconstruction from non-Cartesian scan data is very accurate, but compute-intensive
>200× speedup in MRI-Q
–CPU: Athlon with fast math library
MRI code runs efficiently on the GeForce 8800
–High floating-point operation throughput, including trigonometric functions
–Fast memory subsystems
  –Larger register file
  –Threads simultaneously load the same value from constant memory
  –Access coalescing to produce <1 memory access per thread, per loop iteration

Slide 18: MRI Optimization Space Search
[Figure: execution time vs. unroll factor; each curve corresponds to a tiling factor and a thread granularity. GPU optimal: 145 GFLOPS]

Slide 19: H.264 Video Encoding (from SPEC)
GPU kernel implements the sum-of-absolute-differences (SAD) computation
–Compute-intensive part of motion estimation
–Compares many pairs of small images to estimate how closely they match
–Accounts for 35% of execution time in an optimized CPU version
–GPU version limited by data movement to/from the GPU, not compute
Loop optimizations remove instruction overhead and redundant loads…
…and increase register pressure, reducing the number of threads that can run concurrently and exposing texture cache latency
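The SAD computation itself is small. A plain-C sketch of what gets evaluated for one candidate match position (a hedged illustration of the general technique, not the SPEC code; the names and the stride-based layout are ours):

```c
#include <stdlib.h>

/* Sum of absolute differences between a w x h reference block and a
   candidate block at some position in the search image. Each image row
   is `stride` bytes apart. Motion estimation evaluates this for many
   candidate positions and keeps the one with the smallest SAD. */
int sad(const unsigned char *ref, const unsigned char *cand,
        int w, int h, int ref_stride, int cand_stride) {
    int sum = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            sum += abs((int)ref[y * ref_stride + x] -
                       (int)cand[y * cand_stride + x]);
    return sum;
}
```

Each candidate position is independent of the others, which is what makes the search amenable to massive parallelism; the cost is that many small blocks must be streamed to the GPU, matching the data-movement bottleneck noted above.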

Slide 20: Large and Discontinuous Spaces (Sum of Absolute Differences)
Search spaces can be huge!
–Exhaustive search with smaller-than-usual data sets still takes significant time
–More complex than matrix multiplication, but still relatively small (hundreds of lines of code)
–Many performance discontinuities
To be presented at GPGPU ’07

Slide 21: Analytical Models Help Reduce Search Space (Sum of Absolute Differences)
By selecting only Pareto-optimal points, we pruned the search space by 98% and still found the optimal configuration
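The pruning idea can be sketched in a few lines of C (our own minimal version of the general Pareto-filtering technique, not the authors' analytical model; the two metrics here are hypothetical stand-ins for whatever the model predicts, e.g. cycles and memory traffic, with lower being better):

```c
/* A candidate configuration scored on two predicted metrics (lower is
   better for both). Only configurations not dominated by any other
   configuration need to be benchmarked on real hardware. */
typedef struct { double m1, m2; } Config;

/* a dominates b if it is no worse on both metrics and better on one */
static int dominates(Config a, Config b) {
    return a.m1 <= b.m1 && a.m2 <= b.m2 && (a.m1 < b.m1 || a.m2 < b.m2);
}

/* Writes indices of the Pareto-optimal configurations into keep[]
   and returns how many were kept. */
int pareto_filter(const Config *c, int n, int *keep) {
    int kept = 0;
    for (int i = 0; i < n; ++i) {
        int dominated = 0;
        for (int j = 0; j < n && !dominated; ++j)
            if (j != i && dominates(c[j], c[i]))
                dominated = 1;
        if (!dominated)
            keep[kept++] = i;
    }
    return kept;
}
```

Only the surviving configurations are then run on the GPU, which is how a 98% reduction of the search space is possible without losing the true optimum, provided the model's rankings are roughly faithful.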

Slide 22: LBM Fluid Simulation (from SPEC)
Simulation of fluid flow in a volume divided into a grid
–A stencil computation: a cell’s state at time t+1 is computed from the cell and its neighbors at time t
Synchronization is required after each timestep, achieved by running the kernel once per timestep
–Local memories on SMs are emptied after each kernel invocation
–The entire data set moves in and out of the SMs for every timestep
–High demand on bandwidth
Reduce bandwidth usage with software-managed caching
–Memory limits: 200 grid cells/threads per SM
–Not enough threads to completely cover global memory latency
[Figure: flow through a cell (dark blue) is updated based on its flow and the flow in 18 neighboring cells (light blue)]
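The per-timestep structure described above can be sketched in plain C. This is a hedged toy: a 1D three-point averaging stencil standing in for LBM's 3D 19-neighbor update, with double buffering playing the role of the global synchronization that one kernel launch per timestep provides on the GPU:

```c
/* One timestep: each interior cell's next state depends only on states
   from the previous timestep, so src is read-only and dst is write-only.
   On a GPU, launching one kernel per timestep gives the required global
   synchronization between steps. */
void stencil_step(const float *src, float *dst, int n) {
    for (int i = 1; i < n - 1; ++i)
        dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;
    dst[0] = src[0];          /* boundary cells held fixed */
    dst[n - 1] = src[n - 1];
}

/* Double-buffered time loop; after an even number of steps the newest
   state is in the caller's a, after an odd number it is in b. */
void simulate(float *a, float *b, int n, int steps) {
    for (int t = 0; t < steps; ++t) {
        stencil_step(a, b, n);
        float *tmp = a; a = b; b = tmp;   /* output becomes next input */
    }
}
```

Because every cell is read once and written once per step with no reuse across steps inside the chip, the whole grid streams through memory each timestep, which is exactly the bandwidth pressure the slide describes.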

Slide 23: Prevalent Performance Limits
Some microarchitectural limits appear repeatedly across the benchmark suite:
Global memory bandwidth saturation
–Tasks with intrinsically low data reuse, e.g. vector-scalar addition or vector dot product
–Computation with frequent global synchronization
  –Converted to short-lived kernels with low data reuse
  –Common in simulation programs
Thread-level optimization vs. latency tolerance
–Since hardware resources are divided among threads, low per-thread resource use is necessary to furnish enough simultaneously active threads to tolerate long-latency operations
–Making individual threads faster generally increases register and/or shared memory requirements
–Optimizations trade off single-thread speed for exposed latency
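The register trade-off in the last bullets is simple arithmetic. A hedged sketch with illustrative G80-like numbers (8,192 registers and a 768-thread cap per SM; exact limits vary by GPU and are assumptions here):

```c
/* How many threads can be simultaneously active on one SM, given the
   SM's register file size and each thread's register use. Spending more
   registers per thread (to make each thread faster) shrinks the pool of
   threads available to hide long-latency memory operations. */
int active_threads(int regs_per_sm, int max_threads_per_sm,
                   int regs_per_thread) {
    int by_regs = regs_per_sm / regs_per_thread;
    return by_regs < max_threads_per_sm ? by_regs : max_threads_per_sm;
}
```

With these illustrative limits, a 10-register thread leaves the SM capped at 768 threads, while a 32-register thread allows only 8192/32 = 256: a 3× drop in latency-hiding capacity in exchange for faster individual threads.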

Slide 24: What You Will Likely Need to Hit Hard
Parallelism extraction requires global understanding
–Most programmers only understand parts of an application
Algorithms need to be re-designed
–Programmers benefit from a clear view of the algorithmic effect on parallelism
Real but rare dependencies often need to be ignored
–Error-checking code, etc.; parallel code is often not equivalent to sequential code
Getting more than a small speedup over sequential code is very tricky
–~20 versions were typically tried for each application to move away from architecture bottlenecks

Slide 25: Call to Action
One-page project description
–Introduction: a one-paragraph description of the significance of the application
–Description: a one- to two-paragraph description of what the application really does
–Objective: a paragraph on what the mentor would like to accomplish with the student teams on the application
–Background: outline the technical skills (particular types of math, physics, or chemistry courses) that one needs to understand and work on the application
–Resources: a list of web and traditional resources that students can draw on for technical background, general information, and building blocks; give URL or FTP paths to any particular implementation and coding examples
–Contact information: name, e-mail, and perhaps phone number of the person who will be mentoring the teams working on the application
Two to three “workshop” lectures dedicated to presentation of project ideas to recruit teams/teammates (March 17-19)