
1 Programming Multicore Processors Aamir Shafi, High Performance Computing Lab, http://hpc.seecs.nust.edu.pk

2 Serial Computation Traditionally, software has been written for serial computation: it is run on a single computer having a single Central Processing Unit (CPU), and a problem is broken into a discrete series of instructions that execute one after another.

3 Parallel Computation Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. It is also known as High Performance Computing (HPC). The prime focus of HPC is performance: the ability to solve the biggest possible problems in the least possible time.

4 Traditional Usage of Parallel Computing: Scientific Computing Traditionally, parallel computing has been used to solve challenging scientific problems through simulation. For this reason, it is also called "Scientific Computing" or computational science.

5 Emergence of Multi-core Processors In the last decade, processor performance has no longer been improved by increasing clock speed: increasing clock speed directly increases power consumption, and that power is dissipated as heat, making it impractical to cool the processors. Intel canceled a project to produce a 4 GHz processor! This led to the emergence of multi-core processors: performance is increased by adding processing cores that run at a lower clock speed, which implies better power usage. A disruptive technology!

6 Moore's Law is Alive and Well

7 Power Wall

8 Why Multi-core Processors Consume Less Power Dynamic power is proportional to V²fC. Increasing frequency (f) also requires increasing the supply voltage (V), so the effect on power is more than linear. Adding cores increases capacitance (C), which has only a linear effect. An illustrative calculation follows below.
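The following sketch is not from the slides; it uses the simplified model P = C · V² · f with hypothetical numbers, and assumes voltage must scale roughly linearly with frequency, to compare raising the clock by 50% against adding a second core at the original clock.

    /* Illustrative sketch: dynamic power under the simplified model
     * P = C * V^2 * f. Values and scaling assumptions are hypothetical. */
    #include <stdio.h>

    static double dynamic_power(double c, double v, double f) {
        return c * v * v * f;   /* P = C * V^2 * f */
    }

    int main(void) {
        double c = 1.0, v = 1.0, f = 1.0;           /* baseline single core */
        double base = dynamic_power(c, v, f);

        /* Option 1: raise clock speed by 50%; voltage rises with it. */
        double faster = dynamic_power(c, 1.5 * v, 1.5 * f);

        /* Option 2: add a second core at the original clock speed,
         * roughly doubling capacitance but leaving V and f unchanged. */
        double dual = dynamic_power(2.0 * c, v, f);

        printf("baseline:   %.2f\n", base);                       /* 1.00  */
        printf("1.5x clock: %.2f (about 3.4x power)\n", faster);  /* 3.375 */
        printf("2 cores:    %.2f (2x power, ~2x throughput)\n", dual);
        return 0;
    }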

9 Software in the Multi-core Era The challenge has been passed to the software industry, and parallelism is perhaps the answer. See "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" (http://www.gotw.ca/publications/concurrency-ddj.htm). An excerpt: "The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency." This essentially means that every software programmer will be a parallel programmer, which is the main motivation behind this "Programming Multicore Processors" workshop.

10 About the "Programming Multicore Processors" Workshop

11 Instructors This workshop will be taught by: Akbar Mehdi (http://hpc.seecs.nust.edu.pk/~akbar/): Masters from Stanford University, USA; NVIDIA CUDA API, POSIX Threads, Operating Systems, Algorithms. Mohsan Jameel (http://hpc.seecs.nust.edu.pk/~mohsan/): Masters from KTH, Sweden; Scientific Computing, Parallel Computing Languages, OpenMP.

12 Course Contents … A little background on parallel computing approaches

13 Parallel Hardware Three main classifications: (1) Shared Memory Multi-processors, including Symmetric Multi-Processors (SMPs) and multi-core processors; (2) Distributed Memory Multi-processors, including Massively Parallel Processors (MPPs) and clusters (commodity and custom); (3) Hybrid Multi-processors, a mixture of shared and distributed memory technologies.

14 First Type: Shared Memory Multi-processors All processors have access to a shared memory: the notion of a "Global Address Space".

15 Symmetric Multi-Processors (SMP) An SMP is a parallel processing system with a shared-everything approach: the term signifies that each processor shares the main memory and possibly the cache. Typically an SMP has 2 to 256 processors. This design is also called Uniform Memory Access (UMA). Examples include the AMD Athlon, the AMD Opteron 200 and 2000 series, and the Intel Xeon.

16 Multi-core Processors

17 Second Type: Distributed Memory Each processor has its own local memory. Processors communicate with each other by message passing over an interconnect.

18 Cluster Computers A cluster is a group of PCs, workstations, or Macs (called nodes) connected to each other via a fast (and private) interconnect. Each node is an independent computer. Each cluster has one head node and multiple compute nodes: users log on to the head node and start parallel jobs on the compute nodes. Two popular cluster classifications are Beowulf Clusters (http://www.beowulf.org) and Rocks Clusters (http://www.rocksclusters.org).

19 Cluster Computer (diagram: eight processors, Proc 0 through Proc 7, connected as a cluster)

20 Third Type: Hybrid Modern clusters have a hybrid architecture: distributed memory for inter-node (between nodes) communication and shared memory for intra-node (within a node) communication.

21 SMP and Multi-core Clusters Most modern commodity clusters have SMP and/or multi-core nodes: processors not only communicate via the interconnect, but shared memory programming is also required within each node. This trend is likely to continue; a new name, "constellations", has even been proposed for such systems.

22 Classification of Parallel Computers Parallel hardware is divided into shared memory hardware (SMPs and multicore processors) and distributed memory hardware (clusters and MPPs). In this workshop, we will learn how to program shared memory parallel hardware …

23 Writing Parallel Software There are mainly two approaches to writing parallel software. The first is to use libraries (packages) written for already existing languages, which is economical. The second, more radical, approach is to provide new languages: parallel computing has a history of novel parallel languages that provide high-level parallelism constructs.

24 Shared Memory Languages and Libraries These are designed to support parallel programming on shared memory platforms. OpenMP consists of a set of compiler directives, library routines, and environment variables; its runtime uses the fork-join model of parallel execution (a minimal sketch follows below). Cilk++ was designed to support asynchronous parallelism through a small set of keywords: cilk_for, cilk_spawn, cilk_sync … Other options include POSIX Threads (PThreads) and Threading Building Blocks (TBB).
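As a taste of the fork-join model mentioned above, here is a minimal OpenMP hello-world sketch in C. It is illustrative only and not taken from the workshop material; the file name and compiler flags are assumptions.

    /* Minimal OpenMP sketch. Compile with, e.g.:
     *   gcc -fopenmp hello_omp.c -o hello_omp */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* The parallel directive forks a team of threads; they join
         * again at the end of the block (fork-join model). */
        #pragma omp parallel
        {
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }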

25 Distributed Memory Languages and Libraries Libraries include the Message Passing Interface (MPI), the de facto standard, and PVM. Languages include High Performance Fortran (HPF), Fortran M, and HPJava. A minimal MPI sketch is shown below.
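For comparison with the shared memory examples, here is a minimal MPI hello-world sketch in C. It is illustrative only; it assumes an MPI implementation (such as MPICH or Open MPI) is installed, and the file name and launch command are assumptions.

    /* Minimal MPI sketch. Compile and run with, e.g.:
     *   mpicc hello_mpi.c -o hello_mpi && mpirun -np 4 ./hello_mpi */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut the runtime down  */
        return 0;
    }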

26 Our Focus Shared memory and multi-core processor machines: using POSIX Threads, using OpenMP, and using Cilk++ (covered briefly). Disruptive technology: using Graphics Processing Units (GPUs) by NVIDIA for general-purpose computing. A POSIX Threads hello-world sketch follows below.
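Since the practical sessions start with a hello-world PThreads program, here is one possible minimal sketch in C. It is illustrative only; the workshop's actual assignment program may differ, and the file name and thread count are assumptions.

    /* Minimal POSIX Threads sketch. Compile with, e.g.:
     *   gcc hello_pthreads.c -o hello_pthreads -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    static void *hello(void *arg) {
        long id = (long)arg;                   /* thread index passed by main */
        printf("Hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];

        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, hello, (void *)i);

        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);    /* wait for all threads */

        return 0;
    }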

27 Day One
Timings | Topic | Presenter
10:00 to 10:30 | Introduction to multicore computing | Aamir Shafi
10:30 to 11:30 | Background discussion: review of processes, threads, and architecture; speedup analysis | Akbar Mehdi
11:30 to 11:45 | Break |
11:45 to 12:55 PM | Introduction to POSIX Threads | Akbar Mehdi
12:55 PM to 1:25 PM | Prayer break |
1:25 PM to 2:30 PM | Practical session: run a hello world PThreads program, introduce Linux, top, and Solaris; also introduce the first coding assignment | Akbar Mehdi

28 Day Two
Timings | Topic | Presenter
10:00 to 11:00 | POSIX Threads continued | Akbar Mehdi
11:00 to 12:55 PM | Introduction to OpenMP | Mohsan Jameel
12:55 PM to 1:25 PM | Prayer break |
1:25 PM to 2:30 PM | OpenMP continued + lab session | Mohsan Jameel

29 Day Three
Timings | Topic | Presenter
10:00 to 12:00 | Parallelizing the image processing application using PThreads and OpenMP (practical session) | Akbar Mehdi and Mohsan Jameel
12:00 to 12:55 PM | Introduction to Intel Cilk++ | Aamir Shafi
12:55 PM to 1:25 PM | Prayer break |
1:25 PM to 2:30 PM | Introduction to NVIDIA CUDA | Akbar Mehdi
2:30 PM to 2:35 PM | Concluding remarks | Aamir Shafi

30 Learning Objectives To become aware of the multicore revolution and its impact on the computer software industry; to program multicore processors using POSIX Threads; to program multicore processors using OpenMP and Cilk++; to program Graphics Processing Units (GPUs) for general-purpose computation using the NVIDIA CUDA API. You may download the tentative agenda from http://hpc.seecs.nust.edu.pk/~aamir/res/mc_agenda.pdf

31 Next Session A review of important and relevant Operating Systems and Computer Architecture concepts by Akbar Mehdi …

