Constructing a system with multiple computers or processors


Constructing a system with multiple computers or processors ITCS 4/5145 Parallel Programming, UNC-Charlotte, B. Wilkinson, 2013. slides1-b.ppt July 10, 2013

Conventional Computer Consists of a processor executing a program stored in a (main) memory. Each main memory location is identified by its address; addresses start at 0 and extend to 2^b − 1 when there are b bits (binary digits) in the address. (Diagram: the processor connected to main memory, with instructions flowing to the processor and data flowing to or from the processor.)
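A quick worked example of the address range above (assuming one byte per location, which the slide does not state): with b = 32 address bits, addresses run from 0 to 2^32 − 1 = 4,294,967,295, i.e. 4 GiB of addressable memory; with b = 64 the range extends to 2^64 − 1.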

Types of Parallel Computers Two principal approaches: the shared memory multiprocessor and the distributed memory multicomputer.

1. Shared Memory Multiprocessor System A natural way to extend the single-processor model: have multiple processors connected to multiple memory modules, such that each processor can access any memory module. (Diagram: processors connected through processor-memory interconnections to memory modules, presenting one address space.)

Using a processor-memory bus as the interconnection network Example: dual and quad processor shared memory multiprocessors. (Diagram: each processor has its own L1 and L2 caches and a bus interface onto a shared processor/memory bus, a set of 100+ lines, which connects through a memory controller to the shared memory.) coit-grid01 – coit-grid04 are of this form (dual-processor servers with 8 GB shared memory).

“Recent” innovation (since 2005): dual-core and multi-core processors, i.e. two or more independent processors in one package. Actually an old idea, but not put into wide practice until recently, because of the limits on making single processors faster, principally caused by: power dissipation (the power wall) and clock frequency limitations; limits on parallelism within a single instruction stream; and memory speed limitations (the memory wall).

Single quad-core shared memory multiprocessor (Diagram: one chip containing four processor “cores”, each with its own L1 cache, together with an L2 cache and a memory controller connecting to the shared main memory.)

Multiple quad-core multiprocessors (Diagram: several quad-core processors, each core with its own L1 cache, L2 caches, a possible L3 cache, and a memory controller connecting to the shared memory.) Examples: coit-grid05.uncc.edu - four processors, each quad core; all 16 cores have access to 64 GB shared main memory (through multilevel caches). coit-grid09.uncc.edu - two processors, each 16-core, with 3 levels of caches.

Programming Shared Memory Multiprocessors Several possible ways; the usual approach is to use threads. Threads are individual parallel sequences of execution, each thread having its own local variables but being able to access shared variables declared outside the threads. 1. Low-level thread libraries - the programmer calls thread routines to create and control the threads. Examples: Pthreads, Java threads. 2. Higher-level library functions and preprocessor compiler directives. Example: OpenMP, an industry standard consisting of library functions, compiler directives, and environment variables.
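To make the OpenMP approach concrete, here is a minimal sketch (not taken from the course materials) of a loop parallelized with a compiler directive plus one library call; the vector-addition example and all variable names are my own illustrative choices:

    // Minimal OpenMP sketch: parallel vector addition.
    // Compile with, e.g., g++ -fopenmp openmp_add.cpp
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n);

        // Compiler directive: loop iterations are divided among the threads.
        // a, b, c are shared; the loop index i is private to each thread.
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];

        // Library function: how many threads OpenMP may use.
        std::printf("c[0] = %.1f, max threads = %d\n", c[0], omp_get_max_threads());
        return 0;
    }

The same loop written with a low-level thread library such as Pthreads would require explicitly creating the threads, partitioning the iterations, and joining the threads by hand; the directive asks the compiler to generate that for us.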

Other programming alternatives Tasks - rather than program with threads, which are closely linked to the physical hardware, one can program with parallel “tasks”; promoted by Intel with their TBB (Threading Building Blocks) tools. Further alternatives: parallelizing compilers, which compile regular sequential programs into parallel programs, and special parallel languages (neither now common).
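For comparison with the thread examples, a minimal sketch of the task style using Intel TBB (my own illustration, not from the slides; assumes TBB is installed and the program is linked with -ltbb):

    // Minimal TBB sketch: the same vector addition expressed as a parallel range.
    // The library decides how to split the range into tasks and map them onto worker threads.
    #include <tbb/parallel_for.h>
    #include <vector>

    int main() {
        const std::size_t n = 1000;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n);

        tbb::parallel_for(std::size_t(0), n, [&](std::size_t i) {
            c[i] = a[i] + b[i];
        });
        return 0;
    }

The programmer expresses what may run in parallel; the runtime, not the programmer, decides how many threads to use and how the work is scheduled onto them.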

2. Distributed Memory Multicomputer Complete computers connected through an interconnection network. Many interconnection networks were explored in the 1970s and 1980s, including 2- and 3-dimensional meshes, hypercubes, and multistage interconnection networks. (Diagram: computers, each with a processor and its own local memory, exchanging messages over the interconnection network.)

Networked Computers as a Computing Platform A network of computers became a very attractive alternative to expensive supercomputers and parallel computer systems for high-performance computing in the early 1990s. Several early projects; notable ones include the Berkeley NOW (network of workstations) project and the NASA Beowulf project.

Key advantages of using commodity networked computers: Very high performance workstations and PCs readily available at low cost. Latest processors can easily be incorporated into the system as they become available. Existing software can be used or modified.

Beowulf Clusters* A group of interconnected “commodity” computers achieving high performance at low cost, typically using commodity interconnects (high-speed Ethernet) and the Linux OS. * Beowulf comes from the name given to the NASA Goddard Space Flight Center cluster project.

Cluster Interconnects Originally Fast Ethernet on low-cost clusters; Gigabit Ethernet offered an easy upgrade path. More specialized, higher-performance interconnects are available, including Myrinet and InfiniBand.

Dedicated cluster with a master node and compute nodes (Diagram: the user reaches the master node over an external network; the master node connects through an Ethernet interface and a switch to the compute nodes on the cluster’s own local network.)

Software Tools for Clusters Based upon the message-passing programming model. User-level libraries are provided for explicitly specifying the messages to be sent between executing processes on each computer. Used with regular programming languages (C, C++, ...). Can be quite difficult to program correctly, as we shall see.
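The slide does not name a particular library; one widely used message-passing library is MPI. A minimal sketch of the model (my own illustration, assuming the program is launched with at least two processes, e.g. mpirun -np 2 ./a.out):

    // Minimal MPI sketch: process 0 sends one integer to process 1.
    // Compile with an MPI wrapper compiler, e.g. mpic++ mpi_send.cpp
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id within the job

        int value = 0;
        if (rank == 0) {
            value = 42;
            // Explicitly specified message: buffer, count, type, destination, tag, communicator.
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Every data exchange must be written out explicitly like this, which is one reason message-passing programs can be difficult to get right.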

Using GPUs for high performance computing GPUs (graphics processing units) were originally designed to speed up and support graphics operations; they are now also used for high performance computing. GPUs have hundreds of processing cores and can provide orders-of-magnitude increases in execution speed. We will look at GPU devices and how to program them in the last few weeks of the course.

GPU clusters A recent trend for clusters is incorporating GPUs for high performance; many of the fastest computers in the world are GPU clusters, built with GPUs such as the C2050 and K20x (see http://www.top500.org/). The UNC-C cluster used in the course has three GPU servers: coit-grid06.uncc.edu (C2050 GPU), coit-grid07.uncc.edu (C2050 GPU), and coit-grid08.uncc.edu (K20 GPU).

Next step Learn how to program multiprocessor systems. We will start with a new pattern programming approach and later consider lower-level tools.