EECC722 - Shaaban, Lecture #3, Fall 2000 (9-18-2000): Parallel Computer Architecture

EECC722 - Shaaban #1 lec # 3 Fall Parallel Computer Architecture A parallel computer is a collection of processing elements that cooperate to solve large problems fast. Broad issues involved: –Resource Allocation: Number of processing elements (PEs). Computing power of each element. Amount of physical memory used. –Data Access, Communication and Synchronization: How the elements cooperate and communicate. How data is transmitted between processors. Abstractions and primitives for cooperation. –Performance and Scalability: Performance enhancement from parallelism: Speedup. Scalability of performance to larger systems/problems.

EECC722 - Shaaban #2 lec # 3 Fall Exploiting Program Parallelism [Figure: levels of parallelism (instruction, loop, thread, process) plotted against grain size in instructions, from a few instructions up to roughly 1K, 10K, 100K, and 1M.]

EECC722 - Shaaban #3 lec # 3 Fall The Need And Feasibility of Parallel Computing Application demands: More computing cycles: –Scientific computing: CFD, Biology, Chemistry, Physics,... –General-purpose computing: Video, Graphics, CAD, Databases, Transaction Processing, Gaming… –Mainstream multithreaded programs are similar to parallel programs Technology Trends –Number of transistors on chip growing rapidly –Clock rates expected to go up, but only slowly Architecture Trends –Instruction-level parallelism is valuable but limited –Coarser-level parallelism, as in MPs, is the most viable approach Economics: –Today’s microprocessors have multiprocessor support, eliminating the need to design expensive custom PEs –Lower parallel system cost. –Multiprocessor systems offer a cost-effective replacement for uniprocessor systems in mainstream computing.

EECC722 - Shaaban #4 lec # 3 Fall Scientific Computing Demand

EECC722 - Shaaban #5 lec # 3 Fall Scientific Supercomputing Trends Proving ground and driver for innovative architecture and advanced techniques: –Market is much smaller relative to commercial segment –Dominated by vector machines starting in 70s –Meanwhile, microprocessors have made huge gains in floating-point performance High clock rates. Pipelined floating point units. Instruction-level parallelism. Effective use of caches. Large-scale multiprocessors replace vector supercomputers –Well under way already

EECC722 - Shaaban #6 lec # 3 Fall Raw Uniprocessor Performance: LINPACK

EECC722 - Shaaban #7 lec # 3 Fall Raw Parallel Performance: LINPACK

EECC722 - Shaaban #8 lec # 3 Fall Parallelism in Microprocessor VLSI Generations

EECC722 - Shaaban #9 lec # 3 Fall The Goal of Parallel Computing Goal of applications in using parallel machines: Speedup. Speedup (p processors) = Performance (p processors) / Performance (1 processor). For a fixed problem size (input data set), performance = 1/time, so: Speedup fixed-problem (p processors) = Time (1 processor) / Time (p processors).

EECC722 - Shaaban #10 lec # 3 Fall Elements of Modern Computers [Diagram: computing problems, algorithms and data structures, high-level languages, applications software, operating system, and hardware architecture, linked through mapping, programming, binding (compile, load), and performance evaluation.]

EECC722 - Shaaban #11 lec # 3 Fall Elements of Modern Computers 1 Computing Problems: –Numerical Computing: Science and technology numerical problems demand intensive integer and floating point computations. –Logical Reasoning: Artificial intelligence (AI) demands logic inferences, symbolic manipulations and large space searches. 2 Algorithms and Data Structures –Special algorithms and data structures are needed to specify the computations and communication present in computing problems. –Most numerical algorithms are deterministic, using regular data structures. –Symbolic processing may use heuristics or non-deterministic searches. –Parallel algorithm development requires interdisciplinary interaction.

EECC722 - Shaaban #12 lec # 3 Fall Elements of Modern Computers 3 Hardware Resources –Processors, memory, and peripheral devices form the hardware core of a computer system. –Processor instruction set, processor connectivity, and memory organization influence the system architecture. 4 Operating Systems –Manages the allocation of resources to running processes. –Mapping to match algorithmic structures with hardware architecture and vice versa: processor scheduling, memory mapping, interprocessor communication. –Parallelism exploitation at: algorithm design, program writing, compilation, and run time.

EECC722 - Shaaban #13 lec # 3 Fall Elements of Modern Computers 5 System Software Support –Needed for the development of efficient programs in high- level languages (HLLs.) –Assemblers, loaders. –Portable parallel programming languages –User interfaces and tools. 6 Compiler Support –Preprocessor compiler: Sequential compiler and low-level library of the target parallel computer. –Precompiler: Some program flow analysis, dependence checking, limited optimizations for parallelism detection. –Parallelizing compiler: Can automatically detect parallelism in source code and transform sequential code into parallel constructs.

EECC722 - Shaaban #14 lec # 3 Fall Approaches to Parallel Programming (a) Implicit Parallelism: the programmer writes source code in sequential languages (C, C++, Fortran, Lisp); a parallelizing compiler produces parallel object code, which is executed by the runtime system. (b) Explicit Parallelism: the programmer writes source code in concurrent dialects of C, C++, Fortran, Lisp; a concurrency-preserving compiler produces concurrent object code, which is executed by the runtime system.

EECC722 - Shaaban Evolution of Computer Architecture [Diagram: scalar sequential, then lookahead and I/E overlap, then functional parallelism via multiple functional units and pipelining, then implicit and explicit vector processing (memory-to-memory, register-to-register), branching into SIMD (processor array, associative processor) and MIMD (multiprocessor, multicomputer), leading to massively parallel processors (MPPs).] I/E: Instruction Fetch and Execute. SIMD: Single Instruction stream over Multiple Data streams. MIMD: Multiple Instruction streams over Multiple Data streams.

EECC722 - Shaaban #16 lec # 3 Fall Parallel Architectures History Application Software System Software SIMD Message Passing Shared Memory Dataflow Systolic Arrays Architecture Historically, parallel architectures tied to programming models Divergent architectures, with no predictable pattern of growth.

EECC722 - Shaaban #17 lec # 3 Fall Programming Models Programming methodology used in coding applications Specifies communication and synchronization Examples: –Multiprogramming: No communication or synchronization at program level –Shared memory address space: Communication occurs implicitly through loads and stores to shared locations –Message passing: Explicit point to point communication –Data parallel: More regimented, global actions on data Implemented with shared address space or message passing

EECC722 - Shaaban #18 lec # 3 Fall Flynn’s 1972 Classification of Computer Architecture Single Instruction stream over a Single Data stream (SISD): Conventional sequential machines. Single Instruction stream over Multiple Data streams (SIMD): Vector computers, array of synchronized processing elements. Multiple Instruction streams and a Single Data stream (MISD): Systolic arrays for pipelined execution. Multiple Instruction streams over Multiple Data streams (MIMD): Parallel computers: Shared memory multiprocessors. Multicomputers: Unshared distributed memory, message-passing used instead.

EECC722 - Shaaban #19 lec # 3 Fall Flynn’s Classification of Computer Architecture Fig. 1.3 page 12 in Advanced Computer Architecture: Parallelism, Scalability, Programmability, Hwang, 1993.

EECC722 - Shaaban #20 lec # 3 Fall Current Trends In Parallel Architectures The extension of “computer architecture” to support communication and cooperation: –OLD: Instruction Set Architecture –NEW: Communication Architecture Defines: –Critical abstractions, boundaries, and primitives (interfaces) –Organizational structures that implement interfaces (hardware or software) Compilers, libraries and OS are important bridges today

EECC722 - Shaaban #21 lec # 3 Fall Modern Parallel Architecture Layered Framework [Diagram, top to bottom: parallel applications (CAD, database, scientific modeling); programming models (multiprogramming, shared address, message passing, data parallel); communication abstraction (user/system boundary), realized by compilation or library and operating systems support; communication hardware (hardware/software boundary); physical communication medium.]

EECC722 - Shaaban #22 lec # 3 Fall Shared Address Space Parallel Architectures Any processor can directly reference any memory location –Communication occurs implicitly as a result of loads and stores Convenient: –Location transparency –Similar programming model to time-sharing in uniprocessors Except processes run on different processors Good throughput on multiprogrammed workloads Naturally provided on a wide range of platforms –Wide range of scale: few to hundreds of processors Popularly known as shared memory machines or model –Ambiguous: Memory may be physically distributed among processors

EECC722 - Shaaban #23 lec # 3 Fall Shared Address Space (SAS) Model Process: virtual address space plus one or more threads of control Portions of address spaces of processes are shared Writes to shared address visible to other threads (in other processes too) Natural extension of the uniprocessor model: Conventional memory operations used for communication Special atomic operations needed for synchronization OS uses shared memory to coordinate processes
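As a minimal sketch of this model (a hypothetical POSIX-threads example, not from the lecture), the program below shares an array through ordinary loads and stores and uses a mutex only for the synchronization step, mirroring the distinction between conventional memory operations and special atomic/synchronization operations:

```c
/* Shared-address-space sketch: threads communicate through ordinary
 * loads/stores to shared data; a mutex provides the synchronization. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double a[N];              /* shared data: visible to all threads */
static double global_sum = 0.0;  /* shared result                       */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    long chunk = N / NTHREADS;
    double local = 0.0;
    /* Ordinary memory operations: read a slice of the shared array. */
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        local += a[i];
    /* Special synchronization operation: protected update of the shared sum. */
    pthread_mutex_lock(&lock);
    global_sum += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (long i = 0; i < N; i++) a[i] = 1.0;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("sum = %f\n", global_sum);
    return 0;
}
```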

EECC722 - Shaaban #24 lec # 3 Fall Models of Shared-Memory Multiprocessors The Uniform Memory Access (UMA) Model: –The physical memory is shared by all processors. –All processors have equal access to all memory addresses. Distributed memory or Nonuniform Memory Access (NUMA) Model: –Shared memory is physically distributed locally among processors. The Cache-Only Memory Architecture (COMA) Model: –A special case of a NUMA machine where all distributed main memory is converted to caches. –No memory hierarchy at each processor.

EECC722 - Shaaban #25 lec # 3 Fall Models of Shared-Memory Multiprocessors [Diagram: Uniform Memory Access (UMA) model, with processors and caches connected through an interconnect (bus, crossbar, or multistage network) to shared memory modules; Distributed memory or Nonuniform Memory Access (NUMA) model, with memory modules distributed among the processors across the network; Cache-Only Memory Architecture (COMA), with processors, caches and cache directories only. P: Processor, M: Memory, C: Cache, D: Cache directory.]

EECC722 - Shaaban #26 lec # 3 Fall Uniform Memory Access Example: Intel Pentium Pro Quad All coherence and multiprocessing glue in processor module Highly integrated, targeted at high volume Low latency and bandwidth

EECC722 - Shaaban #27 lec # 3 Fall Uniform Memory Access Example: SUN Enterprise –16 cards of either type: processors + memory, or I/O –All memory accessed over bus, so symmetric –Higher bandwidth, higher latency bus

EECC722 - Shaaban #28 lec # 3 Fall Distributed Shared-Memory Multiprocessor System Example: Cray T3E Scale up to 1024 processors, 480MB/s links Memory controller generates communication requests for nonlocal references No hardware mechanism for coherence (SGI Origin etc. provide this)

EECC722 - Shaaban #29 lec # 3 Fall Message-Passing Multicomputers Comprised of multiple autonomous computers (nodes). Each node consists of a processor, local memory, attached storage and I/O peripherals. Programming model more removed from basic hardware operations Local memory is only accessible by local processors. A message passing network provides point-to-point static connections among the nodes. Inter-node communication is carried out by message passing through the static connection network Process communication achieved using a message-passing programming environment.

EECC722 - Shaaban #30 lec # 3 Fall Message-Passing Abstraction Send specifies the buffer to be transmitted and the receiving process Recv specifies the sending process and the application storage to receive into Memory-to-memory copy, but need to name processes Optional tag on send and matching rule on receive User process names local data and entities in process/tag space too In simplest form, the send/recv match achieves a pairwise synchronization event Many overheads: copying, buffer management, protection [Diagram: process P executes Send(X, Q, t) on local address X while process Q executes Receive(Y, P, t) into local address Y; the two are matched on process and tag, copying data between the two local process address spaces.]
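A minimal MPI sketch of this abstraction (illustrative only; the ranks, tag value, and buffer names are assumptions): one process sends its local buffer X to another with tag t, and the receiver stores it in its own local buffer Y, with the send/recv pair matched on source/destination and tag:

```c
/* Message-passing abstraction: Send(X, Q, t) on one process matched by
 * Receive(Y, P, t) on another; data is copied between local address spaces. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, tag = 7;
    double X = 3.14, Y = 0.0;   /* each process has only local storage */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {            /* process P: names the receiver and the tag */
        MPI_Send(&X, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {     /* process Q: names the sender and the tag */
        MPI_Recv(&Y, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f\n", Y);  /* pairwise synchronization done */
    }
    MPI_Finalize();
    return 0;
}
```

Run with at least two MPI processes; the blocking send/recv pair is also the synchronization event the slide describes.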

EECC722 - Shaaban #31 lec # 3 Fall Message-Passing Example: IBM SP-2 Made out of essentially complete RS6000 workstations Network interface integrated in I/O bus (bandwidth limited by I/O bus)

EECC722 - Shaaban #32 lec # 3 Fall Message-Passing Example: Intel Paragon

EECC722 - Shaaban #33 lec # 3 Fall Message-Passing Programming Tools Message-passing programming libraries include: –Message Passing Interface (MPI): Provides a standard for writing concurrent message-passing programs. MPI implementations include parallel libraries used by existing programming languages. –Parallel Virtual Machine (PVM): Enables a collection of heterogeneous computers to be used as a coherent and flexible concurrent computational resource. PVM support software executes on each machine in a user-configurable pool, and provides a computational environment for concurrent applications. User programs written, for example, in C, Fortran or Java are provided access to PVM through calls to PVM library routines.

EECC722 - Shaaban #34 lec # 3 Fall Data Parallel Systems SIMD in Flynn taxonomy Programming model –Operations performed in parallel on each element of data structure –Logically single thread of control, performs sequential or parallel steps –Conceptually, a processor is associated with each data element Architectural model –Array of many simple, cheap processors each with little memory Processors don’t sequence through instructions –Attached to a control processor that issues instructions –Specialized and general communication, cheap global synchronization Some recent machines: –Thinking Machines CM-1, CM-2 (and CM-5) –Maspar MP-1 and MP-2,

EECC722 - Shaaban #35 lec # 3 Fall Dataflow Architectures Represent computation as a graph of essential dependences –Logical processor at each node, activated by availability of operands –Message (tokens) carrying tag of next instruction sent to next processor –Tag compared with others in matching store; match fires execution

EECC722 - Shaaban #36 lec # 3 Fall Systolic Architectures Replace single processor with an array of regular processing elements Orchestrate data flow for high throughput with less memory access Different from pipelining –Nonlinear array structure, multidirection data flow, each PE may have (small) local instruction and data memory Different from SIMD: each PE may do something different Initial motivation: VLSI enables inexpensive special-purpose chips Represent algorithms directly by chips connected in regular pattern

EECC722 - Shaaban #37 lec # 3 Fall Parallel Programs Conditions of Parallelism: –Data Dependence –Control Dependence –Resource Dependence –Bernstein’s Conditions Asymptotic Notations for Algorithm Analysis Parallel Random-Access Machine (PRAM) –Example: sum algorithm on P processor PRAM Network Model of Message-Passing Multicomputers –Example: Asynchronous Matrix Vector Product on a Ring Levels of Parallelism in Program Execution Hardware Vs. Software Parallelism Parallel Task Grain Size Example Motivating Problems With High Levels of Concurrency Limited Concurrency: Amdahl’s Law Parallel Performance Metrics: Degree of Parallelism (DOP) Concurrency Profile Steps in Creating a Parallel Program: –Decomposition, Assignment, Orchestration, Mapping –Program Partitioning Example –Static Multiprocessor Scheduling Example

EECC722 - Shaaban #38 lec # 3 Fall Conditions of Parallelism: Data Dependence 1 True Data or Flow Dependence: A statement S2 is data dependent on statement S1 if an execution path exists from S1 to S2 and if at least one output variable of S1 feeds in as an input operand used by S2, denoted by S1 → S2. 2 Antidependence: Statement S2 is antidependent on S1 if S2 follows S1 in program order and if the output of S2 overlaps the input of S1, denoted by a crossed arrow from S1 to S2. 3 Output dependence: Two statements are output dependent if they produce the same output variable, denoted by a circled arrow from S1 to S2.
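For concreteness, here is a small hypothetical straight-line fragment (not from the lecture) exhibiting each kind of dependence:

```c
/* Dependence examples on scalar variables (illustrative fragment). */
void deps(void) {
    int a, b, c;
    a = 10;        /* S1 */
    b = a + 1;     /* S2: flow (true) dependence on S1 -- S2 reads a written by S1 */
    a = 20;        /* S3: antidependence on S2 -- S3 overwrites a that S2 read;
                      also output dependence on S1 -- both write a */
    c = a * b;     /* S4: flow dependences on S3 (a) and S2 (b) */
    (void)c;
}
```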

EECC722 - Shaaban #39 lec # 3 Fall Conditions of Parallelism: Data Dependence 4 I/O dependence: Read and write are I/O statements. I/O dependence occurs not because the same variable is involved but because the same file is referenced by both I/O statements. 5 Unknown dependence: The subscript of a variable is itself subscripted (indirect addressing). The subscript does not contain the loop index. A variable appears more than once with subscripts having different coefficients of the loop variable. The subscript is nonlinear in the loop index variable.

EECC722 - Shaaban #40 lec # 3 Fall Data and I/O Dependence: Examples Example A: S1: Load R1, A; S2: Add R2, R1; S3: Move R1, R3; S4: Store B, R1. [Dependence graph over S1, S2, S3, S4 showing the data dependences among these statements.] Example B: S1: Read (4), A(I) /Read array A from tape unit 4/; S2: Rewind (4) /Rewind tape unit 4/; S3: Write (4), B(I) /Write array B into tape unit 4/; S4: Rewind (4) /Rewind tape unit 4/. [Graph: S1 and S3 linked by an I/O dependence.] The I/O dependence between S1 and S3 is caused by the read and write statements accessing the same file (tape unit 4).

EECC722 - Shaaban #41 lec # 3 Fall Conditions of Parallelism Control Dependence: –Order of execution cannot be determined before runtime due to conditional statements. Resource Dependence: –Concerned with conflicts in using shared resources, including functional units (integer, floating point) and memory areas, among parallel tasks. Bernstein’s Conditions: Two processes P1, P2 with input sets I1, I2 and output sets O1, O2 can execute in parallel (denoted by P1 || P2) if: I1 ∩ O2 = ∅, I2 ∩ O1 = ∅, and O1 ∩ O2 = ∅.

EECC722 - Shaaban #42 lec # 3 Fall Bernstein’s Conditions: An Example For the following instructions P1, P2, P3, P4, P5, assuming instructions are in program order, each instruction requires one step to execute, and two adders are available: P1: C = D × E; P2: M = G + C; P3: A = B + C; P4: C = L + M; P5: F = G ÷ E. Using Bernstein’s conditions after checking statement pairs: P1 || P5, P2 || P3, P2 || P5, P5 || P3, P4 || P5. [Figures: dependence graph with data dependences (solid lines) and resource dependences (dashed lines); sequential execution timeline versus parallel execution in three steps, assuming two adders are available per step.]
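A small sketch (hypothetical helper, not part of the lecture) that mechanically checks Bernstein's conditions for a pair of statements by intersecting their input and output sets, using single-letter variable names as in the example above:

```c
/* Bernstein's conditions check: Pi || Pj iff
 * Ii ∩ Oj = ∅, Ij ∩ Oi = ∅, and Oi ∩ Oj = ∅. */
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

static bool disjoint(const char *s1, const char *s2) {
    for (const char *p = s1; *p; p++)
        if (strchr(s2, *p)) return false;   /* shared variable found */
    return true;
}

static bool bernstein_parallel(const char *I1, const char *O1,
                               const char *I2, const char *O2) {
    return disjoint(I1, O2) && disjoint(I2, O1) && disjoint(O1, O2);
}

int main(void) {
    /* P1: C = D x E  ->  I1 = {D,E}, O1 = {C}
       P5: F = G / E  ->  I5 = {G,E}, O5 = {F}  */
    printf("P1 || P5 : %s\n",
           bernstein_parallel("DE", "C", "GE", "F") ? "yes" : "no");
    /* P1 and P4 both write C, so O1 ∩ O4 is not empty: not parallelizable. */
    printf("P1 || P4 : %s\n",
           bernstein_parallel("DE", "C", "LM", "C") ? "yes" : "no");
    return 0;
}
```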

EECC722 - Shaaban #43 lec # 3 Fall Asymptotic Notations for Algorithm Analysis Asymptotic analysis of the computing time of an algorithm f(n) ignores constant execution factors and concentrates on determining the order of magnitude of algorithm performance. Upper bound: Used in worst-case analysis of algorithm performance. f(n) = O(g(n)) iff there exist two positive constants c and n0 such that |f(n)| ≤ c |g(n)| for all n > n0, i.e. g(n) is an upper bound on f(n). O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ)

EECC722 - Shaaban #44 lec # 3 Fall Asymptotic Notations for Algorithm Analysis Lower bound: Used in the analysis of the lower limit of algorithm performance. f(n) = Ω(g(n)) if there exist positive constants c, n0 such that |f(n)| ≥ c |g(n)| for all n > n0, i.e. g(n) is a lower bound on f(n). Tight bound: Used in finding a tight limit on algorithm performance. f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that c1 |g(n)| ≤ |f(n)| ≤ c2 |g(n)| for all n > n0, i.e. g(n) is both an upper and lower bound on f(n).
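As a quick worked instance of the tight bound (the function and constants are chosen only for illustration):

```latex
% Worked example: f(n) = 3n^2 + 5n is \Theta(n^2).
% Choose c_1 = 3, c_2 = 4, n_0 = 5. For all n > n_0:
%   3n^2 \le 3n^2 + 5n                (lower bound, since 5n \ge 0)
%   3n^2 + 5n \le 3n^2 + n^2 = 4n^2   (upper bound, since 5n \le n^2 for n \ge 5)
\[
  3n^{2} \;\le\; 3n^{2} + 5n \;\le\; 4n^{2}
  \quad\text{for all } n \ge 5
  \;\;\Longrightarrow\;\;
  3n^{2} + 5n = \Theta(n^{2}).
\]
```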

EECC722 - Shaaban #45 lec # 3 Fall The Growth Rate of Common Computing Functions [Table comparing, for increasing n, the values of log n, n, n log n, n², n³, and 2ⁿ.]

EECC722 - Shaaban #46 lec # 3 Fall Theoretical Models of Parallel Computers Parallel Random-Access Machine (PRAM): –n processor, global shared memory model. –Models idealized parallel computers with zero synchronization or memory access overhead. –Used in parallel algorithm development and in scalability and complexity analysis. PRAM variants: More realistic models than pure PRAM –EREW-PRAM: Simultaneous memory reads or writes to/from the same memory location are not allowed. –CREW-PRAM: Simultaneous memory writes to the same location are not allowed. –ERCW-PRAM: Simultaneous reads from the same memory location are not allowed. –CRCW-PRAM: Concurrent reads or writes to/from the same memory location are allowed.

EECC722 - Shaaban #47 lec # 3 Fall Example: sum algorithm on P processor PRAM Input: Array A of size n = 2^k in shared memory. Initialized local variables: the order n, the number of processors p = 2^q ≤ n, and the processor number s. Output: The sum of the elements of A, stored in shared memory. begin 1. for j = 1 to l (= n/p) do Set B(l(s-1) + j) := A(l(s-1) + j) 2. for h = 1 to log n do 2.1 if (k - h - q ≥ 0) then for j = 2^(k-h-q)(s-1) + 1 to 2^(k-h-q)s do Set B(j) := B(2j-1) + B(2j) 2.2 else {if (s ≤ 2^(k-h)) then Set B(s) := B(2s-1) + B(2s)} 3. if (s = 1) then set S := B(1) end Running time analysis: Step 1 takes O(n/p): each processor executes n/p operations. The hth iteration of step 2 takes O(n/(2^h p)), since each processor has to perform n/(2^h p) operations. Step 3 takes O(1). Total running time: T(n) = O(n/p + log n).
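A sequential C sketch that simulates the algorithm's log-time combining tree (array contents and sizes are illustrative; on a real PRAM the inner loop of each round would run across processors in lockstep):

```c
/* Simulated PRAM sum: n = 2^k elements reduced pairwise in log2(n) rounds.
 * Each round sets B(j) := B(2j-1) + B(2j) until the sum sits in B(1). */
#include <stdio.h>

#define K 3
#define N (1 << K)          /* n = 8 */

int main(void) {
    double A[N + 1], B[N + 1];           /* 1-based indexing as in the slide */
    for (int i = 1; i <= N; i++) A[i] = (double)i;

    /* Step 1: copy A into the working array B (done by all processors). */
    for (int i = 1; i <= N; i++) B[i] = A[i];

    /* Step 2: log2(n) combining rounds; round h leaves n/2^h partial sums. */
    for (int h = 1; h <= K; h++) {
        int active = N >> h;             /* entries still being combined */
        for (int j = 1; j <= active; j++)
            B[j] = B[2 * j - 1] + B[2 * j];   /* would execute in parallel */
    }

    /* Step 3: processor 1 reads off the result. */
    printf("sum = %f (expected %d)\n", B[1], N * (N + 1) / 2);
    return 0;
}
```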

EECC722 - Shaaban #48 lec # 3 Fall Example: Sum Algorithm on P Processor PRAM [Figure: computation tree for n = 8, p = 4. Processors P1 through P4 first copy A(1) to A(8) into B(1) to B(8), then combine pairs level by level, one time unit per level, until P1 holds S = B(1). The operation represented by each node is executed by the processor indicated below the node. Processor allocation for computing the sum of 8 elements on a 4-processor PRAM.]

EECC722 - Shaaban #49 lec # 3 Fall The Power of The PRAM Model Well-developed techniques and algorithms to handle many computational problems exist for the PRAM model Removes algorithmic details regarding synchronization and communication, concentrating on the structural properties of the problem. Captures several important parameters of parallel computations. Operations performed in unit time, as well as processor allocation. The PRAM design paradigms are robust and many network algorithms can be directly derived from PRAM algorithms. It is possible to incorporate synchronization and communication into the shared-memory PRAM model.

EECC722 - Shaaban #50 lec # 3 Fall Performance of Parallel Algorithms Performance of a parallel algorithm is typically measured in terms of worst-case analysis. For problem Q with a PRAM algorithm that runs in time T(n) using P(n) processors, for an instance size of n: –The time-processor product C(n) = T(n) · P(n) represents the cost of the parallel algorithm. –For p < P(n), each of the T(n) parallel steps is simulated in O(P(n)/p) substeps. Total simulation takes O(T(n)P(n)/p). –The following four measures of performance are asymptotically equivalent: P(n) processors and T(n) time; C(n) = P(n)T(n) cost and T(n) time; O(T(n)P(n)/p) time for any number of processors p < P(n); O(C(n)/p + T(n)) time for any number of processors.

EECC722 - Shaaban #51 lec # 3 Fall Network Model of Message-Passing Multicomputers A network of processors can be viewed as a graph G(N,E) –Each node i ∈ N represents a processor –Each edge (i,j) ∈ E represents a two-way communication link between processors i and j. –Each processor is assumed to have its own local memory. –No shared memory is available. –Operation is synchronous or asynchronous (message passing). –Typical message-passing communication constructs: send(X,i): a copy of X is sent to processor Pi, and execution continues. receive(Y,j): execution is suspended until the data from processor Pj is received and stored in Y, then execution resumes.

EECC722 - Shaaban #52 lec # 3 Fall Network Model of Multicomputers Routing is concerned with delivering each message from source to destination over the network. Additional important network topology parameters: –The network diameter is the maximum distance between any pair of nodes. –The maximum degree of any node in G Example: –Linear array: P processors P 1, …, P p are connected in a linear array where: Processor P i is connected to P i-1 and P i+1 if they exist. Diameter is p-1; maximum degree is 2 –A ring is a linear array of processors where processors P 1 and P p are directly connected.

EECC722 - Shaaban #53 lec # 3 Fall A Four-Dimensional Hypercube Two processors are connected if their binary indices differ in one bit position.

EECC722 - Shaaban #54 lec # 3 Fall Example: Asynchronous Matrix Vector Product on a Ring Input: –n x n matrix A; vector x of order n –The processor number i; the number of processors p –The ith submatrix B = A(1:n, (i-1)r+1 : ir) of size n x r, where r = n/p –The ith subvector w = x((i-1)r+1 : ir) of size r Output: –Processor Pi computes the vector y = A1x1 + … + Aixi and passes the result to the right –Upon completion, P1 will hold the product Ax Begin 1. Compute the matrix-vector product z = Bw 2. If i = 1 then set y := 0 else receive(y, left) 3. Set y := y + z 4. send(y, right) 5. if i = 1 then receive(y, left) End Tcomp = k(n²/p); Tcomm = p(l + mn); T = Tcomp + Tcomm = k(n²/p) + p(l + mn)
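A hedged MPI sketch of the ring algorithm above (matrix and vector contents, sizes, and the particular point-to-point calls are illustrative assumptions): each rank multiplies its column block B by its subvector w, receives the running sum y from its left neighbor, adds its contribution, and passes y to the right; rank 0 finally receives the completed product.

```c
/* Ring matrix-vector product sketch: y = A*x with A split into column
 * blocks of width r = n/p; the partial result circulates around the ring. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int p, i, n = 64;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &i);
    int r = n / p, left = (i - 1 + p) % p, right = (i + 1) % p;

    double *B = malloc(n * r * sizeof *B);   /* local column block of A */
    double *w = malloc(r * sizeof *w);       /* local slice of x        */
    double *y = calloc(n, sizeof *y);        /* running result vector   */
    double *z = calloc(n, sizeof *z);        /* local contribution B*w  */
    for (int k = 0; k < n * r; k++) B[k] = 1.0;
    for (int k = 0; k < r; k++)     w[k] = 1.0;

    /* Step 1: local matrix-vector product z = B*w. */
    for (int row = 0; row < n; row++)
        for (int col = 0; col < r; col++)
            z[row] += B[row * r + col] * w[col];

    /* Steps 2-4: receive y from the left (except rank 0), add z, send right. */
    if (i != 0)
        MPI_Recv(y, n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (int row = 0; row < n; row++) y[row] += z[row];
    MPI_Send(y, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD);

    /* Step 5: rank 0 receives the completed product A*x from the last rank. */
    if (i == 0)
        MPI_Recv(y, n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(B); free(w); free(y); free(z);
    MPI_Finalize();
    return 0;
}
```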

EECC722 - Shaaban #55 lec # 3 Fall Creating a Parallel Program Assumption: Sequential algorithm to solve the problem is given –Or a different algorithm with more inherent parallelism is devised. –Most programming problems have several parallel solutions. The best solution may differ from that suggested by existing sequential algorithms. One must: –Identify work that can be done in parallel –Partition work and perhaps data among processes –Manage data access, communication and synchronization –Note: work includes computation, data access and I/O Main goal: Speedup (plus low programming effort and resource needs). Speedup(p) = Performance(p) / Performance(1). For a fixed problem: Speedup(p) = Time(1) / Time(p).

EECC722 - Shaaban #56 lec # 3 Fall Some Important Concepts Task: –Arbitrary piece of undecomposed work in parallel computation –Executed sequentially on a single processor; concurrency is only across tasks –E.g. a particle/cell in Barnes-Hut, a ray or ray group in Raytrace –Fine-grained versus coarse-grained tasks Process (thread): –Abstract entity that performs the tasks assigned to processes –Processes communicate and synchronize to perform their tasks Processor: –Physical engine on which a process executes –Processes virtualize the machine to the programmer: first write the program in terms of processes, then map processes to processors

EECC722 - Shaaban #57 lec # 3 Fall Levels of Parallelism in Program Execution Level 5 (coarse grain): Jobs or programs (multiprogramming). Level 4 (coarse grain): Subprograms, job steps or related parts of a program. Level 3 (medium grain): Procedures, subroutines, or co-routines. Level 2 (fine grain): Non-recursive loops or unfolded iterations. Level 1 (fine grain): Instructions or statements. Finer-grain levels offer a higher degree of parallelism but impose increasing communications demand and mapping/scheduling overhead.

EECC722 - Shaaban #58 lec # 3 Fall Hardware and Software Parallelism Hardware parallelism: –Defined by machine architecture, hardware multiplicity (number of processors available) and connectivity. –Often a function of cost/performance tradeoffs. –Characterized in a single processor by the number of instructions k issued in a single cycle (k-issue processor). –A multiprocessor system with n k-issue processors can handle a maximum of nk threads. Software parallelism: –Defined by the control and data dependence of programs. –Revealed in program profiling or the program flow graph. –A function of algorithm, programming style and compiler optimization.

EECC722 - Shaaban #59 lec # 3 Fall Computational Parallelism and Grain Size Grain size (granularity) is a measure of the amount of computation involved in a task in parallel computation: –Instruction Level: At instruction or statement level. 20 instructions grain size or less. For scientific applications, parallelism at this level ranges from 500 to 3000 concurrent statements. Manual parallelism detection is difficult but is assisted by parallelizing compilers. –Loop level: Iterative loop operations. Typically, 500 instructions or less per iteration. Optimized on vector parallel computers. Independent successive loop operations can be vectorized or run in SIMD mode.

EECC722 - Shaaban #60 lec # 3 Fall Computational Parallelism and Grain Size –Procedure level: Medium-size grain; task, procedure, subroutine levels. Less than 2000 instructions. Parallelism is more difficult to detect than at finer-grain levels. Less communication required than with fine-grain parallelism. Relies heavily on effective operating system support. –Subprogram level: Job and subprogram level. Thousands of instructions per grain. Often scheduled on message-passing multicomputers. –Job (program) level, or Multiprogramming: Independent programs executed on a parallel computer. Grain size in tens of thousands of instructions.

EECC722 - Shaaban #61 lec # 3 Fall Example Motivating Problems: Simulating Ocean Currents –Model as two-dimensional grids –Discretize in space and time finer spatial and temporal resolution => greater accuracy –Many different computations per time step set up and solve equations –Concurrency across and within grid computations

EECC722 - Shaaban #62 lec # 3 Fall Example Motivating Problems: Simulating Galaxy Evolution –Simulate the interactions of many stars evolving over time –Computing forces is expensive –O(n²) brute force approach –Hierarchical methods take advantage of the force law: F = G m1 m2 / r² Many time-steps, plenty of concurrency across stars within one time-step.

EECC722 - Shaaban #63 lec # 3 Fall Example Motivating Problems: Rendering Scenes by Ray Tracing –Shoot rays into scene through pixels in image plane –Follow their paths They bounce around as they strike objects They generate new rays: ray tree per input ray –Result is color and opacity for that pixel –Parallelism across rays All above case studies have abundant concurrency

EECC722 - Shaaban #64 lec # 3 Fall Limited Concurrency: Amdahl’s Law –Most fundamental limitation on parallel speedup. –If fraction s of sequential execution is inherently serial, speedup <= 1/s –Example: 2-phase calculation: sweep over an n-by-n grid and do some independent computation; sweep again and add each value to a global sum –Time for first phase = n²/p –Second phase serialized at the global variable, so time = n² –Speedup <= 2n² / (n²/p + n²), or at most 2 –Possible trick: divide second phase into two: accumulate into a private sum during the sweep; add per-process private sums into the global sum –Parallel time is n²/p + n²/p + p, and speedup is at best 2n²p / (2n² + p²)
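A small numeric check of the two versions of the example (grid size and processor count below are arbitrary choices):

```c
/* Amdahl's-law example: 2-phase grid computation, sequential time 2*n^2. */
#include <stdio.h>

int main(void) {
    double n = 1000.0, p = 64.0;
    double seq = 2.0 * n * n;

    /* Version 1: phase 1 parallel (n^2/p), phase 2 serialized at the global sum (n^2). */
    double t1 = n * n / p + n * n;
    /* Version 2: private partial sums, then p additions into the global sum. */
    double t2 = n * n / p + n * n / p + p;

    printf("speedup v1 = %.2f (bounded by 2)\n", seq / t1);
    printf("speedup v2 = %.2f (close to p = %.0f)\n", seq / t2, p);
    return 0;
}
```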

EECC722 - Shaaban #65 lec # 3 Fall Amdahl’s Law Example: A Pictorial Depiction [Figure: work done concurrently versus time for the 2-phase grid computation: (a) both phases serial (n² each); (b) first phase parallelized across p processors (n²/p) with the second phase still serialized (n²); (c) both phases parallelized (n²/p each) followed by a short serial accumulation of the p partial sums.]

EECC722 - Shaaban #66 lec # 3 Fall Parallel Performance Metrics Degree of Parallelism (DOP) For a given time period, DOP reflects the number of processors in a specific parallel computer actually executing a particular parallel program. Average Parallelism: –given maximum parallelism = m –n homogeneous processors –computing capacity of a single processor Δ –Total amount of work W (instructions, computations): W = Δ ∫ from t1 to t2 of DOP(t) dt, or as a discrete summation W = Δ Σ (i = 1 to m) i·ti, where ti is the total time that DOP = i and Σ (i = 1 to m) ti = t2 − t1. The average parallelism A: A = (1/(t2 − t1)) ∫ from t1 to t2 of DOP(t) dt; in discrete form A = (Σ (i = 1 to m) i·ti) / (Σ (i = 1 to m) ti).

EECC722 - Shaaban #67 lec # 3 Fall Example: Concurrency Profile of A Divide-and-Conquer Algorithm Execution observed from t1 = 2 to t2 = 27 Peak parallelism m = 8 A = (1x5 + 2x3 + 3x4 + 4x6 + 5x2 + 6x2 + 8x3) / (5 + 3 + 4 + 6 + 2 + 2 + 3) = 93/25 = 3.72 [Figure: degree of parallelism (DOP) plotted against time between t1 and t2.]

EECC722 - Shaaban #68 lec # 3 Fall Parallel Performance Example The execution time T for three parallel programs is given in terms of processor count P and problem size N. In each case, we assume that the total computation work performed by an optimal sequential algorithm scales as N + N². 1 For the first parallel algorithm: T = N + N²/P. This algorithm partitions the computationally demanding O(N²) component of the algorithm but replicates the O(N) component on every processor. There are no other sources of overhead. 2 For the second parallel algorithm: T = (N + N²)/P + 100. This algorithm optimally divides all the computation among all processors but introduces an additional cost of 100. 3 For the third parallel algorithm: T = (N + N²)/P + 0.6P². This algorithm also partitions all the computation optimally but introduces an additional cost of 0.6P². All three algorithms achieve a speedup of about 10.8 when P = 12 and N = 100. However, they behave differently in other situations, as shown next. With N = 100, all three algorithms perform poorly for larger P, although Algorithm (3) does noticeably worse than the other two. When N = 1000, Algorithm (2) is much better than Algorithm (1) for larger P.

EECC722 - Shaaban #69 lec # 3 Fall Parallel Performance Example (continued) All algorithms achieve: Speedup = 10.8 when P = 12 and N = 100. With N = 1000, Algorithm (2) performs much better than Algorithm (1) for larger P. Algorithm 1: T = N + N²/P. Algorithm 2: T = (N + N²)/P + 100. Algorithm 3: T = (N + N²)/P + 0.6P².
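A short program (hypothetical, simply tabulating the execution-time models above; the constant 100 in the second model is the inferred overhead discussed above) that evaluates the three formulas and the resulting speedups over a range of processor counts:

```c
/* Evaluate the three parallel-time models against the sequential work N + N^2. */
#include <stdio.h>

int main(void) {
    double N = 100.0, P;
    double seq = N + N * N;                 /* optimal sequential algorithm */
    for (P = 4; P <= 256; P *= 2) {
        double T1 = N + N * N / P;                    /* replicates the O(N) part   */
        double T2 = (N + N * N) / P + 100.0;          /* fixed extra cost (assumed) */
        double T3 = (N + N * N) / P + 0.6 * P * P;    /* overhead growing as P^2    */
        printf("P=%3.0f  S1=%6.2f  S2=%6.2f  S3=%6.2f\n",
               P, seq / T1, seq / T2, seq / T3);
    }
    return 0;
}
```

Rerunning with N = 1000 reproduces the slide's observation that Algorithm (2) scales better than Algorithm (1) at larger P, while Algorithm (3) degrades as its P² overhead grows.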

EECC722 - Shaaban #70 lec # 3 Fall Steps in Creating a Parallel Program 4 steps: Decomposition, Assignment, Orchestration, Mapping –Done by programmer or system software (compiler, runtime,...) –Issues are the same, so assume programmer does it all explicitly

EECC722 - Shaaban #71 lec # 3 Fall Decomposition Break up computation into concurrent tasks to be divided among processes: –Tasks may become available dynamically. –No. of available tasks may vary with time. –Together with assignment, also called partitioning. i.e. identify concurrency and decide level at which to exploit it. Grain-size problem: –To determine the number and size of grains or tasks in a parallel program. –Problem and machine-dependent. –Solutions involve tradeoffs between parallelism, communication and scheduling/synchronization overhead. Grain packing: –To combine multiple fine-grain nodes into a coarse grain node (task) to reduce communication delays and overall scheduling overhead. Goal: Enough tasks to keep processes busy, but not too many –No. of tasks available at a time is upper bound on achievable speedup

EECC722 - Shaaban #72 lec # 3 Fall Assignment Specifying mechanisms to divide work up among processes: –Together with decomposition, also called partitioning. –Balance workload, reduce communication and management cost Partitioning problem: –To partition a program into parallel branches, modules to give the shortest possible execution on a specific parallel architecture. Structured approaches usually work well: –Code inspection (parallel loops) or understanding of application. –Well-known heuristics. –Static versus dynamic assignment. As programmers, we worry about partitioning first: –Usually independent of architecture or programming model. –But cost and complexity of using primitives may affect decisions.

EECC722 - Shaaban #73 lec # 3 Fall Orchestration –Naming data. –Structuring communication. –Synchronization. –Organizing data structures and scheduling tasks temporally. Goals –Reduce cost of communication and synchronization as seen by processors –Preserve locality of data reference (incl. data structure organization) –Schedule tasks to satisfy dependences early –Reduce overhead of parallelism management Closest to architecture (and programming model & language). –Choices depend a lot on communication abstraction, efficiency of primitives. –Architects should provide appropriate primitives efficiently.

EECC722 - Shaaban #74 lec # 3 Fall Mapping Each task is assigned to a processor in a manner that attempts to satisfy the competing goals of maximizing processor utilization and minimizing communication costs. Mapping can be specified statically or determined at runtime by load-balancing algorithms (dynamic scheduling). Two aspects of mapping: –Which processes will run on the same processor, if necessary –Which process runs on which particular processor (mapping to a network topology) One extreme: space-sharing –Machine divided into subsets, only one app at a time in a subset –Processes can be pinned to processors, or left to the OS. Another extreme: complete resource management control to the OS –OS uses the performance techniques we will discuss later. Real world is between the two. –User specifies desires in some aspects, system may ignore

EECC722 - Shaaban #75 lec # 3 Fall Program Partitioning Example Example 2.4 page 64 Fig 2.6 page 65 Fig 2.7 page 66 In Advanced Computer Architecture, Hwang

EECC722 - Shaaban #76 lec # 3 Fall Static Multiprocessor Scheduling Dynamic multiprocessor scheduling is an NP-hard problem. Node Duplication: to eliminate idle time and communication delays, some nodes may be duplicated in more than one processor. Fig. 2.8 page 67 Example: 2.5 page 68 In Advanced Computer Architecture, Hwang

EECC722 - Shaaban #77 lec # 3 Fall

EECC722 - Shaaban #78 lec # 3 Fall Successive Refinement Partitioning is often independent of architecture, and may be done first: –View machine as a collection of communicating processors Balancing the workload. Reducing the amount of inherent communication Reducing extra work. –Above three issues are conflicting. Then deal with interactions with architecture: –View machine as an extended memory hierarchy Extra communication due to architectural interactions. Cost of communication depends on how it is structured –This may inspire changes in partitioning.

EECC722 - Shaaban #79 lec # 3 Fall Partitioning for Performance Balancing the workload and reducing wait time at synch points Reducing inherent communication. Reducing extra work. These algorithmic issues have extreme trade-offs: –Minimize communication => run on 1 processor. => extreme load imbalance. –Maximize load balance => random assignment of tiny tasks. => no control over communication. –Good partition may imply extra work to compute or manage it The goal is to compromise between the above extremes –Fortunately, often not difficult in practice.

EECC722 - Shaaban #80 lec # 3 Fall Load Balancing and Synch Wait Time Reduction Limit on speedup: Speedup_problem(p) ≤ Sequential Work / Max Work on any Processor –Work includes data access and other costs. –Not just equal work, but must be busy at the same time. Four parts to load balancing and reducing synch wait time: 1. Identify enough concurrency. 2. Decide how to manage it. 3. Determine the granularity at which to exploit it. 4. Reduce serialization and cost of synchronization.

EECC722 - Shaaban #81 lec # 3 Fall Managing Concurrency Static versus Dynamic techniques Static: –Algorithmic assignment based on input; won’t change –Low runtime overhead –Computation must be predictable –Preferable when applicable (except in multiprogrammed/heterogeneous environment) Dynamic: –Adapt at runtime to balance load –Can increase communication and reduce locality –Can increase task management overheads

EECC722 - Shaaban #82 lec # 3 Fall Dynamic Load Balancing To achieve the best performance of a parallel computing system running a parallel problem, it’s essential to maximize processor utilization by distributing the computation load evenly, or balancing the load, among the available processors. Optimal static load balancing (optimal mapping or scheduling) is an intractable NP-complete problem, except for specific problems on specific networks. Hence heuristics are usually used to select processors for processes. Even the best static mapping may not offer the best execution time due to changing conditions at runtime, so load balancing may need to be done dynamically. The methods used for balancing the computational load dynamically among processors can be broadly classified as: 1. Centralized dynamic load balancing. 2. Decentralized dynamic load balancing.

EECC722 - Shaaban #83 lec # 3 Fall Processor Load Balance & Performance

EECC722 - Shaaban #84 lec # 3 Fall Dynamic Tasking with Task Queues Centralized versus distributed queues. Task stealing with distributed queues. –Can compromise communication and locality, and increase synchronization. –Whom to steal from, how many tasks to steal,... –Termination detection –Maximum imbalance related to size of task
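A compact sketch of a centralized task queue with pthreads (task granularity, queue size, and the work function are placeholders, not from the lecture): workers repeatedly dequeue tasks under a lock until the queue empties, which balances load naturally at the cost of contention on the shared queue, the main reason distributed queues and task stealing are used at scale.

```c
/* Centralized dynamic tasking: one shared queue, workers pull tasks until empty. */
#include <pthread.h>
#include <stdio.h>

#define NTASKS   256
#define NWORKERS 4

static int next_task = 0;                      /* index of next undispatched task */
static double results[NTASKS];
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static double do_work(int t) {                 /* placeholder task body */
    double x = 0.0;
    for (int i = 0; i < 10000; i++) x += (double)t * 1e-6;
    return x;
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);            /* contention point of the scheme */
        int t = next_task < NTASKS ? next_task++ : -1;
        pthread_mutex_unlock(&qlock);
        if (t < 0) break;                      /* queue empty: terminate */
        results[t] = do_work(t);
    }
    return NULL;
}

int main(void) {
    pthread_t th[NWORKERS];
    for (int i = 0; i < NWORKERS; i++) pthread_create(&th[i], NULL, worker, NULL);
    for (int i = 0; i < NWORKERS; i++) pthread_join(th[i], NULL);
    printf("last result = %f\n", results[NTASKS - 1]);
    return 0;
}
```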

EECC722 - Shaaban #85 lec # 3 Fall Implications of Load Balancing Extends the speedup limit expression to: Speedup_problem(p) ≤ Sequential Work / Max (Work + Synch Wait Time) Generally the responsibility of software. Architecture can support task stealing and synch efficiently –Fine-grained communication, low-overhead access to queues Efficient support allows smaller tasks, better load balancing –Naming logically shared data in the presence of task stealing Need to access data of stolen tasks, esp. multiply-stolen tasks => Hardware shared address space advantageous –Efficient support for point-to-point communication

EECC722 - Shaaban #86 lec # 3 Fall Reducing Inherent Communication Measure: communication to computation ratio Focus here is on inherent communication –Determined by assignment of tasks to processes –Actual communication can be greater Assign tasks that access the same data to the same process Optimal solution to reduce communication and achieve an optimal load balance is NP-hard in the general case Simple heuristic solutions work well in practice: –Due to the specific structure of applications.

EECC722 - Shaaban #87 lec # 3 Fall Implications of Communication-to-Computation Ratio Architects must examine application needs If the denominator is execution time, the ratio gives average BW needs If it is operation count, it gives extremes in impact of latency and bandwidth –Latency: assume no latency hiding –Bandwidth: assume all latency hidden –Reality is somewhere in between Actual impact of communication depends on structure and cost as well: –Need to keep communication balanced across processors as well. Speedup < Sequential Work / Max (Work + Synch Wait Time + Comm Cost)

EECC722 - Shaaban #88 lec # 3 Fall Reducing Extra Work (Overheads) Common sources of extra work: –Computing a good partition e.g. partitioning in Barnes-Hut or sparse matrix –Using redundant computation to avoid communication –Task, data and process management overhead Applications, languages, runtime systems, OS –Imposing structure on communication Coalescing messages, allowing effective naming Architectural Implications: –Reduce need by making communication and orchestration efficient Speedup < Sequential Work / Max (Work + Synch Wait Time + Comm Cost + Extra Work)

EECC722 - Shaaban #89 lec # 3 Fall Extended Memory-Hierarchy View of Multiprocessors Levels in extended hierarchy: –Registers, caches, local memory, remote memory (topology) –Glued together by communication architecture –Levels communicate at a certain granularity of data transfer Need to exploit spatial and temporal locality in hierarchy –Otherwise extra communication may also be caused –Especially important since communication is expensive

EECC722 - Shaaban #90 lec # 3 Fall Extended Hierarchy Idealized view: local cache hierarchy + single main memory But reality is more complex: –Centralized Memory: caches of other processors –Distributed Memory: some local, some remote; + network topology –Management of levels: Caches managed by hardware Main memory depends on programming model –SAS: data movement between local and remote transparent –Message passing: explicit –Improve performance through architecture or program locality –Tradeoff with parallelism; need good node performance and parallelism

EECC722 - Shaaban #91 lec # 3 Fall Artifactual Communication in Extended Hierarchy Accesses not satisfied in the local portion cause communication –Inherent communication, implicit or explicit, causes transfers Determined by the program –Artifactual communication: Determined by program implementation and architecture interactions Poor allocation of data across distributed memories Unnecessary data in a transfer Unnecessary transfers due to system granularities Redundant communication of data Finite replication capacity (in cache or main memory) –Inherent communication assumes unlimited capacity, small transfers, perfect knowledge of what is needed. –More on artifactual communication later; first consider replication-induced communication further.

EECC722 - Shaaban #92 lec # 3 Fall Structuring Communication Given the amount of communication (inherent or artifactual), the goal is to reduce cost. Cost of communication as seen by a process: C = f * (o + l + n_c/(m·B) + t_c − overlap), where f = frequency of messages; o = overhead per message (at both ends); l = network delay per message; n_c = total data sent; m = number of messages; B = bandwidth along path (determined by network, NI, assist); t_c = cost induced by contention per message; overlap = amount of latency hidden by overlap with computation or communication. –The portion in parentheses is the cost of a message (as seen by the processor) –That portion, ignoring overlap, is the latency of a message –Goal: reduce terms in latency and increase overlap
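The cost model can be turned into a one-line helper; the parameter values below are made up solely to exercise the formula:

```c
/* Communication cost per the slide's model:
 * C = f * (o + l + n_c/(m*B) + t_c - overlap) */
#include <stdio.h>

static double comm_cost(double f, double o, double l, double n_c,
                        double m, double B, double t_c, double overlap) {
    return f * (o + l + n_c / (m * B) + t_c - overlap);
}

int main(void) {
    /* Example: 1000 messages, 5 us overhead, 2 us delay, 8 MB total data,
     * 1 GB/s path, 1 us contention per message, 3 us overlapped. */
    double cost = comm_cost(1000.0, 5e-6, 2e-6, 8e6, 1000.0, 1e9, 1e-6, 3e-6);
    printf("total communication cost = %.6f s\n", cost);
    return 0;
}
```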

EECC722 - Shaaban #93 lec # 3 Fall Reducing Overhead Can reduce no. of messages m or overhead per message o o is usually determined by hardware or system software –Program should try to reduce m by coalescing messages –More control when communication is explicit Coalescing data into larger messages: –Easy for regular, coarse-grained communication –Can be difficult for irregular, naturally fine-grained communication May require changes to algorithm and extra work –coalescing data and determining what and to whom to send Will discuss more in implications for programming models later

EECC722 - Shaaban #94 lec # 3 Fall Reducing Network Delay Network delay component = f·h·t_h, where h = number of hops traversed in the network and t_h = link + switch latency per hop. Reducing f: Communicate less, or make messages larger. Reducing h: –Map communication patterns to network topology e.g. nearest-neighbor on mesh and ring; all-to-all –How important is this? Used to be a major focus of parallel algorithms Depends on the number of processors, and how t_h compares with other components Less important on modern machines –Overheads, processor count, multiprogramming

EECC722 - Shaaban #95 lec # 3 Fall Overlapping Communication Cannot afford to stall for high latencies Overlap with computation or communication to hide latency Requires extra concurrency (slackness), higher bandwidth Techniques: –Prefetching –Block data transfer –Proceeding past communication –Multithreading

EECC722 - Shaaban #96 lec # 3 Fall Summary of Tradeoffs Different goals often have conflicting demands –Load Balance Fine-grain tasks Random or dynamic assignment –Communication Usually coarse grain tasks Decompose to obtain locality: not random/dynamic –Extra Work Coarse grain tasks Simple assignment –Communication Cost: Big transfers: amortize overhead and latency Small transfers: reduce contention

EECC722 - Shaaban #97 lec # 3 Fall Relationship Between Perspectives

EECC722 - Shaaban #98 lec # 3 Fall Summary Speedup_prob(p) = (Busy(1) + Data(1)) / (Busy_useful(p) + Data_local(p) + Synch(p) + Data_remote(p) + Busy_overhead(p)) –Goal is to reduce the denominator components –Both programmer and system have a role to play –Architecture cannot do much about load imbalance or too much communication –But it can: reduce the incentive for creating ill-behaved programs (efficient naming, communication and synchronization); reduce artifactual communication; provide efficient naming for flexible assignment; allow effective overlapping of communication

EECC722 - Shaaban #99 lec # 3 Fall Generic Distributed Memory Organization Network bandwidth? Bandwidth demand? –Independent processes? –Communicating processes? Latency? O(log₂ P) increase? Cost scalability of the system? Multi-stage interconnection network (MIN)? Custom-designed? Node: O(10) bus-based SMP. Custom-designed CPU? Node/system integration level? How far? Cray-on-a-Chip? SMP-on-a-Chip? OS supported? Network protocols? Communication assist: extent of functionality? Message transactions? DMA? Global virtual shared address space?